| id (string, 11 to 95 chars) | author (string, 3 to 36 chars) | task_category (16 classes) | tags (sequence, length 1 to 4.05k) | created_time (timestamp[s], 2022-03-02 23:29:04 to 2025-03-18 02:34:30) | last_modified (timestamp[s], 2021-05-13 19:09:22 to 2025-03-18 03:19:02) | downloads (int64, 0 to 15.6M) | likes (int64, 0 to 4.86k) | README (string, 246 to 1.01M chars) | matched_task (sequence, length 1 to 8) | matched_bigbio_names (sequence, length 1 to 8) |
---|---|---|---|---|---|---|---|---|---|---|
fblgit/UNA-SOLAR-10.7B-Instruct-v1.0 | fblgit | text-generation | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"UNA",
"single-turn",
"conversational",
"en",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:quantized:upstage/SOLAR-10.7B-Instruct-v1.0",
"doi:10.57967/hf/1514",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-12-19T07:07:07 | 2023-12-22T16:34:29 | 102 | 16 | ---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
language:
- en
library_name: transformers
license: cc-by-nc-nd-4.0
tags:
- alignment-handbook
- generated_from_trainer
- UNA
- single-turn
model-index:
- name: UNA-SOLAR-10.7B-Instruct-v1.0
results: []
---
# UNA: Uniform Neural Alignment
SFT Further:
- Scheduler: linear
- Learning rate: 2e-5
Merges:
- Fan in: `0:2`
- Fan out: `-4:`
- Intermediary layers: `1/1/1/0/1/1/0/1/0/1/1/0/1/1/0`, where the On/Off pattern is used as a form of regularisation.
## Quants
* [ggml-model-q5_k_m.gguf](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0/resolve/main/ggml-model-q5_k_m.gguf?download=true)
* [ggml-model-q6_k.gguf](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0/resolve/main/ggml-model-q6_k.gguf?download=true)
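As a rough illustration of running one of these quants locally, a minimal sketch is shown below. It assumes the `llama-cpp-python` bindings are installed, that the quant file has already been downloaded into the working directory, and that SOLAR's single-turn `### User:` / `### Assistant:` prompt format applies; none of this is prescribed by the card itself.

```python
# Hypothetical local-inference sketch; the file path, context size and prompt format are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="ggml-model-q5_k_m.gguf", n_ctx=4096)
prompt = "### User:\nExplain uniform neural alignment in one sentence.\n\n### Assistant:\n"
out = llm(prompt, max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```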
## Libraries:
- Transformers 4.35.0-UNA
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
## Evals (LM-Evaluation Harness)
`mt-bench`:
```
Mode: single
Input file: data/mt_bench/model_judgment/gpt-4_single.jsonl
########## First turn ##########
score
model turn
gpt-4 1 8.95625
claude-v1 1 8.15000
gpt-3.5-turbo 1 8.07500
LUNA-SOLARkrautLM-Instruct 1 7.93750
UNA-SOLAR-10.7B-Instruct-v1.0 1 7.80625
vicuna-33b-v1.3 1 7.45625
wizardlm-30b 1 7.13125
tulu-30b 1 7.01875
vicuna-13b-v1.3 1 6.81250
guanaco-65b 1 6.78125
nous-hermes-13b 1 6.43125
alpaca-13b 1 4.97500
rwkv-4-raven-14b 1 4.74375
llama-13b 1 3.26250
########## Second turn ##########
score
model turn
gpt-4 2 9.025000
gpt-3.5-turbo 2 7.812500
claude-v1 2 7.650000
UNA-SOLAR-10.7B-Instruct-v1.0 2 7.237500
LUNA-SOLARkrautLM-Instruct 2 6.987500
wizardlm-30b 2 6.887500
vicuna-33b-v1.3 2 6.787500
guanaco-65b 2 6.037500
vicuna-13b-v1.3 2 5.962500
tulu-30b 2 5.850000
nous-hermes-13b 2 4.664557
alpaca-13b 2 4.087500
rwkv-4-raven-14b 2 3.225000
llama-13b 2 1.950000
########## Average ##########
score
model
gpt-4 8.990625
gpt-3.5-turbo 7.943750
claude-instant-v1 7.905660
claude-v1 7.900000
UNA-SOLAR-10.7B-Instruct-v1.0 7.521875
LUNA-SOLARkrautLM-Instruct 7.462500
vicuna-33b-v1.3 7.121875
wizardlm-30b 7.009375
Llama-2-70b-chat 6.856250
Llama-2-13b-chat 6.650000
guanaco-33b 6.528125
tulu-30b 6.434375
guanaco-65b 6.409375
oasst-sft-7-llama-30b 6.409375
palm-2-chat-bison-001 6.400000
mpt-30b-chat 6.393750
vicuna-13b-v1.3 6.387500
wizardlm-13b 6.353125
Llama-2-7b-chat 6.268750
vicuna-7b-v1.3 5.996875
baize-v2-13b 5.750000
nous-hermes-13b 5.553459
mpt-7b-chat 5.459119
gpt4all-13b-snoozy 5.452830
koala-13b 5.350000
mpt-30b-instruct 5.218750
falcon-40b-instruct 5.168750
h2ogpt-oasst-open-llama-13b 4.625000
alpaca-13b 4.531250
chatglm-6b 4.500000
oasst-sft-4-pythia-12b 4.318750
rwkv-4-raven-14b 3.984375
dolly-v2-12b 3.275000
fastchat-t5-3b 3.040625
stablelm-tuned-alpha-7b 2.753125
llama-13b 2.606250
```
`big-refactor` branch:
```
hf (pretrained=fblgit/UNA-SOLAR-10.7B-Instruct-v1.0), gen_kwargs: (None), limit: None, num_fewshot: 25, batch_size: auto (32)
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml |none | 25|acc |0.6954|± |0.0134|
| | |none | 25|acc_norm|0.7167|± |0.0132|
hf (pretrained=fblgit/UNA-SOLAR-10.7B-Instruct-v1.0), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric |Value| |Stderr|
|-----|-------|----------|-----:|-----------|----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.671|± |0.0129|
hf (pretrained=fblgit/UNA-SOLAR-10.7B-Instruct-v1.0), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto (64)
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|-------|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7297|_ |0.0149|
hf (pretrained=fblgit/UNA-SOLAR-10.7B-Instruct-v1.0), gen_kwargs: (None), limit: None, num_fewshot: 10, batch_size: auto (32)
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|-------|------|-----:|--------|-----:|---|-----:|
|hellaswag|Yaml |none | 10|acc |0.7091|± |0.0045|
| | |none | 10|acc_norm|0.8821|± |0.0032|
hf (pretrained=fblgit/UNA-SOLAR-10.7B-Instruct-v1.0,dtype=float16), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto (32)
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|-------|------|-----:|----------|-----:|---|-----:|
|boolq |Yaml |none | 0|acc |0.8807|_ |0.0057|
|lambada_openai|Yaml |none | 0|perplexity|3.2452|_ |0.0778|
| | |none | 0|acc |0.7207|_ |0.0063|
|piqa |Yaml |none | 0|acc |0.8020|_ |0.0093|
| | |none | 0|acc_norm |0.8009|_ |0.0093|
|sciq |Yaml |none | 0|acc |0.9730|_ |0.0051|
| | |none | 0|acc_norm |0.9630|_ |0.0060|
|winogrande |Yaml |none | 0|acc |0.7577|_ |0.0120|
hf (pretrained=fblgit/UNA-SOLAR-10.7B-Instruct-v1.0,dtype=float16), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto (64)
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------|-------|------|-----:|--------|-----:|---|-----:|
|mathqa |Yaml |none | 0|acc |0.3474|_ |0.0087|
| | |none | 0|acc_norm|0.3568|_ |0.0088|
|pubmedqa|Yaml |none | 0|acc |0.5400|_ |0.0223|
hf (pretrained=fblgit/UNA-SOLAR-10.7B-Instruct-v1.0,dtype=float16), gen_kwargs: (), limit: None, num_fewshot: 0, batch_size: auto
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|------------------------------------------------------|-------|------|-----:|-----------|-----:|---|-----:|
|bbh_fewshot |N/A |none | 0|exact_match|0.4660|_ |0.1771|
| - bbh_fewshot_boolean_expressions |Yaml |none | 0|exact_match|0.8160|_ |0.0246|
| - bbh_fewshot_causal_judgement |Yaml |none | 0|exact_match|0.4973|_ |0.0367|
| - bbh_fewshot_date_understanding |Yaml |none | 0|exact_match|0.4840|_ |0.0317|
| - bbh_fewshot_disambiguation_qa |Yaml |none | 0|exact_match|0.6520|_ |0.0302|
| - bbh_fewshot_dyck_languages |Yaml |none | 0|exact_match|0.2040|_ |0.0255|
| - bbh_fewshot_formal_fallacies |Yaml |none | 0|exact_match|0.5280|_ |0.0316|
| - bbh_fewshot_geometric_shapes |Yaml |none | 0|exact_match|0.3360|_ |0.0299|
| - bbh_fewshot_hyperbaton |Yaml |none | 0|exact_match|0.5520|_ |0.0315|
| - bbh_fewshot_logical_deduction_five_objects |Yaml |none | 0|exact_match|0.4520|_ |0.0315|
| - bbh_fewshot_logical_deduction_seven_objects |Yaml |none | 0|exact_match|0.3920|_ |0.0309|
| - bbh_fewshot_logical_deduction_three_objects |Yaml |none | 0|exact_match|0.6200|_ |0.0308|
| - bbh_fewshot_movie_recommendation |Yaml |none | 0|exact_match|0.6640|_ |0.0299|
| - bbh_fewshot_multistep_arithmetic_two |Yaml |none | 0|exact_match|0.0080|_ |0.0056|
| - bbh_fewshot_navigate |Yaml |none | 0|exact_match|0.6280|_ |0.0306|
| - bbh_fewshot_object_counting |Yaml |none | 0|exact_match|0.3960|_ |0.0310|
| - bbh_fewshot_penguins_in_a_table |Yaml |none | 0|exact_match|0.4726|_ |0.0415|
| - bbh_fewshot_reasoning_about_colored_objects |Yaml |none | 0|exact_match|0.5320|_ |0.0316|
| - bbh_fewshot_ruin_names |Yaml |none | 0|exact_match|0.5680|_ |0.0314|
| - bbh_fewshot_salient_translation_error_detection |Yaml |none | 0|exact_match|0.5480|_ |0.0315|
| - bbh_fewshot_snarks |Yaml |none | 0|exact_match|0.5169|_ |0.0376|
| - bbh_fewshot_sports_understanding |Yaml |none | 0|exact_match|0.8320|_ |0.0237|
| - bbh_fewshot_temporal_sequences |Yaml |none | 0|exact_match|0.5520|_ |0.0315|
| - bbh_fewshot_tracking_shuffled_objects_five_objects |Yaml |none | 0|exact_match|0.1480|_ |0.0225|
| - bbh_fewshot_tracking_shuffled_objects_seven_objects|Yaml |none | 0|exact_match|0.1720|_ |0.0239|
| - bbh_fewshot_tracking_shuffled_objects_three_objects|Yaml |none | 0|exact_match|0.2760|_ |0.0283|
| - bbh_fewshot_web_of_lies |Yaml |none | 0|exact_match|0.4760|_ |0.0316|
| - bbh_fewshot_word_sorting |Yaml |none | 0|exact_match|0.2840|_ |0.0286|
| Groups |Version|Filter|n-shot| Metric |Value| |Stderr|
|-----------|-------|------|-----:|-----------|----:|---|-----:|
|bbh_fewshot|N/A |none | 0|exact_match|0.466|_ |0.1771|
hf (pretrained=fblgit/UNA-SOLAR-10.7B-Instruct-v1.0), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto (16)
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|---------------------------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.6513|± |0.1221|
| - humanities |N/A |none | 5|acc |0.6077|± |0.1185|
| - formal_logic |Yaml |none | 5|acc |0.4444|± |0.0444|
| - high_school_european_history |Yaml |none | 5|acc |0.8121|± |0.0305|
| - high_school_us_history |Yaml |none | 5|acc |0.8431|± |0.0255|
| - high_school_world_history |Yaml |none | 5|acc |0.8523|± |0.0231|
| - international_law |Yaml |none | 5|acc |0.7851|± |0.0375|
| - jurisprudence |Yaml |none | 5|acc |0.7870|± |0.0396|
| - logical_fallacies |Yaml |none | 5|acc |0.7546|± |0.0338|
| - moral_disputes |Yaml |none | 5|acc |0.7370|± |0.0237|
| - moral_scenarios |Yaml |none | 5|acc |0.4101|± |0.0164|
| - philosophy |Yaml |none | 5|acc |0.7170|± |0.0256|
| - prehistory |Yaml |none | 5|acc |0.7840|± |0.0229|
| - professional_law |Yaml |none | 5|acc |0.4941|± |0.0128|
| - world_religions |Yaml |none | 5|acc |0.7895|± |0.0313|
| - other |N/A |none | 5|acc |0.7116|± |0.0939|
| - business_ethics |Yaml |none | 5|acc |0.7600|± |0.0429|
| - clinical_knowledge |Yaml |none | 5|acc |0.6792|± |0.0287|
| - college_medicine |Yaml |none | 5|acc |0.6590|± |0.0361|
| - global_facts |Yaml |none | 5|acc |0.3400|± |0.0476|
| - human_aging |Yaml |none | 5|acc |0.6816|± |0.0313|
| - management |Yaml |none | 5|acc |0.8350|± |0.0368|
| - marketing |Yaml |none | 5|acc |0.8547|± |0.0231|
| - medical_genetics |Yaml |none | 5|acc |0.7000|± |0.0461|
| - miscellaneous |Yaml |none | 5|acc |0.8020|± |0.0142|
| - nutrition |Yaml |none | 5|acc |0.7418|± |0.0251|
| - professional_accounting |Yaml |none | 5|acc |0.5071|± |0.0298|
| - professional_medicine |Yaml |none | 5|acc |0.7500|± |0.0263|
| - virology |Yaml |none | 5|acc |0.5843|± |0.0384|
| - social_sciences |N/A |none | 5|acc |0.7537|± |0.0681|
| - econometrics |Yaml |none | 5|acc |0.5000|± |0.0470|
| - high_school_geography |Yaml |none | 5|acc |0.8586|± |0.0248|
| - high_school_government_and_politics|Yaml |none | 5|acc |0.9016|± |0.0215|
| - high_school_macroeconomics |Yaml |none | 5|acc |0.6615|± |0.0240|
| - high_school_microeconomics |Yaml |none | 5|acc |0.7311|± |0.0288|
| - high_school_psychology |Yaml |none | 5|acc |0.8404|± |0.0157|
| - human_sexuality |Yaml |none | 5|acc |0.7328|± |0.0388|
| - professional_psychology |Yaml |none | 5|acc |0.6814|± |0.0189|
| - public_relations |Yaml |none | 5|acc |0.6909|± |0.0443|
| - security_studies |Yaml |none | 5|acc |0.7469|± |0.0278|
| - sociology |Yaml |none | 5|acc |0.8308|± |0.0265|
| - us_foreign_policy |Yaml |none | 5|acc |0.8900|± |0.0314|
| - stem |N/A |none | 5|acc |0.5569|± |0.1380|
| - abstract_algebra |Yaml |none | 5|acc |0.4100|± |0.0494|
| - anatomy |Yaml |none | 5|acc |0.6222|± |0.0419|
| - astronomy |Yaml |none | 5|acc |0.7368|± |0.0358|
| - college_biology |Yaml |none | 5|acc |0.8056|± |0.0331|
| - college_chemistry |Yaml |none | 5|acc |0.4700|± |0.0502|
| - college_computer_science |Yaml |none | 5|acc |0.5100|± |0.0502|
| - college_mathematics |Yaml |none | 5|acc |0.2800|± |0.0451|
| - college_physics |Yaml |none | 5|acc |0.3431|± |0.0472|
| - computer_security |Yaml |none | 5|acc |0.7400|± |0.0441|
| - conceptual_physics |Yaml |none | 5|acc |0.6340|± |0.0315|
| - electrical_engineering |Yaml |none | 5|acc |0.6000|± |0.0408|
| - elementary_mathematics |Yaml |none | 5|acc |0.4815|± |0.0257|
| - high_school_biology |Yaml |none | 5|acc |0.8032|± |0.0226|
| - high_school_chemistry |Yaml |none | 5|acc |0.4877|± |0.0352|
| - high_school_computer_science |Yaml |none | 5|acc |0.7200|± |0.0451|
| - high_school_mathematics |Yaml |none | 5|acc |0.3815|± |0.0296|
| - high_school_physics |Yaml |none | 5|acc |0.3576|± |0.0391|
| - high_school_statistics |Yaml |none | 5|acc |0.5602|± |0.0339|
| - machine_learning |Yaml |none | 5|acc |0.4643|± |0.0473|
| Groups |Version|Filter|n-shot|Metric|Value | |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.6513|± |0.1221|
| - humanities |N/A |none | 5|acc |0.6077|± |0.1185|
| - other |N/A |none | 5|acc |0.7116|± |0.0939|
| - social_sciences|N/A |none | 5|acc |0.7537|± |0.0681|
| - stem |N/A |none | 5|acc |0.5569|± |0.1380|
```
## Citations
Credit to [Upstage.AI](https://huggingface.co/upstage) for its awesome base model; this is merely a UNA of it. UNA can only refine what is already in there :)
If you find UNA-SOLAR useful, cite and support the authors. | [
"TRANSLATION"
] | [
"PUBMEDQA",
"SCIQ"
] |
RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2309.06085",
"arxiv:2311.07911",
"arxiv:2306.05685",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-03T06:12:25 | 2024-08-03T08:40:00 | 101 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-8b-cpt-sea-lionv2-instruct - GGUF
- Model creator: https://huggingface.co/aisingapore/
- Original model: https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-8b-cpt-sea-lionv2-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama3-8b-cpt-sea-lionv2-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama3-8b-cpt-sea-lionv2-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama3-8b-cpt-sea-lionv2-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama3-8b-cpt-sea-lionv2-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama3-8b-cpt-sea-lionv2-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.IQ4_NL.gguf) | IQ4_NL | 1.63GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q4_K_S.gguf) | Q4_K_S | 3.0GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q4_K.gguf) | Q4_K | 1.39GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama3-8b-cpt-sea-lionv2-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2-instruct.Q8_0.gguf) | Q8_0 | 7.95GB |
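A minimal sketch for fetching one of the quants listed above programmatically is shown below; it assumes the `huggingface_hub` package is installed, and the chosen filename is simply one entry from the table.

```python
# Hypothetical download of a single quant from this repository via huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-instruct-gguf",
    filename="llama3-8b-cpt-sea-lionv2-instruct.Q4_K_M.gguf",
)
print(local_path)  # path to the cached GGUF file
```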
Original model description:
---
language:
- en
- id
- ta
- th
- vi
license: llama3
---
# Llama3 8B CPT SEA-LIONv2 Instruct
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama3 8B CPT SEA-LIONv2 Instruct is a multilingual model which has been fine-tuned with around **100,000 English instruction-completion pairs** alongside a smaller pool of around **50,000 instruction-completion pairs** from other ASEAN languages, such as Indonesian, Thai and Vietnamese.
These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive and high quality datasets.
SEA-LION stands for _Southeast Asian Languages In One Network_.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages:** English, Indonesian, Thai, Vietnamese, Tamil
- **License:** [Llama3 Community License](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
## Model Details
### Model Description
We performed instruction tuning in English and also in ASEAN languages such as Indonesian, Thai and Vietnamese on our [continued pre-trained Llama3 CPT 8B SEA-LIONv2](https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2-base), a decoder model using the Llama3 architecture, to create Llama3 8B SEA-LIONv2 Instruct.
The model has a context length of 8192.
### Benchmark Performance
We evaluated Llama3 8B SEA-LIONv2 Instruct on both general language capabilities and instruction-following capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed the [BHASA evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: BHASA is implemented following a strict answer format, and only spaces and punctuation are cleaned. For tasks where options are provided, the answer should only include one of the pre-defined options, nothing else. If the model continues to generate more tokens (e.g. to explain its answer), it will be considered a wrong response. For the F1 score metric (as used in Sentiment Analysis and Toxicity Detection), all answers that do not fall under the pre-defined labels will be treated as a separate label (to mark it as a wrong answer) and included in the calculations so that the model is penalized for not generating one of the pre-defined labels.
The evaluation was done zero-shot with native prompts and only a sample of 100-1000 instances for each dataset was used as per the setting described in the paper.
**BHASA**
To be released.
#### Instruction-following Capabilities
Since Llama3 8B SEA-LIONv2 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, [IFEval](https://arxiv.org/abs/2311.07911) and [MT-Bench](https://arxiv.org/abs/2306.05685).
As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localize and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
**IFEval**
IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. The metric used is accuracy normalized by language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
| **Model** | **Indonesian(%)** | **Vietnamese(%)** | **English(%)** |
|:---------------------------------:|:------------------:|:------------------:|:---------------:|
| Meta-Llama-3.1-8B-Instruct | 67.62 | 67.62 | 84.76 |
| Qwen2-7B-Instruct | 62.86 | 64.76 | 70.48 |
| llama3-8b-cpt-sea-lionv2-instruct | 60.95 | 65.71 | 69.52 |
| aya-23-8B | 58.10 | 56.19 | 66.67 |
| SeaLLMs-v3-7B-Chat | 55.24 | 52.38 | 66.67 |
| Mistral-7B-Instruct-v0.3 | 42.86 | 39.05 | 69.52 |
| Meta-Llama-3-8B-Instruct | 26.67 | 20.95 | 80.00 |
| Sailor-7B-Chat | 25.71 | 24.76 | 41.90 |
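As an illustrative sketch of the language-normalised accuracy described above, the snippet below scores an example as correct only when the instruction is followed and the response is in the expected language; the field names are hypothetical and not the actual IFEval schema.

```python
# Hypothetical scoring sketch for the language-normalised accuracy metric.
def ifeval_score(example):
    followed = example["follows_instructions"]                            # assumed boolean from IFEval checks
    right_language = example["response_lang"] == example["target_lang"]   # assumed fields
    return 1.0 if (followed and right_language) else 0.0

def language_normalised_accuracy(examples):
    return sum(ifeval_score(e) for e in examples) / len(examples)
```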
**MT-Bench**
MT-Bench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use `gpt-4-1106-preview` as the judge model and compare against `gpt-3.5-turbo-0125` as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category (Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction)). A tie is given a score of 0.5.
| **Model** | **Indonesian(%)** | **Vietnamese(%)** | **English(%)** |
|:---------------------------------:|:-----------------:|:-----------------:|:--------------:|
| SeaLLMs-v3-7B-Chat | 58.33 | 65.56 | 42.94 |
| Qwen2-7B-Instruct | 49.78 | 55.65 | 59.68 |
| llama3-8b-cpt-sea-lionv2-instruct | 53.13 | 51.68 | 51.00 |
| Meta-Llama-3.1-8B-Instruct | 41.09 | 47.69 | 61.79 |
| aya-23-8B | 49.90 | 54.61 | 41.63 |
| Meta-Llama-3-8B-Instruct | 40.29 | 43.69 | 56.38 |
| Mistral-7B-Instruct-v0.3 | 34.74 | 20.24 | 52.40 |
| Sailor-7B-Chat | 29.05 | 31.39 | 18.98 |
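A rough sketch of the weighted win-rate computation described above is given below; the data layout is hypothetical, ties score 0.5, and the per-category rates are averaged across the seven categories.

```python
# Hypothetical computation of the weighted win rate against the gpt-3.5-turbo-0125 baseline.
CATEGORIES = ["Math", "Reasoning", "STEM", "Humanities", "Roleplay", "Writing", "Extraction"]
POINTS = {"win": 1.0, "tie": 0.5, "loss": 0.0}

def category_win_rate(outcomes):
    # outcomes: list of "win" / "tie" / "loss" judgments for one category
    return sum(POINTS[o] for o in outcomes) / len(outcomes)

def weighted_win_rate(per_category_outcomes):
    # per_category_outcomes: dict mapping each category name to its list of outcomes
    return sum(category_win_rate(per_category_outcomes[c]) for c in CATEGORIES) / len(CATEGORIES)
```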
### Usage
SEA-LION can be run using the 🤗 Transformers library
```python
# Please use transformers==4.43.2
import transformers
import torch
model_id = "aisingapore/llama3-8b-cpt-sea-lionv2-instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### Prompting Guide
_Coming soon_
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
## Limitations
### Safety
Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Llama3 8B CPT SEA-LIONv2 Instruct was fine-tuned on 8x A100-40GB GPUs using parameter-efficient fine-tuning in the form of LoRA.
## Data
Llama3 8B CPT SEA-LIONv2 Instruct was trained on a wide range of instructions that were manually and stringently verified by our team. A large portion of the effort was dedicated to ensuring that each instruction-completion pair that the model sees is of high quality and any errors were corrected and rewritten by native speakers or else dropped from our mix.
In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
Link to dataset: _coming soon_
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Choa Esther<br>
Cheng Nicholas<br>
Huang Yuli<br>
Lau Wayne<br>
Lee Chwan Ren<br>
Leong Wai Yi<br>
Leong Wei Qi<br>
Li Yier<br>
Liu Bing Jie Darius<br>
Lovenia Holy<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Brandon<br>
Ong Tat-Wee David<br>
Ong Zhi Hao<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Teng Walter<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
| [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] |
croissantllm/CroissantLLMChat-v0.1-GGUF | croissantllm | text-generation | [
"gguf",
"legal",
"code",
"text-generation-inference",
"art",
"text-generation",
"fr",
"en",
"dataset:croissantllm/croissant_dataset",
"dataset:croissantllm/CroissantLLM-2201-sft",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"arxiv:2402.00786",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-02-08T10:07:39 | 2024-04-29T12:12:14 | 100 | 3 | ---
datasets:
- croissantllm/croissant_dataset
- croissantllm/CroissantLLM-2201-sft
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
license: mit
pipeline_tag: text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLMChat - GGUF (190k steps + Chat)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 190k steps (2.99T tokens) and a final Chat finetuning phase.
https://arxiv.org/abs/2402.00786
For best performance, it should be used with a temperature of 0.3 or more, and with the exact template described below:
```python
chat = [
{"role": "user", "content": "Que puis-je faire à Marseille en hiver?"},
]
chat_input = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
corresponding to:
```python
chat_input = """<|im_start|>user
{USER QUERY}<|im_end|>
<|im_start|>assistant\n"""
```
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bibtex
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Usage
This model is a Chat model, that is, it is fine-tuned for chat interactions and works best with the provided template.
#### With generate
This might require a stopping criterion on the <|im_end|> token.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/CroissantLLMChat-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
generation_args = {
"max_new_tokens": 256,
"do_sample": True,
"temperature": 0.3,
"top_p": 0.90,
"top_k": 40,
"repetition_penalty": 1.05,
"eos_token_id": [tokenizer.eos_token_id, 32000],
}
chat = [
{"role": "user", "content": "Qui est le président francais actuel ?"},
]
chat_input = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(chat_input, return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, **generation_args)
print(tokenizer.decode(tokens[0]))
# print tokens individually
print([(tokenizer.decode([tok]), tok) for tok in tokens[0].tolist()])
```
## Model limitations
Evaluation results indicate the model is strong in its size category: it offers decent performance on writing-based tasks and internal knowledge, and very strong performance on translation tasks. The small size of the CroissantLLM model, however, hinders its capacity to perform more complex reasoning-based tasks, at least in a zero- or few-shot manner in its generalist base or chat-model versions. This is aligned with other models of this size and underlines the importance of scale for more abstract tasks.
#### Knowledge Cutoff
The model training dataset has a data cutoff date corresponding to the November 2023 Wikipedia dump. This is the de facto knowledge cutoff date for our base model, although a lot of information dates back further. Updated versions can be trained through continued pre-training or subsequent fine-tuning.
#### Multilingual performance.
CroissantLLM is mostly a French and English model. Code performance is relatively limited, and although some amount of data from other languages is included within the SlimPajama training set, out-of-the-box performance in other languages is not to be expected, although some European languages do work quite well.
#### Hallucinations.
CroissantLLM can hallucinate and output factually incorrect data, especially regarding complex topics. This is to be expected given the small model size, and hallucination rates seem lower than those of most models in the same size category, although no quantitative assessments have been conducted outside of MT-Bench experiments. | [
"TRANSLATION"
] | [
"CRAFT"
] |
zeroMN/SHMT | zeroMN | audio-text-to-text | [
"transformers",
"transformer",
"multimodal",
"vqa",
"text",
"audio",
"audio-text-to-text",
"en",
"zh",
"dataset:zeroMN/nlp_corpus_zh",
"dataset:zeroMN/hanlp_date-zh",
"dataset:nyu-mll/glue",
"dataset:aps/super_glue",
"dataset:facebook/anli",
"dataset:tasksource/babi_nli",
"dataset:zeroMN/AVEdate",
"dataset:sick",
"dataset:snli",
"dataset:scitail",
"dataset:hans",
"dataset:alisawuffles/WANLI",
"dataset:tasksource/recast",
"dataset:sileod/probability_words_nli",
"dataset:joey234/nan-nli",
"dataset:pietrolesci/nli_fever",
"dataset:pietrolesci/breaking_nli",
"dataset:pietrolesci/conj_nli",
"dataset:pietrolesci/fracas",
"dataset:pietrolesci/dialogue_nli",
"dataset:pietrolesci/mpe",
"dataset:pietrolesci/dnc",
"dataset:pietrolesci/recast_white",
"dataset:pietrolesci/joci",
"dataset:pietrolesci/robust_nli",
"dataset:pietrolesci/robust_nli_is_sd",
"dataset:pietrolesci/robust_nli_li_ts",
"dataset:pietrolesci/gen_debiased_nli",
"dataset:pietrolesci/add_one_rte",
"dataset:tasksource/imppres",
"dataset:hlgd",
"dataset:paws",
"dataset:medical_questions_pairs",
"dataset:Anthropic/model-written-evals",
"dataset:truthful_qa",
"dataset:nightingal3/fig-qa",
"dataset:tasksource/bigbench",
"dataset:blimp",
"dataset:cos_e",
"dataset:cosmos_qa",
"dataset:dream",
"dataset:openbookqa",
"dataset:qasc",
"dataset:quartz",
"dataset:quail",
"dataset:head_qa",
"dataset:sciq",
"dataset:social_i_qa",
"dataset:wiki_hop",
"dataset:wiqa",
"dataset:piqa",
"dataset:hellaswag",
"dataset:pkavumba/balanced-copa",
"dataset:12ml/e-CARE",
"dataset:art",
"dataset:winogrande",
"dataset:codah",
"dataset:ai2_arc",
"dataset:definite_pronoun_resolution",
"dataset:swag",
"dataset:math_qa",
"dataset:metaeval/utilitarianism",
"dataset:mteb/amazon_counterfactual",
"dataset:SetFit/insincere-questions",
"dataset:SetFit/toxic_conversations",
"dataset:turingbench/TuringBench",
"dataset:trec",
"dataset:tals/vitaminc",
"dataset:hope_edi",
"dataset:strombergnlp/rumoureval_2019",
"dataset:ethos",
"dataset:tweet_eval",
"dataset:discovery",
"dataset:pragmeval",
"dataset:silicone",
"dataset:lex_glue",
"dataset:papluca/language-identification",
"dataset:imdb",
"dataset:rotten_tomatoes",
"dataset:ag_news",
"dataset:yelp_review_full",
"dataset:financial_phrasebank",
"dataset:poem_sentiment",
"dataset:dbpedia_14",
"dataset:amazon_polarity",
"dataset:app_reviews",
"dataset:hate_speech18",
"dataset:sms_spam",
"dataset:humicroedit",
"dataset:snips_built_in_intents",
"dataset:hate_speech_offensive",
"dataset:yahoo_answers_topics",
"dataset:pacovaldez/stackoverflow-questions",
"dataset:zapsdcn/hyperpartisan_news",
"dataset:zapsdcn/sciie",
"dataset:zapsdcn/citation_intent",
"dataset:go_emotions",
"dataset:allenai/scicite",
"dataset:liar",
"dataset:relbert/lexical_relation_classification",
"dataset:tasksource/linguisticprobing",
"dataset:tasksource/crowdflower",
"dataset:metaeval/ethics",
"dataset:emo",
"dataset:google_wellformed_query",
"dataset:tweets_hate_speech_detection",
"dataset:has_part",
"dataset:blog_authorship_corpus",
"dataset:launch/open_question_type",
"dataset:health_fact",
"dataset:commonsense_qa",
"dataset:mc_taco",
"dataset:ade_corpus_v2",
"dataset:prajjwal1/discosense",
"dataset:circa",
"dataset:PiC/phrase_similarity",
"dataset:copenlu/scientific-exaggeration-detection",
"dataset:quarel",
"dataset:mwong/fever-evidence-related",
"dataset:numer_sense",
"dataset:dynabench/dynasent",
"dataset:raquiba/Sarcasm_News_Headline",
"dataset:sem_eval_2010_task_8",
"dataset:demo-org/auditor_review",
"dataset:medmcqa",
"dataset:RuyuanWan/Dynasent_Disagreement",
"dataset:RuyuanWan/Politeness_Disagreement",
"dataset:RuyuanWan/SBIC_Disagreement",
"dataset:RuyuanWan/SChem_Disagreement",
"dataset:RuyuanWan/Dilemmas_Disagreement",
"dataset:lucasmccabe/logiqa",
"dataset:wiki_qa",
"dataset:tasksource/cycic_classification",
"dataset:tasksource/cycic_multiplechoice",
"dataset:tasksource/sts-companion",
"dataset:tasksource/commonsense_qa_2.0",
"dataset:tasksource/lingnli",
"dataset:tasksource/monotonicity-entailment",
"dataset:tasksource/arct",
"dataset:tasksource/scinli",
"dataset:tasksource/naturallogic",
"dataset:onestop_qa",
"dataset:demelin/moral_stories",
"dataset:corypaik/prost",
"dataset:aps/dynahate",
"dataset:metaeval/syntactic-augmentation-nli",
"dataset:tasksource/autotnli",
"dataset:lasha-nlp/CONDAQA",
"dataset:openai/webgpt_comparisons",
"dataset:Dahoas/synthetic-instruct-gptj-pairwise",
"dataset:metaeval/scruples",
"dataset:metaeval/wouldyourather",
"dataset:metaeval/defeasible-nli",
"dataset:tasksource/help-nli",
"dataset:metaeval/nli-veridicality-transitivity",
"dataset:tasksource/lonli",
"dataset:tasksource/dadc-limit-nli",
"dataset:ColumbiaNLP/FLUTE",
"dataset:tasksource/strategy-qa",
"dataset:openai/summarize_from_feedback",
"dataset:tasksource/folio",
"dataset:yale-nlp/FOLIO",
"dataset:tasksource/tomi-nli",
"dataset:tasksource/avicenna",
"dataset:stanfordnlp/SHP",
"dataset:GBaker/MedQA-USMLE-4-options-hf",
"dataset:sileod/wikimedqa",
"dataset:declare-lab/cicero",
"dataset:amydeng2000/CREAK",
"dataset:tasksource/mutual",
"dataset:inverse-scaling/NeQA",
"dataset:inverse-scaling/quote-repetition",
"dataset:inverse-scaling/redefine-math",
"dataset:tasksource/puzzte",
"dataset:tasksource/implicatures",
"dataset:race",
"dataset:tasksource/race-c",
"dataset:tasksource/spartqa-yn",
"dataset:tasksource/spartqa-mchoice",
"dataset:tasksource/temporal-nli",
"dataset:riddle_sense",
"dataset:tasksource/clcd-english",
"dataset:maximedb/twentyquestions",
"dataset:metaeval/reclor",
"dataset:tasksource/counterfactually-augmented-imdb",
"dataset:tasksource/counterfactually-augmented-snli",
"dataset:metaeval/cnli",
"dataset:tasksource/boolq-natural-perturbations",
"dataset:metaeval/acceptability-prediction",
"dataset:metaeval/equate",
"dataset:tasksource/ScienceQA_text_only",
"dataset:Jiangjie/ekar_english",
"dataset:tasksource/implicit-hate-stg1",
"dataset:metaeval/chaos-mnli-ambiguity",
"dataset:IlyaGusev/headline_cause",
"dataset:tasksource/logiqa-2.0-nli",
"dataset:tasksource/oasst2_dense_flat",
"dataset:sileod/mindgames",
"dataset:metaeval/ambient",
"dataset:metaeval/path-naturalness-prediction",
"dataset:civil_comments",
"dataset:AndyChiang/cloth",
"dataset:AndyChiang/dgen",
"dataset:tasksource/I2D2",
"dataset:webis/args_me",
"dataset:webis/Touche23-ValueEval",
"dataset:tasksource/starcon",
"dataset:PolyAI/banking77",
"dataset:tasksource/ConTRoL-nli",
"dataset:tasksource/tracie",
"dataset:tasksource/sherliic",
"dataset:tasksource/sen-making",
"dataset:tasksource/winowhy",
"dataset:tasksource/robustLR",
"dataset:CLUTRR/v1",
"dataset:tasksource/logical-fallacy",
"dataset:tasksource/parade",
"dataset:tasksource/cladder",
"dataset:tasksource/subjectivity",
"dataset:tasksource/MOH",
"dataset:tasksource/VUAC",
"dataset:tasksource/TroFi",
"dataset:sharc_modified",
"dataset:tasksource/conceptrules_v2",
"dataset:metaeval/disrpt",
"dataset:tasksource/zero-shot-label-nli",
"dataset:tasksource/com2sense",
"dataset:tasksource/scone",
"dataset:tasksource/winodict",
"dataset:tasksource/fool-me-twice",
"dataset:tasksource/monli",
"dataset:tasksource/corr2cause",
"dataset:lighteval/lsat_qa",
"dataset:tasksource/apt",
"dataset:zeroshot/twitter-financial-news-sentiment",
"dataset:tasksource/icl-symbol-tuning-instruct",
"dataset:tasksource/SpaceNLI",
"dataset:sihaochen/propsegment",
"dataset:HannahRoseKirk/HatemojiBuild",
"dataset:tasksource/regset",
"dataset:tasksource/esci",
"dataset:lmsys/chatbot_arena_conversations",
"dataset:neurae/dnd_style_intents",
"dataset:hitachi-nlp/FLD.v2",
"dataset:tasksource/SDOH-NLI",
"dataset:allenai/scifact_entailment",
"dataset:tasksource/feasibilityQA",
"dataset:tasksource/simple_pair",
"dataset:tasksource/AdjectiveScaleProbe-nli",
"dataset:tasksource/resnli",
"dataset:tasksource/SpaRTUN",
"dataset:tasksource/ReSQ",
"dataset:tasksource/semantic_fragments_nli",
"dataset:MoritzLaurer/dataset_train_nli",
"dataset:tasksource/stepgame",
"dataset:tasksource/nlgraph",
"dataset:tasksource/oasst2_pairwise_rlhf_reward",
"dataset:tasksource/hh-rlhf",
"dataset:tasksource/ruletaker",
"dataset:qbao775/PARARULE-Plus",
"dataset:tasksource/proofwriter",
"dataset:tasksource/logical-entailment",
"dataset:tasksource/nope",
"dataset:tasksource/LogicNLI",
"dataset:kiddothe2b/contract-nli",
"dataset:AshtonIsNotHere/nli4ct_semeval2024",
"dataset:tasksource/lsat-ar",
"dataset:tasksource/lsat-rc",
"dataset:AshtonIsNotHere/biosift-nli",
"dataset:tasksource/brainteasers",
"dataset:Anthropic/persuasion",
"dataset:erbacher/AmbigNQ-clarifying-question",
"dataset:tasksource/SIGA-nli",
"dataset:unigram/FOL-nli",
"dataset:tasksource/goal-step-wikihow",
"dataset:GGLab/PARADISE",
"dataset:tasksource/doc-nli",
"dataset:tasksource/mctest-nli",
"dataset:tasksource/patent-phrase-similarity",
"dataset:tasksource/natural-language-satisfiability",
"dataset:tasksource/idioms-nli",
"dataset:tasksource/lifecycle-entailment",
"dataset:nvidia/HelpSteer",
"dataset:nvidia/HelpSteer2",
"dataset:sadat2307/MSciNLI",
"dataset:pushpdeep/UltraFeedback-paired",
"dataset:tasksource/AES2-essay-scoring",
"dataset:tasksource/english-grading",
"dataset:tasksource/wice",
"dataset:Dzeniks/hover",
"dataset:sileod/missing-item-prediction",
"dataset:tasksource/tasksource_dpo_pairs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2025-01-06T04:33:44 | 2025-01-20T12:06:32 | 99 | 1 | ---
datasets:
- zeroMN/nlp_corpus_zh
- zeroMN/hanlp_date-zh
- nyu-mll/glue
- aps/super_glue
- facebook/anli
- tasksource/babi_nli
- zeroMN/AVEdate
- sick
- snli
- scitail
- hans
- alisawuffles/WANLI
- tasksource/recast
- sileod/probability_words_nli
- joey234/nan-nli
- pietrolesci/nli_fever
- pietrolesci/breaking_nli
- pietrolesci/conj_nli
- pietrolesci/fracas
- pietrolesci/dialogue_nli
- pietrolesci/mpe
- pietrolesci/dnc
- pietrolesci/recast_white
- pietrolesci/joci
- pietrolesci/robust_nli
- pietrolesci/robust_nli_is_sd
- pietrolesci/robust_nli_li_ts
- pietrolesci/gen_debiased_nli
- pietrolesci/add_one_rte
- tasksource/imppres
- hlgd
- paws
- medical_questions_pairs
- Anthropic/model-written-evals
- truthful_qa
- nightingal3/fig-qa
- tasksource/bigbench
- blimp
- cos_e
- cosmos_qa
- dream
- openbookqa
- qasc
- quartz
- quail
- head_qa
- sciq
- social_i_qa
- wiki_hop
- wiqa
- piqa
- hellaswag
- pkavumba/balanced-copa
- 12ml/e-CARE
- art
- winogrande
- codah
- ai2_arc
- definite_pronoun_resolution
- swag
- math_qa
- metaeval/utilitarianism
- mteb/amazon_counterfactual
- SetFit/insincere-questions
- SetFit/toxic_conversations
- turingbench/TuringBench
- trec
- tals/vitaminc
- hope_edi
- strombergnlp/rumoureval_2019
- ethos
- tweet_eval
- discovery
- pragmeval
- silicone
- lex_glue
- papluca/language-identification
- imdb
- rotten_tomatoes
- ag_news
- yelp_review_full
- financial_phrasebank
- poem_sentiment
- dbpedia_14
- amazon_polarity
- app_reviews
- hate_speech18
- sms_spam
- humicroedit
- snips_built_in_intents
- hate_speech_offensive
- yahoo_answers_topics
- pacovaldez/stackoverflow-questions
- zapsdcn/hyperpartisan_news
- zapsdcn/sciie
- zapsdcn/citation_intent
- go_emotions
- allenai/scicite
- liar
- relbert/lexical_relation_classification
- tasksource/linguisticprobing
- tasksource/crowdflower
- metaeval/ethics
- emo
- google_wellformed_query
- tweets_hate_speech_detection
- has_part
- blog_authorship_corpus
- launch/open_question_type
- health_fact
- commonsense_qa
- mc_taco
- ade_corpus_v2
- prajjwal1/discosense
- circa
- PiC/phrase_similarity
- copenlu/scientific-exaggeration-detection
- quarel
- mwong/fever-evidence-related
- numer_sense
- dynabench/dynasent
- raquiba/Sarcasm_News_Headline
- sem_eval_2010_task_8
- demo-org/auditor_review
- medmcqa
- RuyuanWan/Dynasent_Disagreement
- RuyuanWan/Politeness_Disagreement
- RuyuanWan/SBIC_Disagreement
- RuyuanWan/SChem_Disagreement
- RuyuanWan/Dilemmas_Disagreement
- lucasmccabe/logiqa
- wiki_qa
- tasksource/cycic_classification
- tasksource/cycic_multiplechoice
- tasksource/sts-companion
- tasksource/commonsense_qa_2.0
- tasksource/lingnli
- tasksource/monotonicity-entailment
- tasksource/arct
- tasksource/scinli
- tasksource/naturallogic
- onestop_qa
- demelin/moral_stories
- corypaik/prost
- aps/dynahate
- metaeval/syntactic-augmentation-nli
- tasksource/autotnli
- lasha-nlp/CONDAQA
- openai/webgpt_comparisons
- Dahoas/synthetic-instruct-gptj-pairwise
- metaeval/scruples
- metaeval/wouldyourather
- metaeval/defeasible-nli
- tasksource/help-nli
- metaeval/nli-veridicality-transitivity
- tasksource/lonli
- tasksource/dadc-limit-nli
- ColumbiaNLP/FLUTE
- tasksource/strategy-qa
- openai/summarize_from_feedback
- tasksource/folio
- yale-nlp/FOLIO
- tasksource/tomi-nli
- tasksource/avicenna
- stanfordnlp/SHP
- GBaker/MedQA-USMLE-4-options-hf
- sileod/wikimedqa
- declare-lab/cicero
- amydeng2000/CREAK
- tasksource/mutual
- inverse-scaling/NeQA
- inverse-scaling/quote-repetition
- inverse-scaling/redefine-math
- tasksource/puzzte
- tasksource/implicatures
- race
- tasksource/race-c
- tasksource/spartqa-yn
- tasksource/spartqa-mchoice
- tasksource/temporal-nli
- riddle_sense
- tasksource/clcd-english
- maximedb/twentyquestions
- metaeval/reclor
- tasksource/counterfactually-augmented-imdb
- tasksource/counterfactually-augmented-snli
- metaeval/cnli
- tasksource/boolq-natural-perturbations
- metaeval/acceptability-prediction
- metaeval/equate
- tasksource/ScienceQA_text_only
- Jiangjie/ekar_english
- tasksource/implicit-hate-stg1
- metaeval/chaos-mnli-ambiguity
- IlyaGusev/headline_cause
- tasksource/logiqa-2.0-nli
- tasksource/oasst2_dense_flat
- sileod/mindgames
- metaeval/ambient
- metaeval/path-naturalness-prediction
- civil_comments
- AndyChiang/cloth
- AndyChiang/dgen
- tasksource/I2D2
- webis/args_me
- webis/Touche23-ValueEval
- tasksource/starcon
- PolyAI/banking77
- tasksource/ConTRoL-nli
- tasksource/tracie
- tasksource/sherliic
- tasksource/sen-making
- tasksource/winowhy
- tasksource/robustLR
- CLUTRR/v1
- tasksource/logical-fallacy
- tasksource/parade
- tasksource/cladder
- tasksource/subjectivity
- tasksource/MOH
- tasksource/VUAC
- tasksource/TroFi
- sharc_modified
- tasksource/conceptrules_v2
- metaeval/disrpt
- tasksource/zero-shot-label-nli
- tasksource/com2sense
- tasksource/scone
- tasksource/winodict
- tasksource/fool-me-twice
- tasksource/monli
- tasksource/corr2cause
- lighteval/lsat_qa
- tasksource/apt
- zeroshot/twitter-financial-news-sentiment
- tasksource/icl-symbol-tuning-instruct
- tasksource/SpaceNLI
- sihaochen/propsegment
- HannahRoseKirk/HatemojiBuild
- tasksource/regset
- tasksource/esci
- lmsys/chatbot_arena_conversations
- neurae/dnd_style_intents
- hitachi-nlp/FLD.v2
- tasksource/SDOH-NLI
- allenai/scifact_entailment
- tasksource/feasibilityQA
- tasksource/simple_pair
- tasksource/AdjectiveScaleProbe-nli
- tasksource/resnli
- tasksource/SpaRTUN
- tasksource/ReSQ
- tasksource/semantic_fragments_nli
- MoritzLaurer/dataset_train_nli
- tasksource/stepgame
- tasksource/nlgraph
- tasksource/oasst2_pairwise_rlhf_reward
- tasksource/hh-rlhf
- tasksource/ruletaker
- qbao775/PARARULE-Plus
- tasksource/proofwriter
- tasksource/logical-entailment
- tasksource/nope
- tasksource/LogicNLI
- kiddothe2b/contract-nli
- AshtonIsNotHere/nli4ct_semeval2024
- tasksource/lsat-ar
- tasksource/lsat-rc
- AshtonIsNotHere/biosift-nli
- tasksource/brainteasers
- Anthropic/persuasion
- erbacher/AmbigNQ-clarifying-question
- tasksource/SIGA-nli
- unigram/FOL-nli
- tasksource/goal-step-wikihow
- GGLab/PARADISE
- tasksource/doc-nli
- tasksource/mctest-nli
- tasksource/patent-phrase-similarity
- tasksource/natural-language-satisfiability
- tasksource/idioms-nli
- tasksource/lifecycle-entailment
- nvidia/HelpSteer
- nvidia/HelpSteer2
- sadat2307/MSciNLI
- pushpdeep/UltraFeedback-paired
- tasksource/AES2-essay-scoring
- tasksource/english-grading
- tasksource/wice
- Dzeniks/hover
- sileod/missing-item-prediction
- tasksource/tasksource_dpo_pairs
language:
- en
- zh
library_name: transformers
license: apache-2.0
metrics:
- accuracy
- bleu
- wer
pipeline_tag: audio-text-to-text
tags:
- multimodal
- vqa
- text
- audio
widget:
- text: My name is Sylvain and I live in Paris
example_title: Parisian
- text: My name is Sarah and I live in London
example_title: Londoner
model-index:
- name: Evolutionary Multi-Modal Model
results:
- task:
type: vqa
name: Visual Question Answering
dataset:
name: Synthetic Multimodal Dataset
type: synthetic-dataset
split: test
metrics:
- type: accuracy
value: 85
---
### Model Sources
Code, audio, text, and natural language inputs should be handled separately when used with the model, because the model uses separate tokenizers and vocabularies to achieve the best results in special cases.
- **Repository:** [https://zeromn-zeromn-shmt.hf.space](https://zeromn-zeromn-shmt.hf.space)
- **Kaggle:** [https://www.kaggle.com/models/zeroeva/evolutionary-multi-modal](https://www.kaggle.com/models/zeroeva/evolutionary-multi-modal)
- **Demo:** [https://zeromn-zeromn-shmt.hf.space](https://zeromn-zeromn-shmt.hf.space)
# Model Card for the Evolutionary Multi-Modal Model
<script
type="module"
src="https://gradio.s3-us-west-2.amazonaws.com/5.12.0/gradio.js"
></script>
<gradio-app src="https://zeromn-zeromn-shmt.hf.space"></gradio-app>
### Model test: breast_cancer_wisconsin_original
```python
from ucimlrepo import fetch_ucirepo

# fetch dataset
breast_cancer_wisconsin_original = fetch_ucirepo(id=15)

# data (as pandas dataframes)
X = breast_cancer_wisconsin_original.data.features
y = breast_cancer_wisconsin_original.data.targets

# metadata
print(breast_cancer_wisconsin_original.metadata)

# variable information
print(breast_cancer_wisconsin_original.variables)
```
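One way numbers like those in the report below could be produced is sketched here as a continuation of the snippet above; the imputation, train/test split and classifier choice are assumptions, not the author's recipe.

```python
# Hypothetical continuation: train a simple classifier on the fetched data and
# print a classification report (split, imputation and model are assumptions).
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X_train, X_test, y_train, y_test = train_test_split(
    X, y.values.ravel(), test_size=0.2, random_state=0
)
clf = make_pipeline(SimpleImputer(strategy="median"), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```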
```
              precision    recall  f1-score   support

           0       0.93      0.99      0.96        79
           1       0.98      0.90      0.94        58

    accuracy                           0.95       137
```
This model, named `Evolutionary Multi-Modal Model`, is a multimodal transformer designed to handle a variety of tasks including vision and audio processing. It is built on top of the `adapter-transformers` and `transformers` libraries and is intended to be a versatile base model for both direct use and fine-tuning.
- **Developed by:** Independent researcher
- **Funded by:** Self-funded
- **Shared by:** Independent researcher
- **Model type:** Multimodal
- **Language(s) (NLP):** English, Chinese
- **License:** Apache-2.0
- **Finetuned from model:** None
## Uses: https://huggingface.co/zeroMN/SHMT
### Direct Use
```bash
git lfs install
git clone https://huggingface.co/zeroMN/SHMT.git
```
### Downstream Use
The model can be fine-tuned for specific tasks such as visual question answering (VQA), image captioning, and audio recognition.
### Out-of-Scope Use
The Evolutionary Multi-Modal Model is not suitable for tasks that require deep domain-specific expertise beyond its current capabilities. The number of speech frames still needs to be tuned by the user.
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the following risks, biases, and limitations:
- **Bias:** The model may exhibit biases present in the training data, particularly if the data is not representative of all populations.
- **Risks:** The model should not be used in critical applications where high accuracy and reliability are required without thorough testing and validation.
- **Limitations:** The model may not perform well on tasks that require fine-grained recognition or highly specialized audio processing.
## How to Get Started with the Model
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="zeroMN/SHMT")
```
```python
# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("zeroMN/SHMT")
``` | [
"QUESTION_ANSWERING"
] | [
"HEAD-QA",
"MEDQA",
"SCICITE",
"SCIFACT",
"SCIQ",
"SCITAIL"
] |
Narrativaai/BioGPT-Large-finetuned-chatdoctor | Narrativaai | text-generation | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"biogpt",
"text-generation",
"medical",
"doctor",
"chat",
"qa",
"question-answering",
"en",
"dataset:LinhDuong/chatdoctor-200k",
"arxiv:2303.14070",
"doi:10.57967/hf/0601",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-29T09:30:48 | 2023-05-03T13:18:16 | 98 | 36 | ---
datasets:
- LinhDuong/chatdoctor-200k
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- medical
- doctor
- chat
- qa
- question-answering
thumbnail: https://huggingface.co/Narrativaai/BioGPT-Large-finetuned-chatdoctor/resolve/main/cdl.png
---
<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/Narrativaai/BioGPT-Large-finetuned-chatdoctor/resolve/main/cdl.png" alt="chat doctor bioGPT logo">
</div>
# BioGPT (Large) 🧬 fine-tuned on ChatDoctor 🩺 for QA
[Microsoft's BioGPT Large](https://huggingface.co/microsoft/BioGPT-Large) fine-tuned on ChatDoctor dataset for Question Answering.
## Intended Use
This is just a research model and must **NOT** be used outside of this scope.
## Limitations
TBA
## Model
[Microsoft's BioGPT Large](https://huggingface.co/microsoft/BioGPT-Large):
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
## Dataset
The ChatDoctor-200K dataset comes from the paper https://arxiv.org/pdf/2303.14070.pdf
The dataset is composed of:
- 100k real conversations between patients and doctors from HealthCareMagic.com [HealthCareMagic-100k](https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing).
- 10k real conversations between patients and doctors from icliniq.com [icliniq-10k](https://drive.google.com/file/d/1ZKbqgYqWc7DJHs3N9TQYQVPdDQmZaClA/view?usp=sharing).
- 5k generated conversations between patients and physicians from ChatGPT [GenMedGPT-5k](https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing) and [disease database](https://github.com/Kent0n-Li/ChatDoctor/blob/main/format_dataset.csv)
## Usage
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model_id = "Narrativaai/BioGPT-Large-finetuned-chatdoctor"
tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")  # move the model to the GPU to match the inputs below
def answer_question(
prompt,
temperature=0.1,
top_p=0.75,
top_k=40,
num_beams=2,
**kwargs,
):
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to("cuda")
attention_mask = inputs["attention_mask"].to("cuda")
generation_config = GenerationConfig(
temperature=temperature,
top_p=top_p,
top_k=top_k,
num_beams=num_beams,
**kwargs,
)
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=512,
eos_token_id=tokenizer.eos_token_id
)
s = generation_output.sequences[0]
output = tokenizer.decode(s, skip_special_tokens=True)
return output.split(" Response:")[1]
example_prompt = """
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
If you are a doctor, please answer the medical questions based on the patient's description.
### Input:
Hi i have sore lumps under the skin on my legs. they started on my left ankle and are approx 1 - 2cm diameter and are spreading up onto my thies. I am eating panadol night and anti allergy pills (Atarax). I have had this for about two weeks now. Please advise.
### Response:
"""
print(answer_question(example_prompt))
```
## Citation
```
@misc {narrativa_2023,
author = { {Narrativa} },
title = { BioGPT-Large-finetuned-chatdoctor (Revision 13764c0) },
year = 2023,
url = { https://huggingface.co/Narrativaai/BioGPT-Large-finetuned-chatdoctor },
doi = { 10.57967/hf/0601 },
publisher = { Hugging Face }
}
``` | [
"RELATION_EXTRACTION",
"QUESTION_ANSWERING"
] | [
"BC5CDR",
"PUBMEDQA"
] |
M4-ai/tau-0.5B | M4-ai | text-generation | [
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"zh",
"dataset:Locutusque/UltraTextbooks-2.0",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-03-08T00:56:55 | 2024-03-28T12:04:33 | 98 | 20 | ---
datasets:
- Locutusque/UltraTextbooks-2.0
language:
- en
- zh
license: other
inference:
parameters:
do_sample: true
temperature: 0.8
top_p: 0.95
top_k: 40
max_new_tokens: 250
repetition_penalty: 1.1
---
# tau-0.5B
## Model Details
- **Model Name:** tau-0.5B
- **Base Model:** Qwen1.5-0.5B
- **Dataset:** UltraTextbooks-2.0
- **Model Size:** 0.5B parameters
- **Model Type:** Language Model
- **Training Procedure:** Further pre-training of Qwen1.5-0.5B on UltraTextbooks-2.0.
## Model Use
tau-0.5B is designed to be a general-purpose language model with enhanced capabilities in the domains of machine learning, mathematics, and coding. It can be used for a wide range of natural language processing tasks, such as:
- Educational question answering
- Text summarization
- Content generation for educational purposes
- Code understanding and generation
- Mathematical problem solving
The model's exposure to the diverse content in the UltraTextbooks-2.0 dataset makes it particularly well-suited for applications in educational technology and research.
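The card does not include a usage snippet, so here is a minimal sketch with 🤗 Transformers that reuses the sampling settings declared in this card's inference configuration (temperature 0.8, top-p 0.95, top-k 40, repetition penalty 1.1, 250 new tokens). The repository id is taken from this record, and the prompt is a made-up example.

```python
# Minimal usage sketch for tau-0.5B; sampling values mirror this card's inference settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/tau-0.5B"  # repository id as listed for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain gradient descent to a first-year student."  # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```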
## Training Data
tau-0.5B was further pre-trained on the UltraTextbooks-2.0 dataset, which is an expanded version of the original UltraTextbooks dataset. UltraTextbooks-2.0 incorporates additional high-quality synthetic and human-written textbooks from various sources on the Hugging Face platform, with a focus on increasing the diversity of content in the domains of machine learning, mathematics, and coding.
For more details on the dataset, please refer to the [UltraTextbooks-2.0 Dataset Card](https://huggingface.co/datasets/Locutusque/UltraTextbooks-2.0).
## Performance and Limitations
Refer to the [Evaluation](#evaluation) section for benchmark results. Note that the model may still exhibit biases or inaccuracies present in the training data. Users are encouraged to critically evaluate the model's outputs and report any issues to facilitate continuous improvement.
## Environmental Impact
The training of tau-0.5B required computational resources that contribute to the model's overall environmental impact. However, efforts were made to optimize the training process and minimize the carbon footprint.
## Ethical Considerations
tau-0.5B was trained on a diverse dataset that may contain biases and inaccuracies. Users should be aware of these potential limitations and use the model responsibly. The model should not be used for tasks that could cause harm or discriminate against individuals or groups.
## Evaluation
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|agieval_nous |N/A |none | 0|acc |0.2235|± |0.0434|
| | |none | 0|acc_norm|0.2141|± |0.0498|
| - agieval_aqua_rat | 1|none | 0|acc |0.1417|± |0.0219|
| | |none | 0|acc_norm|0.1535|± |0.0227|
| - agieval_logiqa_en | 1|none | 0|acc |0.2796|± |0.0176|
| | |none | 0|acc_norm|0.3118|± |0.0182|
| - agieval_lsat_ar | 1|none | 0|acc |0.2000|± |0.0264|
| | |none | 0|acc_norm|0.1696|± |0.0248|
| - agieval_lsat_lr | 1|none | 0|acc |0.2275|± |0.0186|
| | |none | 0|acc_norm|0.2020|± |0.0178|
| - agieval_lsat_rc | 1|none | 0|acc |0.1487|± |0.0217|
| | |none | 0|acc_norm|0.1561|± |0.0222|
| - agieval_sat_en | 1|none | 0|acc |0.2330|± |0.0295|
| | |none | 0|acc_norm|0.2039|± |0.0281|
| - agieval_sat_en_without_passage| 1|none | 0|acc |0.2524|± |0.0303|
| | |none | 0|acc_norm|0.1942|± |0.0276|
| - agieval_sat_math | 1|none | 0|acc |0.2227|± |0.0281|
| | |none | 0|acc_norm|0.1682|± |0.0253|
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|---------------------------------------|-------|----------------|-----:|-----------|-----:|---|-----:|
|truthfulqa | 2|none | 0|acc |0.3931|± |0.0143|
|mmlu |N/A |none | 0|acc |0.3642|± |0.0040|
| - humanities |N/A |none | 5|acc |0.3320|± |0.0068|
| - formal_logic | 0|none | 5|acc |0.2619|± |0.0393|
| - high_school_european_history | 0|none | 5|acc |0.4909|± |0.0390|
| - high_school_us_history | 0|none | 5|acc |0.4167|± |0.0346|
| - high_school_world_history | 0|none | 5|acc |0.4641|± |0.0325|
| - international_law | 0|none | 5|acc |0.5537|± |0.0454|
| - jurisprudence | 0|none | 5|acc |0.4167|± |0.0477|
| - logical_fallacies | 0|none | 5|acc |0.2638|± |0.0346|
| - moral_disputes | 0|none | 5|acc |0.3757|± |0.0261|
| - moral_scenarios | 0|none | 5|acc |0.2402|± |0.0143|
| - philosophy | 0|none | 5|acc |0.3794|± |0.0276|
| - prehistory | 0|none | 5|acc |0.3426|± |0.0264|
| - professional_law | 0|none | 5|acc |0.3103|± |0.0118|
| - world_religions | 0|none | 5|acc |0.2807|± |0.0345|
| - other |N/A |none | 5|acc |0.4071|± |0.0088|
| - business_ethics | 0|none | 5|acc |0.4200|± |0.0496|
| - clinical_knowledge | 0|none | 5|acc |0.4491|± |0.0306|
| - college_medicine | 0|none | 5|acc |0.3873|± |0.0371|
| - global_facts | 0|none | 5|acc |0.3600|± |0.0482|
| - human_aging | 0|none | 5|acc |0.3498|± |0.0320|
| - management | 0|none | 5|acc |0.4854|± |0.0495|
| - marketing | 0|none | 5|acc |0.5470|± |0.0326|
| - medical_genetics | 0|none | 5|acc |0.4000|± |0.0492|
| - miscellaneous | 0|none | 5|acc |0.4291|± |0.0177|
| - nutrition | 0|none | 5|acc |0.4183|± |0.0282|
| - professional_accounting | 0|none | 5|acc |0.3582|± |0.0286|
| - professional_medicine | 0|none | 5|acc |0.3015|± |0.0279|
| - virology | 0|none | 5|acc |0.3494|± |0.0371|
| - social_sciences |N/A |none | 5|acc |0.4075|± |0.0088|
| - econometrics | 0|none | 5|acc |0.2719|± |0.0419|
| - high_school_geography | 0|none | 5|acc |0.5000|± |0.0356|
| - high_school_government_and_politics| 0|none | 5|acc |0.4611|± |0.0360|
| - high_school_macroeconomics | 0|none | 5|acc |0.4051|± |0.0249|
| - high_school_microeconomics | 0|none | 5|acc |0.3908|± |0.0317|
| - high_school_psychology | 0|none | 5|acc |0.4239|± |0.0212|
| - human_sexuality | 0|none | 5|acc |0.3893|± |0.0428|
| - professional_psychology | 0|none | 5|acc |0.3399|± |0.0192|
| - public_relations | 0|none | 5|acc |0.4455|± |0.0476|
| - security_studies | 0|none | 5|acc |0.3510|± |0.0306|
| - sociology | 0|none | 5|acc |0.5174|± |0.0353|
| - us_foreign_policy | 0|none | 5|acc |0.5500|± |0.0500|
| - stem |N/A |none | 5|acc |0.3276|± |0.0083|
| - abstract_algebra | 0|none | 5|acc |0.3000|± |0.0461|
| - anatomy | 0|none | 5|acc |0.2889|± |0.0392|
| - astronomy | 0|none | 5|acc |0.3487|± |0.0388|
| - college_biology | 0|none | 5|acc |0.3403|± |0.0396|
| - college_chemistry | 0|none | 5|acc |0.2600|± |0.0441|
| - college_computer_science | 0|none | 5|acc |0.3800|± |0.0488|
| - college_mathematics | 0|none | 5|acc |0.3300|± |0.0473|
| - college_physics | 0|none | 5|acc |0.2745|± |0.0444|
| - computer_security | 0|none | 5|acc |0.4300|± |0.0498|
| - conceptual_physics | 0|none | 5|acc |0.3447|± |0.0311|
| - electrical_engineering | 0|none | 5|acc |0.3931|± |0.0407|
| - elementary_mathematics | 0|none | 5|acc |0.3095|± |0.0238|
| - high_school_biology | 0|none | 5|acc |0.4161|± |0.0280|
| - high_school_chemistry | 0|none | 5|acc |0.2759|± |0.0314|
| - high_school_computer_science | 0|none | 5|acc |0.3100|± |0.0465|
| - high_school_mathematics | 0|none | 5|acc |0.3185|± |0.0284|
| - high_school_physics | 0|none | 5|acc |0.2517|± |0.0354|
| - high_school_statistics | 0|none | 5|acc |0.3009|± |0.0313|
| - machine_learning | 0|none | 5|acc |0.3036|± |0.0436|
|medqa_4options |Yaml |none | 5|acc |0.2687|± |0.0124|
| | |none | 5|acc_norm |0.2687|± |0.0124|
|logieval | 0|get-answer | 5|exact_match|0.3505|± |0.0120|
|gsm8k_cot | 3|strict-match | 8|exact_match|0.0690|± |0.0070|
| | |flexible-extract| 8|exact_match|0.1365|± |0.0095|
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|arc_easy | 1|none | 25|acc |0.5981|± |0.0101|
| | |none | 25|acc_norm|0.5939|± |0.0101|
|arc_challenge| 1|none | 25|acc |0.2688|± |0.0130|
| | |none | 25|acc_norm|0.2969|± |0.0134|
## Usage Rights
Make sure to read Qwen's license before using this model. | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"MEDQA"
] |
RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-23T03:07:01 | 2024-08-23T04:10:21 | 98 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vi-gemma-2b-RAG - GGUF
- Model creator: https://huggingface.co/ricepaper/
- Original model: https://huggingface.co/ricepaper/vi-gemma-2b-RAG/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [vi-gemma-2b-RAG.Q2_K.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q2_K.gguf) | Q2_K | 1.08GB |
| [vi-gemma-2b-RAG.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [vi-gemma-2b-RAG.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [vi-gemma-2b-RAG.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [vi-gemma-2b-RAG.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [vi-gemma-2b-RAG.Q3_K.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q3_K.gguf) | Q3_K | 1.29GB |
| [vi-gemma-2b-RAG.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [vi-gemma-2b-RAG.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [vi-gemma-2b-RAG.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [vi-gemma-2b-RAG.Q4_0.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_0.gguf) | Q4_0 | 1.44GB |
| [vi-gemma-2b-RAG.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [vi-gemma-2b-RAG.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [vi-gemma-2b-RAG.Q4_K.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_K.gguf) | Q4_K | 1.52GB |
| [vi-gemma-2b-RAG.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [vi-gemma-2b-RAG.Q4_1.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_1.gguf) | Q4_1 | 1.56GB |
| [vi-gemma-2b-RAG.Q5_0.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_0.gguf) | Q5_0 | 1.68GB |
| [vi-gemma-2b-RAG.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [vi-gemma-2b-RAG.Q5_K.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_K.gguf) | Q5_K | 1.71GB |
| [vi-gemma-2b-RAG.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [vi-gemma-2b-RAG.Q5_1.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_1.gguf) | Q5_1 | 1.79GB |
| [vi-gemma-2b-RAG.Q6_K.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q6_K.gguf) | Q6_K | 1.92GB |
| [vi-gemma-2b-RAG.Q8_0.gguf](https://huggingface.co/RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q8_0.gguf) | Q8_0 | 2.49GB |
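The files above are GGUF quantizations only; a minimal sketch for running one of them locally with `llama-cpp-python` is shown below. The choice of the Q4_K_M file, the context length, and the prompt are illustrative assumptions, not part of the original card.

```python
# Minimal sketch: run a quantized vi-gemma-2b-RAG GGUF file with llama-cpp-python.
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumption: Q4_K_M as a reasonable size/quality trade-off from the table above.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-gguf",
    filename="vi-gemma-2b-RAG.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length chosen for illustration

# Hypothetical prompt following the RAG template shown in the original model description below.
prompt = (
    "### Instruction and Input:\n"
    "Based on the following context/document:\n"
    "Short Tandem Repeats (STRs) are short repeating DNA sequences used as genetic markers.\n"
    "Please answer the question: What are STRs used for?\n"
    "### Response:\n"
)
output = llm(prompt, max_tokens=256, stop=["<eos>"])
print(output["choices"][0]["text"])
```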
Original model description:
---
base_model: unsloth/gemma-1.1-2b-it-bnb-4bit
language:
- en
- vi
license: apache-2.0
tags:
- text-generation-inference
- retrieval-augmented-generation
- transformers
- unsloth
- gemma
- trl
- sft
---
## Model Card: vi-gemma-2b-RAG
### (English below)
### Tiếng Việt (Vietnamese)
**Mô tả mô hình:**
vi-gemma-2b-RAG là một mô hình ngôn ngữ lớn được tinh chỉnh từ mô hình cơ sở [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) sử dụng kỹ thuật LoRA. Mô hình được huấn luyện trên tập dữ liệu tiếng Việt với mục tiêu cải thiện khả năng xử lý ngôn ngữ tiếng Việt và nâng cao hiệu suất cho các tác vụ truy xuất thông tin mở (Retrieval Augmented Generation - RAG).
**Mục đích sử dụng:**
Mô hình vi-gemma-2b-RAG phù hợp cho các tác vụ sau:
* Trả lời câu hỏi dựa trên ngữ cảnh tiếng Việt.
* Tóm tắt văn bản tiếng Việt.
* Dịch máy tiếng Việt.
* Và các tác vụ tạo văn bản tiếng Việt khác.
**Giới hạn:**
Mặc dù đã được tinh chỉnh cho tiếng Việt, vi-gemma-2b-RAG vẫn có thể gặp phải một số hạn chế:
* Có thể tạo ra thông tin sai lệch hoặc không chính xác.
* Có thể thể hiện thành kiến hoặc quan điểm không phù hợp.
* Hiệu suất có thể bị ảnh hưởng bởi chất lượng của dữ liệu đầu vào.
**Cách sử dụng:**
Dưới đây chúng tôi chia sẻ một số đoạn mã về cách bắt đầu nhanh chóng để sử dụng mô hình. Trước tiên, hãy đảm bảo đã cài đặt `pip install -U transformers`, sau đó sao chép đoạn mã từ phần có liên quan đến usecase của bạn.
Chúng tôi khuyến nghị sử dụng `torch.bfloat16` làm mặc định.
```python
# pip install transformers torch accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Khởi tạo tokenizer và model từ checkpoint đã lưu
tokenizer = AutoTokenizer.from_pretrained("himmeow/vi-gemma-2b-RAG")
model = AutoModelForCausalLM.from_pretrained(
"himmeow/vi-gemma-2b-RAG",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Sử dụng GPU nếu có
if torch.cuda.is_available():
model.to("cuda")
# Định dạng prompt cho model
prompt = """
### Instruction and Input:
Dựa vào ngữ cảnh/tài liệu sau:
{}
Hãy trả lời câu hỏi: {}
### Response:
{}
"""
# Chuẩn bị dữ liệu đầu vào
input_data = """
Short Tandem Repeats (STRs) là các trình tự DNA lặp lại ngắn (2- 6 nucleotides) xuất hiện phổ biến trong hệ gen của con người. Các trình tự này có tính đa hình rất cao trong tự nhiên, điều này khiến các STRs trở thành những markers di truyền rất quan trọng trong nghiên cứu bản đồ gen người và chuẩn đoán bệnh lý di truyền cũng như xác định danh tính trong lĩnh vực pháp y.
Các STRs trở nên phổ biến tại các phòng xét nghiệm pháp y bởi vì việc nhân bản và phân tích STRs chỉ cần lượng DNA rất thấp ngay cả khi ở dạng bị phân hủy việc đinh danh vẫn có thể được thực hiện thành công. Hơn nữa việc phát hiện và đánh giá sự nhiễm DNA mẫu trong các mẫu vật có thể được giải quyết nhanh với kết quả phân tích STRs. Ở Hoa Kỳ hiện nay, từ bộ 13 markers nay đã tăng lên 20 markers chính đang được sử dụng để tạo ra một cơ sở dữ liệu DNA trên toàn đất nước được gọi là The FBI Combined DNA Index System (Expaned CODIS).
CODIS và các cơ sử dữ liệu DNA tương tự đang được sử dụng thực sự thành công trong việc liên kết các hồ sơ DNA từ các tội phạm và các bằng chứng hiện trường vụ án. Kết quả định danh STRs cũng được sử dụng để hỗ trợ hàng trăm nghìn trường hợp xét nghiệm huyết thống cha con mỗi năm'
"""
query = "Hãy cho tôi biết một số tính chất của STRs được dùng để làm gì?"
# Định dạng input text
input_text = prompt.format(input_data, query," ")
# Mã hóa input text thành input ids
input_ids = tokenizer(input_text, return_tensors="pt")
# Sử dụng GPU cho input ids nếu có
if torch.cuda.is_available():
input_ids = input_ids.to("cuda")
# Tạo văn bản bằng model
outputs = model.generate(
**input_ids,
max_new_tokens=500,
no_repeat_ngram_size=5, # Ngăn chặn lặp lại các cụm từ 5 gram
# do_sample=True, # Kích hoạt chế độ tạo văn bản dựa trên lấy mẫu. Trong chế độ này, model sẽ chọn ngẫu nhiên token tiếp theo dựa trên xác suất được tính từ phân phối xác suất của các token.
# temperature=0.7, # Giảm temperature để kiểm soát tính ngẫu nhiên
# early_stopping=True, # Dừng tạo văn bản khi tìm thấy kết thúc phù hợp
)
# Giải mã và in kết quả
print(tokenizer.decode(outputs[0]))
'''
<bos>
### Instruction and Input:
Dựa vào ngữ cảnh/tài liệu sau:
Short Tandem Repeats (STRs) là các trình tự DNA lặp lại ngắn (2- 6 nucleotides) xuất hiện phổ biến trong hệ gen của con người. Các trình tự này có tính đa hình rất cao trong tự nhiên, điều này khiến các STRs trở thành những markers di truyền rất quan trọng trong nghiên cứu bản đồ gen người và chuẩn đoán bệnh lý di truyền cũng như xác định danh tính trong lĩnh vực pháp y.
Các STRs trở nên phổ biến tại các phòng xét nghiệm pháp y bởi vì việc nhân bản và phân tích STRs chỉ cần lượng DNA rất thấp ngay cả khi ở dạng bị phân hủy việc đinh danh vẫn có thể được thực hiện thành công. Hơn nữa việc phát hiện và đánh giá sự nhiễm DNA mẫu trong các mẫu vật có thể được giải quyết nhanh với kết quả phân tích STRs. Ở Hoa Kỳ hiện nay, từ bộ 13 markers nay đã tăng lên 20 markers chính đang được sử dụng để tạo ra một cơ sở dữ liệu DNA trên toàn đất nước được gọi là The FBI Combined DNA Index System (Expaned CODIS).
CODIS và các cơ sử dữ liệu DNA tương tự đang được sử dụng thực sự thành công trong việc liên kết các hồ sơ DNA từ các tội phạm và các bằng chứng hiện trường vụ án. Kết quả định danh STRs cũng được sử dụng để hỗ trợ hàng trăm nghìn trường hợp xét nghiệm huyết thống cha con mỗi năm'
Hãy trả lời câu hỏi: Hãy cho tôi biết một số tính chất của STRs được dùng để làm gì?
### Response:
STRs được sử dụng để xác định danh tính, chuẩn đoán bệnh lý và xác định bệnh lý di truyền.
<eos>
'''
```
**Huấn luyện:**
* **Mô hình cơ sở:** google/gemma-1.1-2b-it
* **Tập dữ liệu:** lamhieu/mabrycodes_dialogue_vi
* **Phương pháp tinh chỉnh:** LoRA, PEFT với Unsloth
## Model Card: vi-gemma-2b-RAG
### English
**Model Description:**
vi-gemma-2b-RAG is a large language model fine-tuned from the base model [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) using LoRA. The model is trained on a Vietnamese dataset to improve its Vietnamese language processing capabilities and enhance its performance for Retrieval Augmented Generation (RAG) tasks.
**Intended Use:**
The vi-gemma-2b-RAG model is suitable for tasks such as:
* Vietnamese question answering.
* Vietnamese text summarization.
* Vietnamese machine translation.
* And other Vietnamese text generation tasks.
**Limitations:**
While fine-tuned for Vietnamese, vi-gemma-2b-RAG may still have some limitations:
* It may generate incorrect or misleading information.
* It may exhibit biases or inappropriate opinions.
* Its performance may be affected by the quality of the input data.
**How to Use:**
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
We recommend `torch.bfloat16` as the default dtype.
```python
# pip install transformers torch accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Initialize the tokenizer and model from the saved checkpoint
tokenizer = AutoTokenizer.from_pretrained("himmeow/vi-gemma-2b-RAG")
model = AutoModelForCausalLM.from_pretrained(
"himmeow/vi-gemma-2b-RAG",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Use GPU if available
if torch.cuda.is_available():
model.to("cuda")
# Define the prompt format for the model
prompt = """
### Instruction and Input:
Based on the following context/document:
{}
Please answer the question: {}
### Response:
{}
"""
# Prepare the input data
input_data = """
Short Tandem Repeats (STRs) are short (2-6 nucleotides) repeating DNA sequences that are widespread in the human genome. These sequences are highly polymorphic in nature, which makes STRs very important genetic markers in human gene mapping and diagnosis of hereditary diseases as well as identification in the field of forensics.
STRs have become popular in forensic laboratories because the replication and analysis of STRs requires very small amounts of DNA, even in decomposed form, identification can still be performed successfully. Furthermore, the detection and assessment of sample DNA contamination in specimens can be quickly resolved with STR analysis results. In the United States today, the set of 13 markers has now been increased to 20 main markers being used to create a nationwide DNA database called The FBI Combined DNA Index System (Expaned CODIS).
CODIS and similar DNA databases are being used very successfully in linking DNA records from criminals and crime scene evidence. STR identification results are also used to support hundreds of thousands of paternity test cases each year.'
"""
query = "Tell me what are some properties of STRs used for?"
# Format the input text
input_text = prompt.format(input_data, query," ")
# Encode the input text into input ids
input_ids = tokenizer(input_text, return_tensors="pt")
# Use GPU for input ids if available
if torch.cuda.is_available():
input_ids = input_ids.to("cuda")
# Generate text using the model
outputs = model.generate(
**input_ids,
max_new_tokens=500, # Limit the number of tokens generated
no_repeat_ngram_size=5, # Prevent repetition of 5-gram phrases
# do_sample=True,
# temperature=0.7, # Adjust the randomness of the generated text
# early_stopping=True, # Stop generating text when a suitable ending is found
)
# Decode and print the results
print(tokenizer.decode(outputs[0]))
```
**Training:**
* **Base Model:** google/gemma-1.1-2b-it
* **Dataset:** lamhieu/mabrycodes_dialogue_vi
* **Fine-tuning Method:** LoRA, PEFT and Unsloth
**Using example repository:** https://github.com/Martincrux/Vietnamese-RAG-system-building-with-vi-gemma-2b-RAG-and-halong_embedding
# Uploaded model
- **Developed by:** [hiieu](https://huggingface.co/hiieu), [himmeow the coder](https://huggingface.co/himmeow), [cuctrinh](https://www.linkedin.com/in/trinh-cuc-5722832b6)
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-1.1-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] |
mav23/llama3-8b-cpt-sea-lionv2.1-instruct-GGUF | mav23 | null | [
"gguf",
"en",
"id",
"ta",
"th",
"vi",
"arxiv:2309.06085",
"arxiv:2311.07911",
"arxiv:2306.05685",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-10-25T01:18:03 | 2024-10-25T02:28:50 | 98 | 0 | ---
language:
- en
- id
- ta
- th
- vi
license: llama3
---
# Llama3 8B CPT SEA-Lionv2.1 Instruct
SEA-LION is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama3 8B CPT SEA-Lionv2.1 Instruct is a multilingual model which has been fine-tuned with around **100,000 English instruction-completion pairs** alongside a smaller pool of around **50,000 instruction-completion pairs** from other ASEAN languages, such as Indonesian, Thai and Vietnamese.
These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive and high quality datasets.
Llama3 8B CPT SEA-Lionv2.1 Instruct has undergone additional supervised fine-tuning and alignment compared to the now deprecated Llama3 8B CPT SEA-Lionv2 Instruct. These improvements have increased the model's capabilities in chat interactions and its ability to follow instructions accurately.
SEA-LION stands for _Southeast Asian Languages In One Network_.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages:** English, Indonesian, Thai, Vietnamese, Tamil
- **License:** [Llama3 Community License](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
## Model Details
### Model Description
We performed instruction tuning in English and also in ASEAN languages such as Indonesian, Thai and Vietnamese on our [continued pre-trained Llama3 CPT 8B SEA-Lionv2](https://huggingface.co/aisingapore/llama3-8b-cpt-SEA-Lionv2-base), a decoder model using the Llama3 architecture, to create Llama3 8B SEA-Lionv2.1 Instruct.
The model has a context length of 8192.
### Benchmark Performance
We evaluated Llama3 8B SEA-Lionv2.1 Instruct on both general language capabilities and instruction-following capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed the [BHASA evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: BHASA is implemented with a strict answer format, and only spaces and punctuation are cleaned. For tasks where options are provided, the answer should include only one of the pre-defined options and nothing else. If the model continues to generate more tokens (e.g. to explain its answer), the response is considered wrong. For the F1 score metric (as used in Sentiment Analysis and Toxicity Detection), any answer that does not fall under the pre-defined labels is treated as a separate label (marking it as a wrong answer) and included in the calculation, so the model is penalized for not generating one of the pre-defined labels.
The evaluation was done zero-shot with native prompts and only a sample of 100-1000 instances for each dataset was used as per the setting described in the paper.
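To make the scoring rule above concrete, the snippet below is an illustrative reconstruction (not the actual BHASA code): predictions that do not match one of the pre-defined labels after light cleaning are mapped to a separate "invalid" label before the macro F1 is computed, so off-format answers are penalized.

```python
# Illustrative reconstruction of the strict scoring rule described above (not BHASA's implementation).
import string
from sklearn.metrics import f1_score

LABELS = ["positive", "negative", "neutral"]  # hypothetical label set for a sentiment task

def clean(answer: str) -> str:
    # Only spaces and surrounding punctuation are removed; the rest must match a label exactly.
    return answer.strip().strip(string.punctuation).strip().lower()

def normalize(raw: str) -> str:
    pred = clean(raw)
    # Off-label answers become a separate "invalid" label so they count against the model.
    return pred if pred in LABELS else "invalid"

def strict_macro_f1(gold, raw_predictions):
    preds = [normalize(r) for r in raw_predictions]
    return f1_score(gold, preds, labels=LABELS + ["invalid"], average="macro", zero_division=0)

# A model that explains its answer instead of emitting only the label is scored as wrong.
print(strict_macro_f1(["positive", "negative"], ["Positive.", "negative, because the plot is dull"]))
```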
#### Instruction-following Capabilities
Since Llama3 8B SEA-Lionv2.1 is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, [IFEval](https://arxiv.org/abs/2311.07911) and [MT-Bench](https://arxiv.org/abs/2306.05685).
As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localize and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
**IFEval**
IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. The metric used is accuracy normalized by language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
**MT-Bench**
MT-Bench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use `gpt-4-1106-preview` as the judge model and compare against `gpt-3.5-turbo-0125` as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category (Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction)). A tie is given a score of 0.5.
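As a small illustration of this metric, the snippet below computes a weighted win rate from per-category outcomes, counting each tie as 0.5; the counts are invented numbers, not SEA-LION results.

```python
# Illustrative computation of an MT-Bench-style weighted win rate (tie = 0.5).
# The per-category counts are made-up numbers, not actual SEA-LION results.
categories = {
    # category: (wins, ties, losses) against the baseline model
    "Math":       (4, 1, 5),
    "Reasoning":  (6, 2, 2),
    "STEM":       (7, 0, 3),
    "Humanities": (8, 1, 1),
    "Roleplay":   (5, 3, 2),
    "Writing":    (6, 1, 3),
    "Extraction": (4, 2, 4),
}

def win_rate(wins, ties, losses):
    return (wins + 0.5 * ties) / (wins + ties + losses)

weighted = sum(win_rate(*counts) for counts in categories.values()) / len(categories)
print(f"Weighted win rate vs. baseline: {weighted:.3f}")
```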
For more details on Llama3 8B CPT SEA-Lionv2.1 Instruct benchmark performance, please refer to the SEA HELM leaderboard, https://leaderboard.sea-lion.ai/
### Usage
SEA-LION can be run using the 🤗 Transformers library
```python
# Please use transformers==4.43.2
import transformers
import torch
model_id = "aisingapore/llama3-8b-cpt-SEA-Lionv2.1-instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### Accessing Older Revisions
Hugging Face supports the `revision` parameter, allowing users to access specific versions of models. This can be used to retrieve the original llama3-8b-cpt-SEA-Lionv2-instruct model with the tag "v2.0.0".
```python
# Please use transformers==4.43.2
import transformers
import torch
model_id = "aisingapore/llama3-8b-cpt-SEA-Lionv2.1-instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
revision="v2.0.0", # Specify the revision here. Initial release is at "v2.0.0".
device_map="auto",
)
messages = [
{"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
## Limitations
### Safety
Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Llama3 8B CPT SEA-Lionv2.1 Instruct was fine-tuned on 8x A100-40GB GPUs using parameter-efficient fine-tuning in the form of LoRA.
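The card does not publish the fine-tuning hyperparameters, so the sketch below only shows a generic PEFT LoRA setup of the kind described; the rank, alpha, dropout, and target modules are illustrative assumptions, not the values used for SEA-LION.

```python
# Generic PEFT LoRA setup of the kind described above.
# All hyperparameters are illustrative assumptions, not the values used for SEA-LION.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "aisingapore/llama3-8b-cpt-SEA-Lionv2-base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                # assumed rank
    lora_alpha=32,       # assumed scaling factor
    lora_dropout=0.05,   # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```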
## Data
Llama3 8B CPT SEA-Lionv2.1 Instruct was trained on a wide range of instructions that were manually and stringently verified by our team. A large portion of the effort was dedicated to ensuring that each instruction-completion pair the model sees is of high quality; pairs with errors were corrected and rewritten by native speakers or dropped from the mix.
In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
Link to dataset: _coming soon_
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Choa Esther<br>
Cheng Nicholas<br>
Huang Yuli<br>
Lau Wayne<br>
Lee Chwan Ren<br>
Leong Wai Yi<br>
Leong Wei Qi<br>
Li Yier<br>
Liu Bing Jie Darius<br>
Lovenia Holy<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Brandon<br>
Ong Tat-Wee David<br>
Ong Zhi Hao<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Teng Walter<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes. | [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] |
RichardErkhov/GritLM_-_GritLM-7B-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2402.09906",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-05-03T17:18:16 | 2024-05-03T19:18:44 | 97 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GritLM-7B - GGUF
- Model creator: https://huggingface.co/GritLM/
- Original model: https://huggingface.co/GritLM/GritLM-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [GritLM-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [GritLM-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [GritLM-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [GritLM-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [GritLM-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [GritLM-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [GritLM-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [GritLM-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [GritLM-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [GritLM-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [GritLM-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [GritLM-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [GritLM-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [GritLM-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [GritLM-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [GritLM-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [GritLM-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [GritLM-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [GritLM-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [GritLM-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [GritLM-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/GritLM_-_GritLM-7B-gguf/blob/main/GritLM-7B.Q6_K.gguf) | Q6_K | 5.53GB |
Original model description:
---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- GritLM/tulu2
tags:
- mteb
model-index:
- name: GritLM-7B
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 81.17910447761194
- type: ap
value: 46.26260671758199
- type: f1
value: 75.44565719934167
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.5161
- type: ap
value: 94.79131981460425
- type: f1
value: 96.51506148413065
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 57.806000000000004
- type: f1
value: 56.78350156257903
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.478
- type: map_at_10
value: 54.955
- type: map_at_100
value: 54.955
- type: map_at_1000
value: 54.955
- type: map_at_3
value: 50.888999999999996
- type: map_at_5
value: 53.349999999999994
- type: mrr_at_1
value: 39.757999999999996
- type: mrr_at_10
value: 55.449000000000005
- type: mrr_at_100
value: 55.449000000000005
- type: mrr_at_1000
value: 55.449000000000005
- type: mrr_at_3
value: 51.37500000000001
- type: mrr_at_5
value: 53.822
- type: ndcg_at_1
value: 38.478
- type: ndcg_at_10
value: 63.239999999999995
- type: ndcg_at_100
value: 63.239999999999995
- type: ndcg_at_1000
value: 63.239999999999995
- type: ndcg_at_3
value: 54.935
- type: ndcg_at_5
value: 59.379000000000005
- type: precision_at_1
value: 38.478
- type: precision_at_10
value: 8.933
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.089
- type: precision_at_3
value: 22.214
- type: precision_at_5
value: 15.491
- type: recall_at_1
value: 38.478
- type: recall_at_10
value: 89.331
- type: recall_at_100
value: 89.331
- type: recall_at_1000
value: 89.331
- type: recall_at_3
value: 66.643
- type: recall_at_5
value: 77.45400000000001
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 51.67144081472449
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 48.11256154264126
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.33801955487878
- type: mrr
value: 80.71549487754474
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.1935203751726
- type: cos_sim_spearman
value: 86.35497970498659
- type: euclidean_pearson
value: 85.46910708503744
- type: euclidean_spearman
value: 85.13928935405485
- type: manhattan_pearson
value: 85.68373836333303
- type: manhattan_spearman
value: 85.40013867117746
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.46753246753248
- type: f1
value: 88.43006344981134
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.86793640310432
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 39.80291334130727
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.421
- type: map_at_10
value: 52.349000000000004
- type: map_at_100
value: 52.349000000000004
- type: map_at_1000
value: 52.349000000000004
- type: map_at_3
value: 48.17
- type: map_at_5
value: 50.432
- type: mrr_at_1
value: 47.353
- type: mrr_at_10
value: 58.387
- type: mrr_at_100
value: 58.387
- type: mrr_at_1000
value: 58.387
- type: mrr_at_3
value: 56.199
- type: mrr_at_5
value: 57.487
- type: ndcg_at_1
value: 47.353
- type: ndcg_at_10
value: 59.202
- type: ndcg_at_100
value: 58.848
- type: ndcg_at_1000
value: 58.831999999999994
- type: ndcg_at_3
value: 54.112
- type: ndcg_at_5
value: 56.312
- type: precision_at_1
value: 47.353
- type: precision_at_10
value: 11.459
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 26.133
- type: precision_at_5
value: 18.627
- type: recall_at_1
value: 38.421
- type: recall_at_10
value: 71.89
- type: recall_at_100
value: 71.89
- type: recall_at_1000
value: 71.89
- type: recall_at_3
value: 56.58
- type: recall_at_5
value: 63.125
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.025999999999996
- type: map_at_10
value: 50.590999999999994
- type: map_at_100
value: 51.99700000000001
- type: map_at_1000
value: 52.11599999999999
- type: map_at_3
value: 47.435
- type: map_at_5
value: 49.236000000000004
- type: mrr_at_1
value: 48.28
- type: mrr_at_10
value: 56.814
- type: mrr_at_100
value: 57.446
- type: mrr_at_1000
value: 57.476000000000006
- type: mrr_at_3
value: 54.958
- type: mrr_at_5
value: 56.084999999999994
- type: ndcg_at_1
value: 48.28
- type: ndcg_at_10
value: 56.442
- type: ndcg_at_100
value: 60.651999999999994
- type: ndcg_at_1000
value: 62.187000000000005
- type: ndcg_at_3
value: 52.866
- type: ndcg_at_5
value: 54.515
- type: precision_at_1
value: 48.28
- type: precision_at_10
value: 10.586
- type: precision_at_100
value: 1.6310000000000002
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 25.945
- type: precision_at_5
value: 18.076
- type: recall_at_1
value: 38.025999999999996
- type: recall_at_10
value: 66.11399999999999
- type: recall_at_100
value: 83.339
- type: recall_at_1000
value: 92.413
- type: recall_at_3
value: 54.493
- type: recall_at_5
value: 59.64699999999999
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 47.905
- type: map_at_10
value: 61.58
- type: map_at_100
value: 62.605
- type: map_at_1000
value: 62.637
- type: map_at_3
value: 58.074000000000005
- type: map_at_5
value: 60.260000000000005
- type: mrr_at_1
value: 54.42
- type: mrr_at_10
value: 64.847
- type: mrr_at_100
value: 65.403
- type: mrr_at_1000
value: 65.41900000000001
- type: mrr_at_3
value: 62.675000000000004
- type: mrr_at_5
value: 64.101
- type: ndcg_at_1
value: 54.42
- type: ndcg_at_10
value: 67.394
- type: ndcg_at_100
value: 70.846
- type: ndcg_at_1000
value: 71.403
- type: ndcg_at_3
value: 62.025
- type: ndcg_at_5
value: 65.032
- type: precision_at_1
value: 54.42
- type: precision_at_10
value: 10.646
- type: precision_at_100
value: 1.325
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 27.398
- type: precision_at_5
value: 18.796
- type: recall_at_1
value: 47.905
- type: recall_at_10
value: 80.84599999999999
- type: recall_at_100
value: 95.078
- type: recall_at_1000
value: 98.878
- type: recall_at_3
value: 67.05600000000001
- type: recall_at_5
value: 74.261
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.745
- type: map_at_10
value: 41.021
- type: map_at_100
value: 41.021
- type: map_at_1000
value: 41.021
- type: map_at_3
value: 37.714999999999996
- type: map_at_5
value: 39.766
- type: mrr_at_1
value: 33.559
- type: mrr_at_10
value: 43.537
- type: mrr_at_100
value: 43.537
- type: mrr_at_1000
value: 43.537
- type: mrr_at_3
value: 40.546
- type: mrr_at_5
value: 42.439
- type: ndcg_at_1
value: 33.559
- type: ndcg_at_10
value: 46.781
- type: ndcg_at_100
value: 46.781
- type: ndcg_at_1000
value: 46.781
- type: ndcg_at_3
value: 40.516000000000005
- type: ndcg_at_5
value: 43.957
- type: precision_at_1
value: 33.559
- type: precision_at_10
value: 7.198
- type: precision_at_100
value: 0.72
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 17.1
- type: precision_at_5
value: 12.316
- type: recall_at_1
value: 30.745
- type: recall_at_10
value: 62.038000000000004
- type: recall_at_100
value: 62.038000000000004
- type: recall_at_1000
value: 62.038000000000004
- type: recall_at_3
value: 45.378
- type: recall_at_5
value: 53.580000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.637999999999998
- type: map_at_10
value: 31.05
- type: map_at_100
value: 31.05
- type: map_at_1000
value: 31.05
- type: map_at_3
value: 27.628000000000004
- type: map_at_5
value: 29.767
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 36.131
- type: mrr_at_100
value: 36.131
- type: mrr_at_1000
value: 36.131
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 35.143
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 37.478
- type: ndcg_at_100
value: 37.469
- type: ndcg_at_1000
value: 37.469
- type: ndcg_at_3
value: 31.757999999999996
- type: ndcg_at_5
value: 34.821999999999996
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.188999999999999
- type: precision_at_100
value: 0.719
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 15.837000000000002
- type: precision_at_5
value: 11.841
- type: recall_at_1
value: 19.637999999999998
- type: recall_at_10
value: 51.836000000000006
- type: recall_at_100
value: 51.836000000000006
- type: recall_at_1000
value: 51.836000000000006
- type: recall_at_3
value: 36.384
- type: recall_at_5
value: 43.964
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.884
- type: map_at_10
value: 47.88
- type: map_at_100
value: 47.88
- type: map_at_1000
value: 47.88
- type: map_at_3
value: 43.85
- type: map_at_5
value: 46.414
- type: mrr_at_1
value: 43.022
- type: mrr_at_10
value: 53.569
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.569
- type: mrr_at_3
value: 51.075
- type: mrr_at_5
value: 52.725
- type: ndcg_at_1
value: 43.022
- type: ndcg_at_10
value: 54.461000000000006
- type: ndcg_at_100
value: 54.388000000000005
- type: ndcg_at_1000
value: 54.388000000000005
- type: ndcg_at_3
value: 48.864999999999995
- type: ndcg_at_5
value: 52.032000000000004
- type: precision_at_1
value: 43.022
- type: precision_at_10
value: 9.885
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 23.612
- type: precision_at_5
value: 16.997
- type: recall_at_1
value: 34.884
- type: recall_at_10
value: 68.12899999999999
- type: recall_at_100
value: 68.12899999999999
- type: recall_at_1000
value: 68.12899999999999
- type: recall_at_3
value: 52.428
- type: recall_at_5
value: 60.662000000000006
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.588
- type: map_at_10
value: 43.85
- type: map_at_100
value: 45.317
- type: map_at_1000
value: 45.408
- type: map_at_3
value: 39.73
- type: map_at_5
value: 42.122
- type: mrr_at_1
value: 38.927
- type: mrr_at_10
value: 49.582
- type: mrr_at_100
value: 50.39
- type: mrr_at_1000
value: 50.426
- type: mrr_at_3
value: 46.518
- type: mrr_at_5
value: 48.271
- type: ndcg_at_1
value: 38.927
- type: ndcg_at_10
value: 50.605999999999995
- type: ndcg_at_100
value: 56.22200000000001
- type: ndcg_at_1000
value: 57.724
- type: ndcg_at_3
value: 44.232
- type: ndcg_at_5
value: 47.233999999999995
- type: precision_at_1
value: 38.927
- type: precision_at_10
value: 9.429
- type: precision_at_100
value: 1.435
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 21.271
- type: precision_at_5
value: 15.434000000000001
- type: recall_at_1
value: 31.588
- type: recall_at_10
value: 64.836
- type: recall_at_100
value: 88.066
- type: recall_at_1000
value: 97.748
- type: recall_at_3
value: 47.128
- type: recall_at_5
value: 54.954
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.956083333333336
- type: map_at_10
value: 43.33483333333333
- type: map_at_100
value: 44.64883333333333
- type: map_at_1000
value: 44.75
- type: map_at_3
value: 39.87741666666666
- type: map_at_5
value: 41.86766666666667
- type: mrr_at_1
value: 38.06341666666667
- type: mrr_at_10
value: 47.839666666666666
- type: mrr_at_100
value: 48.644000000000005
- type: mrr_at_1000
value: 48.68566666666667
- type: mrr_at_3
value: 45.26358333333334
- type: mrr_at_5
value: 46.790000000000006
- type: ndcg_at_1
value: 38.06341666666667
- type: ndcg_at_10
value: 49.419333333333334
- type: ndcg_at_100
value: 54.50166666666667
- type: ndcg_at_1000
value: 56.161166666666674
- type: ndcg_at_3
value: 43.982416666666666
- type: ndcg_at_5
value: 46.638083333333334
- type: precision_at_1
value: 38.06341666666667
- type: precision_at_10
value: 8.70858333333333
- type: precision_at_100
value: 1.327
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.37816666666667
- type: precision_at_5
value: 14.516333333333334
- type: recall_at_1
value: 31.956083333333336
- type: recall_at_10
value: 62.69458333333334
- type: recall_at_100
value: 84.46433333333334
- type: recall_at_1000
value: 95.58449999999999
- type: recall_at_3
value: 47.52016666666666
- type: recall_at_5
value: 54.36066666666666
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.912
- type: map_at_10
value: 38.291
- type: map_at_100
value: 39.44
- type: map_at_1000
value: 39.528
- type: map_at_3
value: 35.638
- type: map_at_5
value: 37.218
- type: mrr_at_1
value: 32.822
- type: mrr_at_10
value: 41.661
- type: mrr_at_100
value: 42.546
- type: mrr_at_1000
value: 42.603
- type: mrr_at_3
value: 39.238
- type: mrr_at_5
value: 40.726
- type: ndcg_at_1
value: 32.822
- type: ndcg_at_10
value: 43.373
- type: ndcg_at_100
value: 48.638
- type: ndcg_at_1000
value: 50.654999999999994
- type: ndcg_at_3
value: 38.643
- type: ndcg_at_5
value: 41.126000000000005
- type: precision_at_1
value: 32.822
- type: precision_at_10
value: 6.8709999999999996
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 16.82
- type: precision_at_5
value: 11.718
- type: recall_at_1
value: 28.912
- type: recall_at_10
value: 55.376999999999995
- type: recall_at_100
value: 79.066
- type: recall_at_1000
value: 93.664
- type: recall_at_3
value: 42.569
- type: recall_at_5
value: 48.719
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.181
- type: map_at_10
value: 31.462
- type: map_at_100
value: 32.73
- type: map_at_1000
value: 32.848
- type: map_at_3
value: 28.57
- type: map_at_5
value: 30.182
- type: mrr_at_1
value: 27.185
- type: mrr_at_10
value: 35.846000000000004
- type: mrr_at_100
value: 36.811
- type: mrr_at_1000
value: 36.873
- type: mrr_at_3
value: 33.437
- type: mrr_at_5
value: 34.813
- type: ndcg_at_1
value: 27.185
- type: ndcg_at_10
value: 36.858000000000004
- type: ndcg_at_100
value: 42.501
- type: ndcg_at_1000
value: 44.945
- type: ndcg_at_3
value: 32.066
- type: ndcg_at_5
value: 34.29
- type: precision_at_1
value: 27.185
- type: precision_at_10
value: 6.752
- type: precision_at_100
value: 1.111
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 15.290000000000001
- type: precision_at_5
value: 11.004999999999999
- type: recall_at_1
value: 22.181
- type: recall_at_10
value: 48.513
- type: recall_at_100
value: 73.418
- type: recall_at_1000
value: 90.306
- type: recall_at_3
value: 35.003
- type: recall_at_5
value: 40.876000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.934999999999995
- type: map_at_10
value: 44.727
- type: map_at_100
value: 44.727
- type: map_at_1000
value: 44.727
- type: map_at_3
value: 40.918
- type: map_at_5
value: 42.961
- type: mrr_at_1
value: 39.646
- type: mrr_at_10
value: 48.898
- type: mrr_at_100
value: 48.898
- type: mrr_at_1000
value: 48.898
- type: mrr_at_3
value: 45.896
- type: mrr_at_5
value: 47.514
- type: ndcg_at_1
value: 39.646
- type: ndcg_at_10
value: 50.817
- type: ndcg_at_100
value: 50.803
- type: ndcg_at_1000
value: 50.803
- type: ndcg_at_3
value: 44.507999999999996
- type: ndcg_at_5
value: 47.259
- type: precision_at_1
value: 39.646
- type: precision_at_10
value: 8.759
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.08800000000000001
- type: precision_at_3
value: 20.274
- type: precision_at_5
value: 14.366000000000001
- type: recall_at_1
value: 33.934999999999995
- type: recall_at_10
value: 65.037
- type: recall_at_100
value: 65.037
- type: recall_at_1000
value: 65.037
- type: recall_at_3
value: 47.439
- type: recall_at_5
value: 54.567
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.058
- type: map_at_10
value: 43.137
- type: map_at_100
value: 43.137
- type: map_at_1000
value: 43.137
- type: map_at_3
value: 39.882
- type: map_at_5
value: 41.379
- type: mrr_at_1
value: 38.933
- type: mrr_at_10
value: 48.344
- type: mrr_at_100
value: 48.344
- type: mrr_at_1000
value: 48.344
- type: mrr_at_3
value: 45.652
- type: mrr_at_5
value: 46.877
- type: ndcg_at_1
value: 38.933
- type: ndcg_at_10
value: 49.964
- type: ndcg_at_100
value: 49.242000000000004
- type: ndcg_at_1000
value: 49.222
- type: ndcg_at_3
value: 44.605
- type: ndcg_at_5
value: 46.501999999999995
- type: precision_at_1
value: 38.933
- type: precision_at_10
value: 9.427000000000001
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 20.685000000000002
- type: precision_at_5
value: 14.585
- type: recall_at_1
value: 32.058
- type: recall_at_10
value: 63.074
- type: recall_at_100
value: 63.074
- type: recall_at_1000
value: 63.074
- type: recall_at_3
value: 47.509
- type: recall_at_5
value: 52.455
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.029000000000003
- type: map_at_10
value: 34.646
- type: map_at_100
value: 34.646
- type: map_at_1000
value: 34.646
- type: map_at_3
value: 31.456
- type: map_at_5
value: 33.138
- type: mrr_at_1
value: 28.281
- type: mrr_at_10
value: 36.905
- type: mrr_at_100
value: 36.905
- type: mrr_at_1000
value: 36.905
- type: mrr_at_3
value: 34.011
- type: mrr_at_5
value: 35.638
- type: ndcg_at_1
value: 28.281
- type: ndcg_at_10
value: 40.159
- type: ndcg_at_100
value: 40.159
- type: ndcg_at_1000
value: 40.159
- type: ndcg_at_3
value: 33.995
- type: ndcg_at_5
value: 36.836999999999996
- type: precision_at_1
value: 28.281
- type: precision_at_10
value: 6.358999999999999
- type: precision_at_100
value: 0.636
- type: precision_at_1000
value: 0.064
- type: precision_at_3
value: 14.233
- type: precision_at_5
value: 10.314
- type: recall_at_1
value: 26.029000000000003
- type: recall_at_10
value: 55.08
- type: recall_at_100
value: 55.08
- type: recall_at_1000
value: 55.08
- type: recall_at_3
value: 38.487
- type: recall_at_5
value: 45.308
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.842999999999998
- type: map_at_10
value: 22.101000000000003
- type: map_at_100
value: 24.319
- type: map_at_1000
value: 24.51
- type: map_at_3
value: 18.372
- type: map_at_5
value: 20.323
- type: mrr_at_1
value: 27.948
- type: mrr_at_10
value: 40.321
- type: mrr_at_100
value: 41.262
- type: mrr_at_1000
value: 41.297
- type: mrr_at_3
value: 36.558
- type: mrr_at_5
value: 38.824999999999996
- type: ndcg_at_1
value: 27.948
- type: ndcg_at_10
value: 30.906
- type: ndcg_at_100
value: 38.986
- type: ndcg_at_1000
value: 42.136
- type: ndcg_at_3
value: 24.911
- type: ndcg_at_5
value: 27.168999999999997
- type: precision_at_1
value: 27.948
- type: precision_at_10
value: 9.798
- type: precision_at_100
value: 1.8399999999999999
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 18.328
- type: precision_at_5
value: 14.502
- type: recall_at_1
value: 12.842999999999998
- type: recall_at_10
value: 37.245
- type: recall_at_100
value: 64.769
- type: recall_at_1000
value: 82.055
- type: recall_at_3
value: 23.159
- type: recall_at_5
value: 29.113
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.934000000000001
- type: map_at_10
value: 21.915000000000003
- type: map_at_100
value: 21.915000000000003
- type: map_at_1000
value: 21.915000000000003
- type: map_at_3
value: 14.623
- type: map_at_5
value: 17.841
- type: mrr_at_1
value: 71.25
- type: mrr_at_10
value: 78.994
- type: mrr_at_100
value: 78.994
- type: mrr_at_1000
value: 78.994
- type: mrr_at_3
value: 77.208
- type: mrr_at_5
value: 78.55799999999999
- type: ndcg_at_1
value: 60.62499999999999
- type: ndcg_at_10
value: 46.604
- type: ndcg_at_100
value: 35.653
- type: ndcg_at_1000
value: 35.531
- type: ndcg_at_3
value: 50.605
- type: ndcg_at_5
value: 48.730000000000004
- type: precision_at_1
value: 71.25
- type: precision_at_10
value: 37.75
- type: precision_at_100
value: 3.775
- type: precision_at_1000
value: 0.377
- type: precision_at_3
value: 54.417
- type: precision_at_5
value: 48.15
- type: recall_at_1
value: 8.934000000000001
- type: recall_at_10
value: 28.471000000000004
- type: recall_at_100
value: 28.471000000000004
- type: recall_at_1000
value: 28.471000000000004
- type: recall_at_3
value: 16.019
- type: recall_at_5
value: 21.410999999999998
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.81
- type: f1
value: 47.987573380720114
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.81899999999999
- type: map_at_10
value: 78.034
- type: map_at_100
value: 78.034
- type: map_at_1000
value: 78.034
- type: map_at_3
value: 76.43100000000001
- type: map_at_5
value: 77.515
- type: mrr_at_1
value: 71.542
- type: mrr_at_10
value: 81.638
- type: mrr_at_100
value: 81.638
- type: mrr_at_1000
value: 81.638
- type: mrr_at_3
value: 80.403
- type: mrr_at_5
value: 81.256
- type: ndcg_at_1
value: 71.542
- type: ndcg_at_10
value: 82.742
- type: ndcg_at_100
value: 82.741
- type: ndcg_at_1000
value: 82.741
- type: ndcg_at_3
value: 80.039
- type: ndcg_at_5
value: 81.695
- type: precision_at_1
value: 71.542
- type: precision_at_10
value: 10.387
- type: precision_at_100
value: 1.039
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 31.447999999999997
- type: precision_at_5
value: 19.91
- type: recall_at_1
value: 66.81899999999999
- type: recall_at_10
value: 93.372
- type: recall_at_100
value: 93.372
- type: recall_at_1000
value: 93.372
- type: recall_at_3
value: 86.33
- type: recall_at_5
value: 90.347
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.158
- type: map_at_10
value: 52.017
- type: map_at_100
value: 54.259
- type: map_at_1000
value: 54.367
- type: map_at_3
value: 45.738
- type: map_at_5
value: 49.283
- type: mrr_at_1
value: 57.87
- type: mrr_at_10
value: 66.215
- type: mrr_at_100
value: 66.735
- type: mrr_at_1000
value: 66.75
- type: mrr_at_3
value: 64.043
- type: mrr_at_5
value: 65.116
- type: ndcg_at_1
value: 57.87
- type: ndcg_at_10
value: 59.946999999999996
- type: ndcg_at_100
value: 66.31099999999999
- type: ndcg_at_1000
value: 67.75999999999999
- type: ndcg_at_3
value: 55.483000000000004
- type: ndcg_at_5
value: 56.891000000000005
- type: precision_at_1
value: 57.87
- type: precision_at_10
value: 16.497
- type: precision_at_100
value: 2.321
- type: precision_at_1000
value: 0.258
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.067999999999998
- type: recall_at_1
value: 31.158
- type: recall_at_10
value: 67.381
- type: recall_at_100
value: 89.464
- type: recall_at_1000
value: 97.989
- type: recall_at_3
value: 50.553000000000004
- type: recall_at_5
value: 57.824
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 42.073
- type: map_at_10
value: 72.418
- type: map_at_100
value: 73.175
- type: map_at_1000
value: 73.215
- type: map_at_3
value: 68.791
- type: map_at_5
value: 71.19
- type: mrr_at_1
value: 84.146
- type: mrr_at_10
value: 88.994
- type: mrr_at_100
value: 89.116
- type: mrr_at_1000
value: 89.12
- type: mrr_at_3
value: 88.373
- type: mrr_at_5
value: 88.82
- type: ndcg_at_1
value: 84.146
- type: ndcg_at_10
value: 79.404
- type: ndcg_at_100
value: 81.83200000000001
- type: ndcg_at_1000
value: 82.524
- type: ndcg_at_3
value: 74.595
- type: ndcg_at_5
value: 77.474
- type: precision_at_1
value: 84.146
- type: precision_at_10
value: 16.753999999999998
- type: precision_at_100
value: 1.8599999999999999
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 48.854
- type: precision_at_5
value: 31.579
- type: recall_at_1
value: 42.073
- type: recall_at_10
value: 83.768
- type: recall_at_100
value: 93.018
- type: recall_at_1000
value: 97.481
- type: recall_at_3
value: 73.282
- type: recall_at_5
value: 78.947
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.9968
- type: ap
value: 92.93892195862824
- type: f1
value: 94.99327998213761
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.698
- type: map_at_10
value: 34.585
- type: map_at_100
value: 35.782000000000004
- type: map_at_1000
value: 35.825
- type: map_at_3
value: 30.397999999999996
- type: map_at_5
value: 32.72
- type: mrr_at_1
value: 22.192
- type: mrr_at_10
value: 35.085
- type: mrr_at_100
value: 36.218
- type: mrr_at_1000
value: 36.256
- type: mrr_at_3
value: 30.986000000000004
- type: mrr_at_5
value: 33.268
- type: ndcg_at_1
value: 22.192
- type: ndcg_at_10
value: 41.957
- type: ndcg_at_100
value: 47.658
- type: ndcg_at_1000
value: 48.697
- type: ndcg_at_3
value: 33.433
- type: ndcg_at_5
value: 37.551
- type: precision_at_1
value: 22.192
- type: precision_at_10
value: 6.781
- type: precision_at_100
value: 0.963
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.365
- type: precision_at_5
value: 10.713000000000001
- type: recall_at_1
value: 21.698
- type: recall_at_10
value: 64.79
- type: recall_at_100
value: 91.071
- type: recall_at_1000
value: 98.883
- type: recall_at_3
value: 41.611
- type: recall_at_5
value: 51.459999999999994
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.15823073415413
- type: f1
value: 96.00362034963248
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.12722298221614
- type: f1
value: 70.46888967516227
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.77673167451245
- type: f1
value: 77.60202561132175
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.09145931405514
- type: f1
value: 81.7701921473406
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.52153488185864
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 36.80090398444147
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.807141746058605
- type: mrr
value: 32.85025611455029
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.920999999999999
- type: map_at_10
value: 16.049
- type: map_at_100
value: 16.049
- type: map_at_1000
value: 16.049
- type: map_at_3
value: 11.865
- type: map_at_5
value: 13.657
- type: mrr_at_1
value: 53.87
- type: mrr_at_10
value: 62.291
- type: mrr_at_100
value: 62.291
- type: mrr_at_1000
value: 62.291
- type: mrr_at_3
value: 60.681
- type: mrr_at_5
value: 61.61
- type: ndcg_at_1
value: 51.23799999999999
- type: ndcg_at_10
value: 40.892
- type: ndcg_at_100
value: 26.951999999999998
- type: ndcg_at_1000
value: 26.474999999999998
- type: ndcg_at_3
value: 46.821
- type: ndcg_at_5
value: 44.333
- type: precision_at_1
value: 53.251000000000005
- type: precision_at_10
value: 30.124000000000002
- type: precision_at_100
value: 3.012
- type: precision_at_1000
value: 0.301
- type: precision_at_3
value: 43.55
- type: precision_at_5
value: 38.266
- type: recall_at_1
value: 6.920999999999999
- type: recall_at_10
value: 20.852
- type: recall_at_100
value: 20.852
- type: recall_at_1000
value: 20.852
- type: recall_at_3
value: 13.628000000000002
- type: recall_at_5
value: 16.273
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 46.827999999999996
- type: map_at_10
value: 63.434000000000005
- type: map_at_100
value: 63.434000000000005
- type: map_at_1000
value: 63.434000000000005
- type: map_at_3
value: 59.794000000000004
- type: map_at_5
value: 62.08
- type: mrr_at_1
value: 52.288999999999994
- type: mrr_at_10
value: 65.95
- type: mrr_at_100
value: 65.95
- type: mrr_at_1000
value: 65.95
- type: mrr_at_3
value: 63.413
- type: mrr_at_5
value: 65.08
- type: ndcg_at_1
value: 52.288999999999994
- type: ndcg_at_10
value: 70.301
- type: ndcg_at_100
value: 70.301
- type: ndcg_at_1000
value: 70.301
- type: ndcg_at_3
value: 63.979
- type: ndcg_at_5
value: 67.582
- type: precision_at_1
value: 52.288999999999994
- type: precision_at_10
value: 10.576
- type: precision_at_100
value: 1.058
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 28.177000000000003
- type: precision_at_5
value: 19.073
- type: recall_at_1
value: 46.827999999999996
- type: recall_at_10
value: 88.236
- type: recall_at_100
value: 88.236
- type: recall_at_1000
value: 88.236
- type: recall_at_3
value: 72.371
- type: recall_at_5
value: 80.56
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.652
- type: map_at_10
value: 85.953
- type: map_at_100
value: 85.953
- type: map_at_1000
value: 85.953
- type: map_at_3
value: 83.05399999999999
- type: map_at_5
value: 84.89
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.473
- type: mrr_at_100
value: 88.473
- type: mrr_at_1000
value: 88.473
- type: mrr_at_3
value: 87.592
- type: mrr_at_5
value: 88.211
- type: ndcg_at_1
value: 82.44
- type: ndcg_at_10
value: 89.467
- type: ndcg_at_100
value: 89.33
- type: ndcg_at_1000
value: 89.33
- type: ndcg_at_3
value: 86.822
- type: ndcg_at_5
value: 88.307
- type: precision_at_1
value: 82.44
- type: precision_at_10
value: 13.616
- type: precision_at_100
value: 1.362
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 38.117000000000004
- type: precision_at_5
value: 25.05
- type: recall_at_1
value: 71.652
- type: recall_at_10
value: 96.224
- type: recall_at_100
value: 96.224
- type: recall_at_1000
value: 96.224
- type: recall_at_3
value: 88.571
- type: recall_at_5
value: 92.812
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 61.295010338050474
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 67.26380819328142
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.683
- type: map_at_10
value: 14.924999999999999
- type: map_at_100
value: 17.532
- type: map_at_1000
value: 17.875
- type: map_at_3
value: 10.392
- type: map_at_5
value: 12.592
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 39.951
- type: mrr_at_100
value: 41.025
- type: mrr_at_1000
value: 41.056
- type: mrr_at_3
value: 36.317
- type: mrr_at_5
value: 38.412
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.410999999999998
- type: ndcg_at_100
value: 33.79
- type: ndcg_at_1000
value: 39.035
- type: ndcg_at_3
value: 22.845
- type: ndcg_at_5
value: 20.080000000000002
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 12.790000000000001
- type: precision_at_100
value: 2.633
- type: precision_at_1000
value: 0.388
- type: precision_at_3
value: 21.367
- type: precision_at_5
value: 17.7
- type: recall_at_1
value: 5.683
- type: recall_at_10
value: 25.91
- type: recall_at_100
value: 53.443
- type: recall_at_1000
value: 78.73
- type: recall_at_3
value: 13.003
- type: recall_at_5
value: 17.932000000000002
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.677978681023
- type: cos_sim_spearman
value: 83.13093441058189
- type: euclidean_pearson
value: 83.35535759341572
- type: euclidean_spearman
value: 83.42583744219611
- type: manhattan_pearson
value: 83.2243124045889
- type: manhattan_spearman
value: 83.39801618652632
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.68960206569666
- type: cos_sim_spearman
value: 77.3368966488535
- type: euclidean_pearson
value: 77.62828980560303
- type: euclidean_spearman
value: 76.77951481444651
- type: manhattan_pearson
value: 77.88637240839041
- type: manhattan_spearman
value: 77.22157841466188
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.18745821650724
- type: cos_sim_spearman
value: 85.04423285574542
- type: euclidean_pearson
value: 85.46604816931023
- type: euclidean_spearman
value: 85.5230593932974
- type: manhattan_pearson
value: 85.57912805986261
- type: manhattan_spearman
value: 85.65955905111873
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.6715333300355
- type: cos_sim_spearman
value: 82.9058522514908
- type: euclidean_pearson
value: 83.9640357424214
- type: euclidean_spearman
value: 83.60415457472637
- type: manhattan_pearson
value: 84.05621005853469
- type: manhattan_spearman
value: 83.87077724707746
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.82422928098886
- type: cos_sim_spearman
value: 88.12660311894628
- type: euclidean_pearson
value: 87.50974805056555
- type: euclidean_spearman
value: 87.91957275596677
- type: manhattan_pearson
value: 87.74119404878883
- type: manhattan_spearman
value: 88.2808922165719
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.80605838552093
- type: cos_sim_spearman
value: 86.24123388765678
- type: euclidean_pearson
value: 85.32648347339814
- type: euclidean_spearman
value: 85.60046671950158
- type: manhattan_pearson
value: 85.53800168487811
- type: manhattan_spearman
value: 85.89542420480763
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.87540978988132
- type: cos_sim_spearman
value: 90.12715295099461
- type: euclidean_pearson
value: 91.61085993525275
- type: euclidean_spearman
value: 91.31835942311758
- type: manhattan_pearson
value: 91.57500202032934
- type: manhattan_spearman
value: 91.1790925526635
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.87136205329556
- type: cos_sim_spearman
value: 68.6253154635078
- type: euclidean_pearson
value: 68.91536015034222
- type: euclidean_spearman
value: 67.63744649352542
- type: manhattan_pearson
value: 69.2000713045275
- type: manhattan_spearman
value: 68.16002901587316
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.21849551039082
- type: cos_sim_spearman
value: 85.6392959372461
- type: euclidean_pearson
value: 85.92050852609488
- type: euclidean_spearman
value: 85.97205649009734
- type: manhattan_pearson
value: 86.1031154802254
- type: manhattan_spearman
value: 86.26791155517466
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.83953958636627
- type: mrr
value: 96.71167612344082
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 64.994
- type: map_at_10
value: 74.763
- type: map_at_100
value: 75.127
- type: map_at_1000
value: 75.143
- type: map_at_3
value: 71.824
- type: map_at_5
value: 73.71
- type: mrr_at_1
value: 68.333
- type: mrr_at_10
value: 75.749
- type: mrr_at_100
value: 75.922
- type: mrr_at_1000
value: 75.938
- type: mrr_at_3
value: 73.556
- type: mrr_at_5
value: 74.739
- type: ndcg_at_1
value: 68.333
- type: ndcg_at_10
value: 79.174
- type: ndcg_at_100
value: 80.41
- type: ndcg_at_1000
value: 80.804
- type: ndcg_at_3
value: 74.361
- type: ndcg_at_5
value: 76.861
- type: precision_at_1
value: 68.333
- type: precision_at_10
value: 10.333
- type: precision_at_100
value: 1.0999999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.778
- type: precision_at_5
value: 19.067
- type: recall_at_1
value: 64.994
- type: recall_at_10
value: 91.822
- type: recall_at_100
value: 97.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.878
- type: recall_at_5
value: 85.172
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72079207920792
- type: cos_sim_ap
value: 93.00265215525152
- type: cos_sim_f1
value: 85.06596306068602
- type: cos_sim_precision
value: 90.05586592178771
- type: cos_sim_recall
value: 80.60000000000001
- type: dot_accuracy
value: 99.66039603960397
- type: dot_ap
value: 91.22371407479089
- type: dot_f1
value: 82.34693877551021
- type: dot_precision
value: 84.0625
- type: dot_recall
value: 80.7
- type: euclidean_accuracy
value: 99.71881188118812
- type: euclidean_ap
value: 92.88449963304728
- type: euclidean_f1
value: 85.19480519480518
- type: euclidean_precision
value: 88.64864864864866
- type: euclidean_recall
value: 82.0
- type: manhattan_accuracy
value: 99.73267326732673
- type: manhattan_ap
value: 93.23055393056883
- type: manhattan_f1
value: 85.88957055214725
- type: manhattan_precision
value: 87.86610878661088
- type: manhattan_recall
value: 84.0
- type: max_accuracy
value: 99.73267326732673
- type: max_ap
value: 93.23055393056883
- type: max_f1
value: 85.88957055214725
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 77.3305735900358
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 41.32967136540674
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.95514866379359
- type: mrr
value: 56.95423245055598
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.783007208997144
- type: cos_sim_spearman
value: 30.373444721540533
- type: dot_pearson
value: 29.210604111143905
- type: dot_spearman
value: 29.98809758085659
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.234
- type: map_at_10
value: 1.894
- type: map_at_100
value: 1.894
- type: map_at_1000
value: 1.894
- type: map_at_3
value: 0.636
- type: map_at_5
value: 1.0
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.667
- type: mrr_at_100
value: 93.667
- type: mrr_at_1000
value: 93.667
- type: mrr_at_3
value: 93.667
- type: mrr_at_5
value: 93.667
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 74.798
- type: ndcg_at_100
value: 16.462
- type: ndcg_at_1000
value: 7.0889999999999995
- type: ndcg_at_3
value: 80.754
- type: ndcg_at_5
value: 77.319
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 78.0
- type: precision_at_100
value: 7.8
- type: precision_at_1000
value: 0.7799999999999999
- type: precision_at_3
value: 83.333
- type: precision_at_5
value: 80.80000000000001
- type: recall_at_1
value: 0.234
- type: recall_at_10
value: 2.093
- type: recall_at_100
value: 2.093
- type: recall_at_1000
value: 2.093
- type: recall_at_3
value: 0.662
- type: recall_at_5
value: 1.0739999999999998
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.703
- type: map_at_10
value: 10.866000000000001
- type: map_at_100
value: 10.866000000000001
- type: map_at_1000
value: 10.866000000000001
- type: map_at_3
value: 5.909
- type: map_at_5
value: 7.35
- type: mrr_at_1
value: 36.735
- type: mrr_at_10
value: 53.583000000000006
- type: mrr_at_100
value: 53.583000000000006
- type: mrr_at_1000
value: 53.583000000000006
- type: mrr_at_3
value: 49.32
- type: mrr_at_5
value: 51.769
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 27.926000000000002
- type: ndcg_at_100
value: 22.701
- type: ndcg_at_1000
value: 22.701
- type: ndcg_at_3
value: 32.073
- type: ndcg_at_5
value: 28.327999999999996
- type: precision_at_1
value: 36.735
- type: precision_at_10
value: 24.694
- type: precision_at_100
value: 2.469
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 31.973000000000003
- type: precision_at_5
value: 26.939
- type: recall_at_1
value: 2.703
- type: recall_at_10
value: 17.702
- type: recall_at_100
value: 17.702
- type: recall_at_1000
value: 17.702
- type: recall_at_3
value: 7.208
- type: recall_at_5
value: 9.748999999999999
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.79960000000001
- type: ap
value: 15.467565415565815
- type: f1
value: 55.28639823443618
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.7792869269949
- type: f1
value: 65.08597154774318
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 55.70352297774293
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.27561542588067
- type: cos_sim_ap
value: 81.08262141256193
- type: cos_sim_f1
value: 73.82341501361338
- type: cos_sim_precision
value: 72.5720112159062
- type: cos_sim_recall
value: 75.11873350923483
- type: dot_accuracy
value: 86.66030875603504
- type: dot_ap
value: 76.6052349228621
- type: dot_f1
value: 70.13897280966768
- type: dot_precision
value: 64.70457079152732
- type: dot_recall
value: 76.56992084432717
- type: euclidean_accuracy
value: 88.37098408535495
- type: euclidean_ap
value: 81.12515230092113
- type: euclidean_f1
value: 74.10338225909379
- type: euclidean_precision
value: 71.76761433868974
- type: euclidean_recall
value: 76.59630606860158
- type: manhattan_accuracy
value: 88.34118137926924
- type: manhattan_ap
value: 80.95751834536561
- type: manhattan_f1
value: 73.9119496855346
- type: manhattan_precision
value: 70.625
- type: manhattan_recall
value: 77.5197889182058
- type: max_accuracy
value: 88.37098408535495
- type: max_ap
value: 81.12515230092113
- type: max_f1
value: 74.10338225909379
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.79896767182831
- type: cos_sim_ap
value: 87.40071784061065
- type: cos_sim_f1
value: 79.87753144712087
- type: cos_sim_precision
value: 76.67304015296367
- type: cos_sim_recall
value: 83.3615645210964
- type: dot_accuracy
value: 88.95486474948578
- type: dot_ap
value: 86.00227979119943
- type: dot_f1
value: 78.54601474525914
- type: dot_precision
value: 75.00525394045535
- type: dot_recall
value: 82.43763473975977
- type: euclidean_accuracy
value: 89.7892653393876
- type: euclidean_ap
value: 87.42174706480819
- type: euclidean_f1
value: 80.07283321194465
- type: euclidean_precision
value: 75.96738529574351
- type: euclidean_recall
value: 84.6473668001232
- type: manhattan_accuracy
value: 89.8474793340319
- type: manhattan_ap
value: 87.47814292587448
- type: manhattan_f1
value: 80.15461150280949
- type: manhattan_precision
value: 74.88798234468
- type: manhattan_recall
value: 86.21804742839544
- type: max_accuracy
value: 89.8474793340319
- type: max_ap
value: 87.47814292587448
- type: max_f1
value: 80.15461150280949
---
# Model Summary
> GritLM is a generative representational instruction-tuned language model. It unifies text representation (embedding) and text generation in a single model, achieving state-of-the-art performance on both types of tasks.
- **Repository:** [ContextualAI/gritlm](https://github.com/ContextualAI/gritlm)
- **Paper:** https://arxiv.org/abs/2402.09906
- **Logs:** https://wandb.ai/muennighoff/gritlm/runs/0uui712t/overview
- **Script:** https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh
| Model | Description |
|-------|-------------|
| [GritLM 7B](https://hf.co/GritLM/GritLM-7B) | Mistral 7B finetuned using GRIT |
| [GritLM 8x7B](https://hf.co/GritLM/GritLM-8x7B) | Mixtral 8x7B finetuned using GRIT |
# Use
Model usage is documented [here](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference); a minimal sketch of the dual embedding/generation interface is shown below.
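The following is a minimal sketch, assuming the `gritlm` Python package from the repository above and its documented `GritLM` wrapper (`encode` for embeddings, `generate` for text). The linked inference docs remain the authoritative reference; the instruction string and example texts here are illustrative placeholders only.

```python
from gritlm import GritLM

# Load once; the same weights serve both embedding and generation.
model = GritLM("GritLM/GritLM-7B", torch_dtype="auto")

def gritlm_instruction(instruction: str) -> str:
    # Embedding prompts use the <|embed|> format; documents typically use no instruction.
    return "<|user|>\n" + instruction + "\n<|embed|>\n" if instruction else "<|embed|>\n"

# Embedding mode: encode queries with an instruction, documents without one.
queries = ["Which planet is known as the Red Planet?"]
documents = ["Mars, often called the Red Planet, is the fourth planet from the Sun."]
q_rep = model.encode(queries, instruction=gritlm_instruction("Retrieve the passage that answers the question"))
d_rep = model.encode(documents, instruction=gritlm_instruction(""))

# Generation mode: standard chat-template prompting on the same model.
messages = [{"role": "user", "content": "Summarize why Mars appears red."}]
encoded = model.tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
gen = model.generate(encoded.to(model.device), max_new_tokens=128, do_sample=False)
print(model.tokenizer.batch_decode(gen, skip_special_tokens=True)[0])
```

Cosine similarity between `q_rep` and `d_rep` can then serve as the retrieval score, in the same spirit as the MTEB retrieval results listed in the metadata above.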
# Citation
```bibtex
@misc{muennighoff2024generative,
title={Generative Representational Instruction Tuning},
author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela},
year={2024},
eprint={2402.09906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
BillSYZhang/gte-Qwen2-7B-instruct-Q4-mlx | BillSYZhang | sentence-similarity | [
"sentence-transformers",
"safetensors",
"qwen2",
"text-generation",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"mlx",
"mlx-my-repo",
"custom_code",
"base_model:Alibaba-NLP/gte-Qwen2-7B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | 2024-12-24T19:43:31 | 2024-12-24T19:43:49 | 97 | 0 | ---
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- mlx
- mlx-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
# BillSYZhang/gte-Qwen2-7B-instruct-Q4-mlx
The Model [BillSYZhang/gte-Qwen2-7B-instruct-Q4-mlx](https://huggingface.co/BillSYZhang/gte-Qwen2-7B-instruct-Q4-mlx) was converted to MLX format from [Alibaba-NLP/gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using mlx-lm version **0.20.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("BillSYZhang/gte-Qwen2-7B-instruct-Q4-mlx")
prompt = "hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
HiTZ/GoLLIE-13B | HiTZ | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"text-generation-inference",
"Information Extraction",
"IE",
"Named Entity Recogniton",
"Event Extraction",
"Relation Extraction",
"LLaMA",
"custom_code",
"en",
"dataset:ACE05",
"dataset:bc5cdr",
"dataset:conll2003",
"dataset:ncbi_disease",
"dataset:conll2012_ontonotesv5",
"dataset:rams",
"dataset:tacred",
"dataset:wnut_17",
"arxiv:2310.03668",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-29T23:55:28 | 2023-10-20T07:13:36 | 92 | 7 | ---
datasets:
- ACE05
- bc5cdr
- conll2003
- ncbi_disease
- conll2012_ontonotesv5
- rams
- tacred
- wnut_17
language:
- en
license: llama2
metrics:
- f1
pipeline_tag: text-generation
tags:
- code
- text-generation-inference
- Information Extraction
- IE
- Named Entity Recognition
- Event Extraction
- Relation Extraction
- LLaMA
---
<p align="center">
<br>
<img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/GoLLIE.png" style="height: 250px;">
<h2 align="center"><b>G</b>uideline f<b>o</b>llowing <b>L</b>arge <b>L</b>anguage Model for <b>I</b>nformation <b>E</b>xtraction</h2>
<br>
</p>
# Model Card for GoLLIE 13B
<p align="justify">
We present GoLLIE, a Large Language Model trained to follow annotation guidelines. GoLLIE outperforms previous approaches on zero-shot Information Extraction and allows the user to perform inference with annotation schemas defined on the fly. Unlike previous approaches, GoLLIE is able to follow detailed definitions and does not rely solely on the knowledge already encoded in the LLM.
- 💻 Code: [https://github.com/osainz59/CoLLIE/](https://github.com/hitz-zentroa/GoLLIE)
- 📒 Blog Post: [GoLLIE: Guideline-following Large Language Model for Information Extraction](https://hitz-zentroa.github.io/GoLLIE/)
- 📖 Paper: [GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction](https://arxiv.org/abs/2310.03668)
- 🐕 GoLLIE Collection on the 🤗HuggingFace Hub: [HiTZ/gollie](https://huggingface.co/collections/HiTZ/gollie-651bf19ee315e8a224aacc4f)
- 🚀 Example Jupyter Notebooks: [GoLLIE Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks)
</p>
<p align="center">
<img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/zero_shot_results.png">
</p>
### Model Description
- **Developed by:** [Oscar Sainz](https://osainz59.github.io/), [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Rodrigo Agerri](https://ragerri.github.io/), [Oier Lopez de Lacalle](https://oierldl.github.io/), [German Rigau](https://adimen.si.ehu.es/~rigau/) and [Eneko Agirre](https://eagirre.github.io/)
- **Institution:** [HiTZ Basque Center for Language Technology](http://www.hitz.eus/) - [Ixa](https://www.ixa.eus/node/2?language=en), [University of the Basque Country UPV/EHU](https://www.ehu.eus/en/en-home)
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** LLaMA2 License for the base and merged model. Apache 2.0 for pre-trained LoRA Adapters
- **Finetuned from model:** CODE-LLaMA2
## Schema definition and inference example
The labels are represented as Python classes, and the guidelines or instructions are introduced as docstrings. The model starts generating after the `result = [` line.
```Python
from dataclasses import dataclass
from typing import List

# NOTE: `Template` is provided by the GoLLIE codebase (see the repository linked above).

# Entity definitions
@dataclass
class Launcher(Template):
"""Refers to a vehicle designed primarily to transport payloads from the Earth's
surface to space. Launchers can carry various payloads, including satellites,
crewed spacecraft, and cargo, into various orbits or even beyond Earth's orbit.
They are usually multi-stage vehicles that use rocket engines for propulsion."""
mention: str
"""
The name of the launcher vehicle.
    Such as: "Saturn V", "Atlas V", "Soyuz", "Ariane 5"
"""
space_company: str # The company that operates the launcher. Such as: "Blue origin", "ESA", "Boeing", "ISRO", "Northrop Grumman", "Arianespace"
crew: List[str] # Names of the crew members boarding the Launcher. Such as: "Neil Armstrong", "Michael Collins", "Buzz Aldrin"

@dataclass
class Mission(Template):
"""Any planned or accomplished journey beyond Earth's atmosphere with specific objectives,
either crewed or uncrewed. It includes missions to satellites, the International
Space Station (ISS), other celestial bodies, and deep space."""
mention: str
"""
The name of the mission.
Such as: "Apollo 11", "Artemis", "Mercury"
"""
date: str # The start date of the mission
departure: str # The place from which the vehicle will be launched. Such as: "Florida", "Houston", "French Guiana"
destination: str # The place or planet to which the launcher will be sent. Such as "Moon", "low-orbit", "Saturn"

# This is the text to analyze
text = (
"The Ares 3 mission to Mars is scheduled for 2032. The Starship rocket build by SpaceX will take off from Boca Chica,"
"carrying the astronauts Max Rutherford, Elena Soto, and Jake Martinez."
)
# The annotation instances that take place in the text above are listed here
result = [
Mission(mention='Ares 3', date='2032', departure='Boca Chica', destination='Mars'),
Launcher(mention='Starship', space_company='SpaceX', crew=['Max Rutherford', 'Elena Soto', 'Jake Martinez'])
]
```
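The snippet below is a purely illustrative sketch of the post-processing side: since the model completes the text after `result = [`, one simple way to turn that completion back into Python objects is to evaluate it with the schema classes above in scope. It is not the official GoLLIE parser (see the notebooks linked below), and the `completion` string is a hand-written stand-in for a real model output.

```python
# Hypothetical post-processing sketch, not the official GoLLIE parser.
# It assumes the Launcher and Mission classes from the example above are defined.
completion = (
    "Mission(mention='Ares 3', date='2032', departure='Boca Chica', destination='Mars'), "
    "Launcher(mention='Starship', space_company='SpaceX', "
    "crew=['Max Rutherford', 'Elena Soto', 'Jake Martinez'])]"
)

# Re-attach the opening bracket and evaluate with only the schema classes visible.
schema_classes = {"Mission": Mission, "Launcher": Launcher}
annotations = eval("[" + completion, {"__builtins__": {}}, schema_classes)
for annotation in annotations:
    print(type(annotation).__name__, "->", annotation)
```

The official implementation handles malformed generations more robustly; this sketch is only meant to make the schema-to-objects round trip concrete.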
## How to Get Started with the Model
Please read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to get started with GoLLIE.
The best way to load the model is using our custom `load_model` function. However, you can also load it using the AutoModelForCausalLM class.
**Important**: Our flash attention implementation has small numerical differences compared to the attention implementation in Huggingface.
You must use the flag `trust_remote_code=True` or you will get inferior results. Flash attention requires an available CUDA GPU. Running GoLLIE
pre-trained models on a CPU is not supported. We plan to address this in future releases. First, install flash attention 2:
```bash
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
Then you can load the model using
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("HiTZ/GoLLIE-13B")
model = AutoModelForCausalLM.from_pretrained("HiTZ/GoLLIE-13B", trust_remote_code=True, torch_dtype=torch.bfloat16)
model.to("cuda")
```
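For completeness, here is a minimal generation sketch using the standard `transformers` API once the model and tokenizer are loaded as above. It assumes a `prompt` string that already contains the schema definitions, the input text and the trailing `result = [` trigger (as in the earlier example); the decoding parameters are illustrative choices, not the settings from the GoLLIE notebooks.

```python
# Minimal generation sketch; `prompt` is assumed to end with the "result = [" trigger.
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=False,                      # greedy decoding for deterministic extraction
        pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
    )

# Keep only the newly generated tokens, i.e. the text produced after the prompt.
completion = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(completion)
```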
Read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to learn how to easily define guidelines, generate model inputs and parse the output!
### Training Data
This is the list of tasks used for training and evaluating GoLLIE. However, as demonstrated in the 🚀 [Create Custom Task notebook](https://github.com/hitz-zentroa/GoLLIE/blob/main/notebooks/Create%20Custom%20Task.ipynb), GoLLIE can perform a wide range of unseen tasks.
For more info, read our [📖Paper](https://arxiv.org/abs/2310.03668).
<p align="center">
<img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/datasets.png">
</p>
## Evaluation
| Model | Supervised average F1 | Zero-shot average F1 | 🤗HuggingFace Hub |
|---|:---------------------:|:--------------------:|:---------------------------------------------------------:|
| GoLLIE-7B | 73.0 | 55.3 | [HiTZ/GoLLIE-7B](https://huggingface.co/HiTZ/GoLLIE-7B) |
| GoLLIE-13B | 73.9 | 56.0 | [HiTZ/GoLLIE-13B](https://huggingface.co/HiTZ/GoLLIE-13B) |
| GoLLIE-34B | **75.0** | **57.2** | [HiTZ/GoLLIE-34B](https://huggingface.co/HiTZ/GoLLIE-34B) |
## Environmental Impact
| Model | Hardware | FLOPs | Time (h) | CO<sup>2</sup>eq (kg) |
|----------------|-------------------|---------------------------|-------------------|-------------------------------------|
| GoLLIE 7B | 1xA100 | 11.9e<sup>18</sup> | 44.5 | 1.57 |
| GoLLIE 13B | 1xA100 | 22.7e<sup>18</sup> | 79.5 | 2.80 |
| GoLLIE 34B | 2xA100 | 55.8e<sup>18</sup> | 94.6 | 6.67 |
## Citation
```
@misc{sainz2023gollie,
title={GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction},
author={Oscar Sainz and Iker García-Ferrero and Rodrigo Agerri and Oier Lopez de Lacalle and German Rigau and Eneko Agirre},
year={2023},
eprint={2310.03668},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"RELATION_EXTRACTION",
"EVENT_EXTRACTION"
] | [
"BC5CDR",
"NCBI DISEASE"
] |
odunola/UAE-Large-VI | odunola | feature-extraction | [
"sentence-transformers",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"mteb",
"sentence_embedding",
"feature_extraction",
"transformers",
"transformers.js",
"en",
"arxiv:2309.12871",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-12-18T20:55:01 | 2023-12-18T20:58:50 | 92 | 0 | ---
language:
- en
library_name: sentence-transformers
license: apache-2.0
tags:
- mteb
- sentence_embedding
- feature_extraction
- transformers
- transformers.js
model-index:
- name: UAE-Large-V1
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.55223880597015
- type: ap
value: 38.264070815317794
- type: f1
value: 69.40977934769845
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.84267499999999
- type: ap
value: 89.57568507997713
- type: f1
value: 92.82590734337774
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.292
- type: f1
value: 47.90257816032778
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 42.105
- type: map_at_10
value: 58.181000000000004
- type: map_at_100
value: 58.653999999999996
- type: map_at_1000
value: 58.657000000000004
- type: map_at_3
value: 54.386
- type: map_at_5
value: 56.757999999999996
- type: mrr_at_1
value: 42.745
- type: mrr_at_10
value: 58.437
- type: mrr_at_100
value: 58.894999999999996
- type: mrr_at_1000
value: 58.897999999999996
- type: mrr_at_3
value: 54.635
- type: mrr_at_5
value: 56.99999999999999
- type: ndcg_at_1
value: 42.105
- type: ndcg_at_10
value: 66.14999999999999
- type: ndcg_at_100
value: 68.048
- type: ndcg_at_1000
value: 68.11399999999999
- type: ndcg_at_3
value: 58.477000000000004
- type: ndcg_at_5
value: 62.768
- type: precision_at_1
value: 42.105
- type: precision_at_10
value: 9.110999999999999
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 23.447000000000003
- type: precision_at_5
value: 16.159000000000002
- type: recall_at_1
value: 42.105
- type: recall_at_10
value: 91.11
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 70.341
- type: recall_at_5
value: 80.797
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 49.02580759154173
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.093601280163554
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.19590406875427
- type: mrr
value: 77.09547992788991
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.86678362843676
- type: cos_sim_spearman
value: 86.1423242570783
- type: euclidean_pearson
value: 85.98994198511751
- type: euclidean_spearman
value: 86.48209103503942
- type: manhattan_pearson
value: 85.6446436316182
- type: manhattan_spearman
value: 86.21039809734357
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.69155844155844
- type: f1
value: 87.68109381943547
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.37501687500394
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 37.23401405155885
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.232
- type: map_at_10
value: 41.404999999999994
- type: map_at_100
value: 42.896
- type: map_at_1000
value: 43.028
- type: map_at_3
value: 37.925
- type: map_at_5
value: 39.865
- type: mrr_at_1
value: 36.338
- type: mrr_at_10
value: 46.969
- type: mrr_at_100
value: 47.684
- type: mrr_at_1000
value: 47.731
- type: mrr_at_3
value: 44.063
- type: mrr_at_5
value: 45.908
- type: ndcg_at_1
value: 36.338
- type: ndcg_at_10
value: 47.887
- type: ndcg_at_100
value: 53.357
- type: ndcg_at_1000
value: 55.376999999999995
- type: ndcg_at_3
value: 42.588
- type: ndcg_at_5
value: 45.132
- type: precision_at_1
value: 36.338
- type: precision_at_10
value: 9.17
- type: precision_at_100
value: 1.4909999999999999
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 20.315
- type: precision_at_5
value: 14.793000000000001
- type: recall_at_1
value: 30.232
- type: recall_at_10
value: 60.67399999999999
- type: recall_at_100
value: 83.628
- type: recall_at_1000
value: 96.209
- type: recall_at_3
value: 45.48
- type: recall_at_5
value: 52.354
- type: map_at_1
value: 32.237
- type: map_at_10
value: 42.829
- type: map_at_100
value: 44.065
- type: map_at_1000
value: 44.199
- type: map_at_3
value: 39.885999999999996
- type: map_at_5
value: 41.55
- type: mrr_at_1
value: 40.064
- type: mrr_at_10
value: 48.611
- type: mrr_at_100
value: 49.245
- type: mrr_at_1000
value: 49.29
- type: mrr_at_3
value: 46.561
- type: mrr_at_5
value: 47.771
- type: ndcg_at_1
value: 40.064
- type: ndcg_at_10
value: 48.388
- type: ndcg_at_100
value: 52.666999999999994
- type: ndcg_at_1000
value: 54.67100000000001
- type: ndcg_at_3
value: 44.504
- type: ndcg_at_5
value: 46.303
- type: precision_at_1
value: 40.064
- type: precision_at_10
value: 9.051
- type: precision_at_100
value: 1.4500000000000002
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 21.444
- type: precision_at_5
value: 15.045
- type: recall_at_1
value: 32.237
- type: recall_at_10
value: 57.943999999999996
- type: recall_at_100
value: 75.98700000000001
- type: recall_at_1000
value: 88.453
- type: recall_at_3
value: 46.268
- type: recall_at_5
value: 51.459999999999994
- type: map_at_1
value: 38.797
- type: map_at_10
value: 51.263000000000005
- type: map_at_100
value: 52.333
- type: map_at_1000
value: 52.393
- type: map_at_3
value: 47.936
- type: map_at_5
value: 49.844
- type: mrr_at_1
value: 44.389
- type: mrr_at_10
value: 54.601
- type: mrr_at_100
value: 55.300000000000004
- type: mrr_at_1000
value: 55.333
- type: mrr_at_3
value: 52.068999999999996
- type: mrr_at_5
value: 53.627
- type: ndcg_at_1
value: 44.389
- type: ndcg_at_10
value: 57.193000000000005
- type: ndcg_at_100
value: 61.307
- type: ndcg_at_1000
value: 62.529
- type: ndcg_at_3
value: 51.607
- type: ndcg_at_5
value: 54.409
- type: precision_at_1
value: 44.389
- type: precision_at_10
value: 9.26
- type: precision_at_100
value: 1.222
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 23.03
- type: precision_at_5
value: 15.887
- type: recall_at_1
value: 38.797
- type: recall_at_10
value: 71.449
- type: recall_at_100
value: 88.881
- type: recall_at_1000
value: 97.52
- type: recall_at_3
value: 56.503
- type: recall_at_5
value: 63.392
- type: map_at_1
value: 27.291999999999998
- type: map_at_10
value: 35.65
- type: map_at_100
value: 36.689
- type: map_at_1000
value: 36.753
- type: map_at_3
value: 32.995000000000005
- type: map_at_5
value: 34.409
- type: mrr_at_1
value: 29.04
- type: mrr_at_10
value: 37.486000000000004
- type: mrr_at_100
value: 38.394
- type: mrr_at_1000
value: 38.445
- type: mrr_at_3
value: 35.028
- type: mrr_at_5
value: 36.305
- type: ndcg_at_1
value: 29.04
- type: ndcg_at_10
value: 40.613
- type: ndcg_at_100
value: 45.733000000000004
- type: ndcg_at_1000
value: 47.447
- type: ndcg_at_3
value: 35.339999999999996
- type: ndcg_at_5
value: 37.706
- type: precision_at_1
value: 29.04
- type: precision_at_10
value: 6.192
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 14.802000000000001
- type: precision_at_5
value: 10.305
- type: recall_at_1
value: 27.291999999999998
- type: recall_at_10
value: 54.25299999999999
- type: recall_at_100
value: 77.773
- type: recall_at_1000
value: 90.795
- type: recall_at_3
value: 39.731
- type: recall_at_5
value: 45.403999999999996
- type: map_at_1
value: 18.326
- type: map_at_10
value: 26.290999999999997
- type: map_at_100
value: 27.456999999999997
- type: map_at_1000
value: 27.583000000000002
- type: map_at_3
value: 23.578
- type: map_at_5
value: 25.113000000000003
- type: mrr_at_1
value: 22.637
- type: mrr_at_10
value: 31.139
- type: mrr_at_100
value: 32.074999999999996
- type: mrr_at_1000
value: 32.147
- type: mrr_at_3
value: 28.483000000000004
- type: mrr_at_5
value: 29.963
- type: ndcg_at_1
value: 22.637
- type: ndcg_at_10
value: 31.717000000000002
- type: ndcg_at_100
value: 37.201
- type: ndcg_at_1000
value: 40.088
- type: ndcg_at_3
value: 26.686
- type: ndcg_at_5
value: 29.076999999999998
- type: precision_at_1
value: 22.637
- type: precision_at_10
value: 5.7090000000000005
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.894
- type: precision_at_5
value: 9.328
- type: recall_at_1
value: 18.326
- type: recall_at_10
value: 43.824999999999996
- type: recall_at_100
value: 67.316
- type: recall_at_1000
value: 87.481
- type: recall_at_3
value: 29.866999999999997
- type: recall_at_5
value: 35.961999999999996
- type: map_at_1
value: 29.875
- type: map_at_10
value: 40.458
- type: map_at_100
value: 41.772
- type: map_at_1000
value: 41.882999999999996
- type: map_at_3
value: 37.086999999999996
- type: map_at_5
value: 39.153
- type: mrr_at_1
value: 36.381
- type: mrr_at_10
value: 46.190999999999995
- type: mrr_at_100
value: 46.983999999999995
- type: mrr_at_1000
value: 47.032000000000004
- type: mrr_at_3
value: 43.486999999999995
- type: mrr_at_5
value: 45.249
- type: ndcg_at_1
value: 36.381
- type: ndcg_at_10
value: 46.602
- type: ndcg_at_100
value: 51.885999999999996
- type: ndcg_at_1000
value: 53.895
- type: ndcg_at_3
value: 41.155
- type: ndcg_at_5
value: 44.182
- type: precision_at_1
value: 36.381
- type: precision_at_10
value: 8.402
- type: precision_at_100
value: 1.278
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 19.346
- type: precision_at_5
value: 14.09
- type: recall_at_1
value: 29.875
- type: recall_at_10
value: 59.065999999999995
- type: recall_at_100
value: 80.923
- type: recall_at_1000
value: 93.927
- type: recall_at_3
value: 44.462
- type: recall_at_5
value: 51.89
- type: map_at_1
value: 24.94
- type: map_at_10
value: 35.125
- type: map_at_100
value: 36.476
- type: map_at_1000
value: 36.579
- type: map_at_3
value: 31.840000000000003
- type: map_at_5
value: 33.647
- type: mrr_at_1
value: 30.936000000000003
- type: mrr_at_10
value: 40.637
- type: mrr_at_100
value: 41.471000000000004
- type: mrr_at_1000
value: 41.525
- type: mrr_at_3
value: 38.013999999999996
- type: mrr_at_5
value: 39.469
- type: ndcg_at_1
value: 30.936000000000003
- type: ndcg_at_10
value: 41.295
- type: ndcg_at_100
value: 46.92
- type: ndcg_at_1000
value: 49.183
- type: ndcg_at_3
value: 35.811
- type: ndcg_at_5
value: 38.306000000000004
- type: precision_at_1
value: 30.936000000000003
- type: precision_at_10
value: 7.728
- type: precision_at_100
value: 1.226
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 17.237
- type: precision_at_5
value: 12.42
- type: recall_at_1
value: 24.94
- type: recall_at_10
value: 54.235
- type: recall_at_100
value: 78.314
- type: recall_at_1000
value: 93.973
- type: recall_at_3
value: 38.925
- type: recall_at_5
value: 45.505
- type: map_at_1
value: 26.250833333333333
- type: map_at_10
value: 35.46875
- type: map_at_100
value: 36.667
- type: map_at_1000
value: 36.78025
- type: map_at_3
value: 32.56733333333334
- type: map_at_5
value: 34.20333333333333
- type: mrr_at_1
value: 30.8945
- type: mrr_at_10
value: 39.636833333333335
- type: mrr_at_100
value: 40.46508333333333
- type: mrr_at_1000
value: 40.521249999999995
- type: mrr_at_3
value: 37.140166666666666
- type: mrr_at_5
value: 38.60999999999999
- type: ndcg_at_1
value: 30.8945
- type: ndcg_at_10
value: 40.93441666666667
- type: ndcg_at_100
value: 46.062416666666664
- type: ndcg_at_1000
value: 48.28341666666667
- type: ndcg_at_3
value: 35.97575
- type: ndcg_at_5
value: 38.3785
- type: precision_at_1
value: 30.8945
- type: precision_at_10
value: 7.180250000000001
- type: precision_at_100
value: 1.1468333333333334
- type: precision_at_1000
value: 0.15283333333333332
- type: precision_at_3
value: 16.525583333333334
- type: precision_at_5
value: 11.798333333333332
- type: recall_at_1
value: 26.250833333333333
- type: recall_at_10
value: 52.96108333333333
- type: recall_at_100
value: 75.45908333333334
- type: recall_at_1000
value: 90.73924999999998
- type: recall_at_3
value: 39.25483333333333
- type: recall_at_5
value: 45.37950000000001
- type: map_at_1
value: 24.595
- type: map_at_10
value: 31.747999999999998
- type: map_at_100
value: 32.62
- type: map_at_1000
value: 32.713
- type: map_at_3
value: 29.48
- type: map_at_5
value: 30.635
- type: mrr_at_1
value: 27.607
- type: mrr_at_10
value: 34.449000000000005
- type: mrr_at_100
value: 35.182
- type: mrr_at_1000
value: 35.254000000000005
- type: mrr_at_3
value: 32.413
- type: mrr_at_5
value: 33.372
- type: ndcg_at_1
value: 27.607
- type: ndcg_at_10
value: 36.041000000000004
- type: ndcg_at_100
value: 40.514
- type: ndcg_at_1000
value: 42.851
- type: ndcg_at_3
value: 31.689
- type: ndcg_at_5
value: 33.479
- type: precision_at_1
value: 27.607
- type: precision_at_10
value: 5.66
- type: precision_at_100
value: 0.868
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 13.446
- type: precision_at_5
value: 9.264
- type: recall_at_1
value: 24.595
- type: recall_at_10
value: 46.79
- type: recall_at_100
value: 67.413
- type: recall_at_1000
value: 84.753
- type: recall_at_3
value: 34.644999999999996
- type: recall_at_5
value: 39.09
- type: map_at_1
value: 17.333000000000002
- type: map_at_10
value: 24.427
- type: map_at_100
value: 25.576
- type: map_at_1000
value: 25.692999999999998
- type: map_at_3
value: 22.002
- type: map_at_5
value: 23.249
- type: mrr_at_1
value: 20.716
- type: mrr_at_10
value: 28.072000000000003
- type: mrr_at_100
value: 29.067
- type: mrr_at_1000
value: 29.137
- type: mrr_at_3
value: 25.832
- type: mrr_at_5
value: 27.045
- type: ndcg_at_1
value: 20.716
- type: ndcg_at_10
value: 29.109
- type: ndcg_at_100
value: 34.797
- type: ndcg_at_1000
value: 37.503
- type: ndcg_at_3
value: 24.668
- type: ndcg_at_5
value: 26.552999999999997
- type: precision_at_1
value: 20.716
- type: precision_at_10
value: 5.351
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 11.584999999999999
- type: precision_at_5
value: 8.362
- type: recall_at_1
value: 17.333000000000002
- type: recall_at_10
value: 39.604
- type: recall_at_100
value: 65.525
- type: recall_at_1000
value: 84.651
- type: recall_at_3
value: 27.199
- type: recall_at_5
value: 32.019
- type: map_at_1
value: 26.342
- type: map_at_10
value: 35.349000000000004
- type: map_at_100
value: 36.443
- type: map_at_1000
value: 36.548
- type: map_at_3
value: 32.307
- type: map_at_5
value: 34.164
- type: mrr_at_1
value: 31.063000000000002
- type: mrr_at_10
value: 39.703
- type: mrr_at_100
value: 40.555
- type: mrr_at_1000
value: 40.614
- type: mrr_at_3
value: 37.141999999999996
- type: mrr_at_5
value: 38.812000000000005
- type: ndcg_at_1
value: 31.063000000000002
- type: ndcg_at_10
value: 40.873
- type: ndcg_at_100
value: 45.896
- type: ndcg_at_1000
value: 48.205999999999996
- type: ndcg_at_3
value: 35.522
- type: ndcg_at_5
value: 38.419
- type: precision_at_1
value: 31.063000000000002
- type: precision_at_10
value: 6.866
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 16.014
- type: precision_at_5
value: 11.604000000000001
- type: recall_at_1
value: 26.342
- type: recall_at_10
value: 53.40200000000001
- type: recall_at_100
value: 75.251
- type: recall_at_1000
value: 91.13799999999999
- type: recall_at_3
value: 39.103
- type: recall_at_5
value: 46.357
- type: map_at_1
value: 23.71
- type: map_at_10
value: 32.153999999999996
- type: map_at_100
value: 33.821
- type: map_at_1000
value: 34.034
- type: map_at_3
value: 29.376
- type: map_at_5
value: 30.878
- type: mrr_at_1
value: 28.458
- type: mrr_at_10
value: 36.775999999999996
- type: mrr_at_100
value: 37.804
- type: mrr_at_1000
value: 37.858999999999995
- type: mrr_at_3
value: 34.123999999999995
- type: mrr_at_5
value: 35.596
- type: ndcg_at_1
value: 28.458
- type: ndcg_at_10
value: 37.858999999999995
- type: ndcg_at_100
value: 44.194
- type: ndcg_at_1000
value: 46.744
- type: ndcg_at_3
value: 33.348
- type: ndcg_at_5
value: 35.448
- type: precision_at_1
value: 28.458
- type: precision_at_10
value: 7.4510000000000005
- type: precision_at_100
value: 1.5
- type: precision_at_1000
value: 0.23700000000000002
- type: precision_at_3
value: 15.809999999999999
- type: precision_at_5
value: 11.462
- type: recall_at_1
value: 23.71
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 77.134
- type: recall_at_1000
value: 93.001
- type: recall_at_3
value: 35.480000000000004
- type: recall_at_5
value: 41.19
- type: map_at_1
value: 21.331
- type: map_at_10
value: 28.926000000000002
- type: map_at_100
value: 29.855999999999998
- type: map_at_1000
value: 29.957
- type: map_at_3
value: 26.395999999999997
- type: map_at_5
value: 27.933000000000003
- type: mrr_at_1
value: 23.105
- type: mrr_at_10
value: 31.008000000000003
- type: mrr_at_100
value: 31.819999999999997
- type: mrr_at_1000
value: 31.887999999999998
- type: mrr_at_3
value: 28.466
- type: mrr_at_5
value: 30.203000000000003
- type: ndcg_at_1
value: 23.105
- type: ndcg_at_10
value: 33.635999999999996
- type: ndcg_at_100
value: 38.277
- type: ndcg_at_1000
value: 40.907
- type: ndcg_at_3
value: 28.791
- type: ndcg_at_5
value: 31.528
- type: precision_at_1
value: 23.105
- type: precision_at_10
value: 5.323
- type: precision_at_100
value: 0.815
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.384
- type: precision_at_5
value: 9.02
- type: recall_at_1
value: 21.331
- type: recall_at_10
value: 46.018
- type: recall_at_100
value: 67.364
- type: recall_at_1000
value: 86.97
- type: recall_at_3
value: 33.395
- type: recall_at_5
value: 39.931
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.011000000000003
- type: map_at_10
value: 28.816999999999997
- type: map_at_100
value: 30.761
- type: map_at_1000
value: 30.958000000000002
- type: map_at_3
value: 24.044999999999998
- type: map_at_5
value: 26.557
- type: mrr_at_1
value: 38.696999999999996
- type: mrr_at_10
value: 50.464
- type: mrr_at_100
value: 51.193999999999996
- type: mrr_at_1000
value: 51.219
- type: mrr_at_3
value: 47.339999999999996
- type: mrr_at_5
value: 49.346000000000004
- type: ndcg_at_1
value: 38.696999999999996
- type: ndcg_at_10
value: 38.53
- type: ndcg_at_100
value: 45.525
- type: ndcg_at_1000
value: 48.685
- type: ndcg_at_3
value: 32.282
- type: ndcg_at_5
value: 34.482
- type: precision_at_1
value: 38.696999999999996
- type: precision_at_10
value: 11.895999999999999
- type: precision_at_100
value: 1.95
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 24.038999999999998
- type: precision_at_5
value: 18.332
- type: recall_at_1
value: 17.011000000000003
- type: recall_at_10
value: 44.452999999999996
- type: recall_at_100
value: 68.223
- type: recall_at_1000
value: 85.653
- type: recall_at_3
value: 28.784
- type: recall_at_5
value: 35.66
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.516
- type: map_at_10
value: 21.439
- type: map_at_100
value: 31.517
- type: map_at_1000
value: 33.267
- type: map_at_3
value: 15.004999999999999
- type: map_at_5
value: 17.793999999999997
- type: mrr_at_1
value: 71.25
- type: mrr_at_10
value: 79.071
- type: mrr_at_100
value: 79.325
- type: mrr_at_1000
value: 79.33
- type: mrr_at_3
value: 77.708
- type: mrr_at_5
value: 78.546
- type: ndcg_at_1
value: 58.62500000000001
- type: ndcg_at_10
value: 44.889
- type: ndcg_at_100
value: 50.536
- type: ndcg_at_1000
value: 57.724
- type: ndcg_at_3
value: 49.32
- type: ndcg_at_5
value: 46.775
- type: precision_at_1
value: 71.25
- type: precision_at_10
value: 36.175000000000004
- type: precision_at_100
value: 11.940000000000001
- type: precision_at_1000
value: 2.178
- type: precision_at_3
value: 53.583000000000006
- type: precision_at_5
value: 45.550000000000004
- type: recall_at_1
value: 9.516
- type: recall_at_10
value: 27.028000000000002
- type: recall_at_100
value: 57.581
- type: recall_at_1000
value: 80.623
- type: recall_at_3
value: 16.313
- type: recall_at_5
value: 20.674
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.74999999999999
- type: f1
value: 46.46706502669774
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 77.266
- type: map_at_10
value: 84.89999999999999
- type: map_at_100
value: 85.109
- type: map_at_1000
value: 85.123
- type: map_at_3
value: 83.898
- type: map_at_5
value: 84.541
- type: mrr_at_1
value: 83.138
- type: mrr_at_10
value: 89.37
- type: mrr_at_100
value: 89.432
- type: mrr_at_1000
value: 89.43299999999999
- type: mrr_at_3
value: 88.836
- type: mrr_at_5
value: 89.21
- type: ndcg_at_1
value: 83.138
- type: ndcg_at_10
value: 88.244
- type: ndcg_at_100
value: 88.98700000000001
- type: ndcg_at_1000
value: 89.21900000000001
- type: ndcg_at_3
value: 86.825
- type: ndcg_at_5
value: 87.636
- type: precision_at_1
value: 83.138
- type: precision_at_10
value: 10.47
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.933
- type: precision_at_5
value: 20.36
- type: recall_at_1
value: 77.266
- type: recall_at_10
value: 94.063
- type: recall_at_100
value: 96.993
- type: recall_at_1000
value: 98.414
- type: recall_at_3
value: 90.228
- type: recall_at_5
value: 92.328
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.319
- type: map_at_10
value: 36.943
- type: map_at_100
value: 38.951
- type: map_at_1000
value: 39.114
- type: map_at_3
value: 32.82
- type: map_at_5
value: 34.945
- type: mrr_at_1
value: 44.135999999999996
- type: mrr_at_10
value: 53.071999999999996
- type: mrr_at_100
value: 53.87
- type: mrr_at_1000
value: 53.90200000000001
- type: mrr_at_3
value: 50.77199999999999
- type: mrr_at_5
value: 52.129999999999995
- type: ndcg_at_1
value: 44.135999999999996
- type: ndcg_at_10
value: 44.836
- type: ndcg_at_100
value: 51.754
- type: ndcg_at_1000
value: 54.36
- type: ndcg_at_3
value: 41.658
- type: ndcg_at_5
value: 42.354
- type: precision_at_1
value: 44.135999999999996
- type: precision_at_10
value: 12.284
- type: precision_at_100
value: 1.952
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 27.828999999999997
- type: precision_at_5
value: 20.093
- type: recall_at_1
value: 22.319
- type: recall_at_10
value: 51.528
- type: recall_at_100
value: 76.70700000000001
- type: recall_at_1000
value: 92.143
- type: recall_at_3
value: 38.641
- type: recall_at_5
value: 43.653999999999996
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.182
- type: map_at_10
value: 65.146
- type: map_at_100
value: 66.023
- type: map_at_1000
value: 66.078
- type: map_at_3
value: 61.617999999999995
- type: map_at_5
value: 63.82299999999999
- type: mrr_at_1
value: 80.365
- type: mrr_at_10
value: 85.79
- type: mrr_at_100
value: 85.963
- type: mrr_at_1000
value: 85.968
- type: mrr_at_3
value: 84.952
- type: mrr_at_5
value: 85.503
- type: ndcg_at_1
value: 80.365
- type: ndcg_at_10
value: 73.13499999999999
- type: ndcg_at_100
value: 76.133
- type: ndcg_at_1000
value: 77.151
- type: ndcg_at_3
value: 68.255
- type: ndcg_at_5
value: 70.978
- type: precision_at_1
value: 80.365
- type: precision_at_10
value: 15.359
- type: precision_at_100
value: 1.7690000000000001
- type: precision_at_1000
value: 0.19
- type: precision_at_3
value: 44.024
- type: precision_at_5
value: 28.555999999999997
- type: recall_at_1
value: 40.182
- type: recall_at_10
value: 76.793
- type: recall_at_100
value: 88.474
- type: recall_at_1000
value: 95.159
- type: recall_at_3
value: 66.036
- type: recall_at_5
value: 71.391
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 92.7796
- type: ap
value: 89.24883716810874
- type: f1
value: 92.7706903433313
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.016
- type: map_at_10
value: 34.408
- type: map_at_100
value: 35.592
- type: map_at_1000
value: 35.64
- type: map_at_3
value: 30.459999999999997
- type: map_at_5
value: 32.721000000000004
- type: mrr_at_1
value: 22.593
- type: mrr_at_10
value: 34.993
- type: mrr_at_100
value: 36.113
- type: mrr_at_1000
value: 36.156
- type: mrr_at_3
value: 31.101
- type: mrr_at_5
value: 33.364
- type: ndcg_at_1
value: 22.579
- type: ndcg_at_10
value: 41.404999999999994
- type: ndcg_at_100
value: 47.018
- type: ndcg_at_1000
value: 48.211999999999996
- type: ndcg_at_3
value: 33.389
- type: ndcg_at_5
value: 37.425000000000004
- type: precision_at_1
value: 22.579
- type: precision_at_10
value: 6.59
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.241000000000001
- type: precision_at_5
value: 10.59
- type: recall_at_1
value: 22.016
- type: recall_at_10
value: 62.927
- type: recall_at_100
value: 88.72
- type: recall_at_1000
value: 97.80799999999999
- type: recall_at_3
value: 41.229
- type: recall_at_5
value: 50.88
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.01732786137711
- type: f1
value: 93.76353126402202
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.91746466028272
- type: f1
value: 57.715651682646765
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.5030262273033
- type: f1
value: 74.6693629986121
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.74781439139207
- type: f1
value: 79.96684171018774
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.2156206892017
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.180539484816137
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.51125957874274
- type: mrr
value: 33.777037359249995
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.248
- type: map_at_10
value: 15.340000000000002
- type: map_at_100
value: 19.591
- type: map_at_1000
value: 21.187
- type: map_at_3
value: 11.329
- type: map_at_5
value: 13.209999999999999
- type: mrr_at_1
value: 47.678
- type: mrr_at_10
value: 57.493
- type: mrr_at_100
value: 58.038999999999994
- type: mrr_at_1000
value: 58.07
- type: mrr_at_3
value: 55.36600000000001
- type: mrr_at_5
value: 56.635999999999996
- type: ndcg_at_1
value: 46.129999999999995
- type: ndcg_at_10
value: 38.653999999999996
- type: ndcg_at_100
value: 36.288
- type: ndcg_at_1000
value: 44.765
- type: ndcg_at_3
value: 43.553
- type: ndcg_at_5
value: 41.317
- type: precision_at_1
value: 47.368
- type: precision_at_10
value: 28.669
- type: precision_at_100
value: 9.158
- type: precision_at_1000
value: 2.207
- type: precision_at_3
value: 40.97
- type: precision_at_5
value: 35.604
- type: recall_at_1
value: 7.248
- type: recall_at_10
value: 19.46
- type: recall_at_100
value: 37.214000000000006
- type: recall_at_1000
value: 67.64099999999999
- type: recall_at_3
value: 12.025
- type: recall_at_5
value: 15.443999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.595000000000002
- type: map_at_10
value: 47.815999999999995
- type: map_at_100
value: 48.811
- type: map_at_1000
value: 48.835
- type: map_at_3
value: 43.225
- type: map_at_5
value: 46.017
- type: mrr_at_1
value: 35.689
- type: mrr_at_10
value: 50.341
- type: mrr_at_100
value: 51.044999999999995
- type: mrr_at_1000
value: 51.062
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.918
- type: ndcg_at_1
value: 35.66
- type: ndcg_at_10
value: 55.859
- type: ndcg_at_100
value: 59.864
- type: ndcg_at_1000
value: 60.419999999999995
- type: ndcg_at_3
value: 47.371
- type: ndcg_at_5
value: 51.995000000000005
- type: precision_at_1
value: 35.66
- type: precision_at_10
value: 9.27
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 21.63
- type: precision_at_5
value: 15.655
- type: recall_at_1
value: 31.595000000000002
- type: recall_at_10
value: 77.704
- type: recall_at_100
value: 94.774
- type: recall_at_1000
value: 98.919
- type: recall_at_3
value: 56.052
- type: recall_at_5
value: 66.623
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.489
- type: map_at_10
value: 85.411
- type: map_at_100
value: 86.048
- type: map_at_1000
value: 86.064
- type: map_at_3
value: 82.587
- type: map_at_5
value: 84.339
- type: mrr_at_1
value: 82.28
- type: mrr_at_10
value: 88.27199999999999
- type: mrr_at_100
value: 88.362
- type: mrr_at_1000
value: 88.362
- type: mrr_at_3
value: 87.372
- type: mrr_at_5
value: 87.995
- type: ndcg_at_1
value: 82.27
- type: ndcg_at_10
value: 89.023
- type: ndcg_at_100
value: 90.191
- type: ndcg_at_1000
value: 90.266
- type: ndcg_at_3
value: 86.37
- type: ndcg_at_5
value: 87.804
- type: precision_at_1
value: 82.27
- type: precision_at_10
value: 13.469000000000001
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.797
- type: precision_at_5
value: 24.734
- type: recall_at_1
value: 71.489
- type: recall_at_10
value: 95.824
- type: recall_at_100
value: 99.70599999999999
- type: recall_at_1000
value: 99.979
- type: recall_at_3
value: 88.099
- type: recall_at_5
value: 92.285
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 60.52398807444541
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 65.34855891507871
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.188000000000001
- type: map_at_10
value: 13.987
- type: map_at_100
value: 16.438
- type: map_at_1000
value: 16.829
- type: map_at_3
value: 9.767000000000001
- type: map_at_5
value: 11.912
- type: mrr_at_1
value: 25.6
- type: mrr_at_10
value: 37.744
- type: mrr_at_100
value: 38.847
- type: mrr_at_1000
value: 38.894
- type: mrr_at_3
value: 34.166999999999994
- type: mrr_at_5
value: 36.207
- type: ndcg_at_1
value: 25.6
- type: ndcg_at_10
value: 22.980999999999998
- type: ndcg_at_100
value: 32.039
- type: ndcg_at_1000
value: 38.157000000000004
- type: ndcg_at_3
value: 21.567
- type: ndcg_at_5
value: 19.070999999999998
- type: precision_at_1
value: 25.6
- type: precision_at_10
value: 12.02
- type: precision_at_100
value: 2.5100000000000002
- type: precision_at_1000
value: 0.396
- type: precision_at_3
value: 20.333000000000002
- type: precision_at_5
value: 16.98
- type: recall_at_1
value: 5.188000000000001
- type: recall_at_10
value: 24.372
- type: recall_at_100
value: 50.934999999999995
- type: recall_at_1000
value: 80.477
- type: recall_at_3
value: 12.363
- type: recall_at_5
value: 17.203
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 87.24286275535398
- type: cos_sim_spearman
value: 82.62333770991818
- type: euclidean_pearson
value: 84.60353717637284
- type: euclidean_spearman
value: 82.32990108810047
- type: manhattan_pearson
value: 84.6089049738196
- type: manhattan_spearman
value: 82.33361785438936
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.87428858503165
- type: cos_sim_spearman
value: 79.09145886519929
- type: euclidean_pearson
value: 86.42669231664036
- type: euclidean_spearman
value: 80.03127375435449
- type: manhattan_pearson
value: 86.41330338305022
- type: manhattan_spearman
value: 80.02492538673368
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.67912277322645
- type: cos_sim_spearman
value: 89.6171319711762
- type: euclidean_pearson
value: 86.56571917398725
- type: euclidean_spearman
value: 87.71216907898948
- type: manhattan_pearson
value: 86.57459050182473
- type: manhattan_spearman
value: 87.71916648349993
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 86.71957379085862
- type: cos_sim_spearman
value: 85.01784075851465
- type: euclidean_pearson
value: 84.7407848472801
- type: euclidean_spearman
value: 84.61063091345538
- type: manhattan_pearson
value: 84.71494352494403
- type: manhattan_spearman
value: 84.58772077604254
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.40508326325175
- type: cos_sim_spearman
value: 89.50912897763186
- type: euclidean_pearson
value: 87.82349070086627
- type: euclidean_spearman
value: 88.44179162727521
- type: manhattan_pearson
value: 87.80181927025595
- type: manhattan_spearman
value: 88.43205129636243
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.35846741715478
- type: cos_sim_spearman
value: 86.61172476741842
- type: euclidean_pearson
value: 84.60123125491637
- type: euclidean_spearman
value: 85.3001948141827
- type: manhattan_pearson
value: 84.56231142658329
- type: manhattan_spearman
value: 85.23579900798813
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.94539129818824
- type: cos_sim_spearman
value: 88.99349064256742
- type: euclidean_pearson
value: 88.7142444640351
- type: euclidean_spearman
value: 88.34120813505011
- type: manhattan_pearson
value: 88.70363008238084
- type: manhattan_spearman
value: 88.31952816956954
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.29910260369893
- type: cos_sim_spearman
value: 68.79263346213466
- type: euclidean_pearson
value: 68.41627521422252
- type: euclidean_spearman
value: 66.61602587398579
- type: manhattan_pearson
value: 68.49402183447361
- type: manhattan_spearman
value: 66.80157792354453
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.43703906343708
- type: cos_sim_spearman
value: 89.06081805093662
- type: euclidean_pearson
value: 87.48311456299662
- type: euclidean_spearman
value: 88.07417597580013
- type: manhattan_pearson
value: 87.48202249768894
- type: manhattan_spearman
value: 88.04758031111642
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.49080620485203
- type: mrr
value: 96.19145378949301
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 59.317
- type: map_at_10
value: 69.296
- type: map_at_100
value: 69.738
- type: map_at_1000
value: 69.759
- type: map_at_3
value: 66.12599999999999
- type: map_at_5
value: 67.532
- type: mrr_at_1
value: 62
- type: mrr_at_10
value: 70.176
- type: mrr_at_100
value: 70.565
- type: mrr_at_1000
value: 70.583
- type: mrr_at_3
value: 67.833
- type: mrr_at_5
value: 68.93299999999999
- type: ndcg_at_1
value: 62
- type: ndcg_at_10
value: 74.069
- type: ndcg_at_100
value: 76.037
- type: ndcg_at_1000
value: 76.467
- type: ndcg_at_3
value: 68.628
- type: ndcg_at_5
value: 70.57600000000001
- type: precision_at_1
value: 62
- type: precision_at_10
value: 10
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.667
- type: precision_at_5
value: 17.4
- type: recall_at_1
value: 59.317
- type: recall_at_10
value: 87.822
- type: recall_at_100
value: 96.833
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 73.06099999999999
- type: recall_at_5
value: 77.928
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.88910891089108
- type: cos_sim_ap
value: 97.236958456951
- type: cos_sim_f1
value: 94.39999999999999
- type: cos_sim_precision
value: 94.39999999999999
- type: cos_sim_recall
value: 94.39999999999999
- type: dot_accuracy
value: 99.82574257425742
- type: dot_ap
value: 94.94344759441888
- type: dot_f1
value: 91.17352056168507
- type: dot_precision
value: 91.44869215291752
- type: dot_recall
value: 90.9
- type: euclidean_accuracy
value: 99.88415841584158
- type: euclidean_ap
value: 97.2044250782305
- type: euclidean_f1
value: 94.210786739238
- type: euclidean_precision
value: 93.24191968658178
- type: euclidean_recall
value: 95.19999999999999
- type: manhattan_accuracy
value: 99.88613861386139
- type: manhattan_ap
value: 97.20683205497689
- type: manhattan_f1
value: 94.2643391521197
- type: manhattan_precision
value: 94.02985074626866
- type: manhattan_recall
value: 94.5
- type: max_accuracy
value: 99.88910891089108
- type: max_ap
value: 97.236958456951
- type: max_f1
value: 94.39999999999999
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.53940781726187
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.71865011295108
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.3218674533331
- type: mrr
value: 56.28279910449028
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.723915667479673
- type: cos_sim_spearman
value: 32.029070449745234
- type: dot_pearson
value: 28.864944212481454
- type: dot_spearman
value: 27.939266999596725
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.231
- type: map_at_10
value: 1.949
- type: map_at_100
value: 10.023
- type: map_at_1000
value: 23.485
- type: map_at_3
value: 0.652
- type: map_at_5
value: 1.054
- type: mrr_at_1
value: 86
- type: mrr_at_10
value: 92.067
- type: mrr_at_100
value: 92.067
- type: mrr_at_1000
value: 92.067
- type: mrr_at_3
value: 91.667
- type: mrr_at_5
value: 92.067
- type: ndcg_at_1
value: 83
- type: ndcg_at_10
value: 76.32900000000001
- type: ndcg_at_100
value: 54.662
- type: ndcg_at_1000
value: 48.062
- type: ndcg_at_3
value: 81.827
- type: ndcg_at_5
value: 80.664
- type: precision_at_1
value: 86
- type: precision_at_10
value: 80
- type: precision_at_100
value: 55.48
- type: precision_at_1000
value: 20.938000000000002
- type: precision_at_3
value: 85.333
- type: precision_at_5
value: 84.39999999999999
- type: recall_at_1
value: 0.231
- type: recall_at_10
value: 2.158
- type: recall_at_100
value: 13.344000000000001
- type: recall_at_1000
value: 44.31
- type: recall_at_3
value: 0.6779999999999999
- type: recall_at_5
value: 1.13
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.524
- type: map_at_10
value: 10.183
- type: map_at_100
value: 16.625
- type: map_at_1000
value: 18.017
- type: map_at_3
value: 5.169
- type: map_at_5
value: 6.772
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 47.128
- type: mrr_at_100
value: 48.458
- type: mrr_at_1000
value: 48.473
- type: mrr_at_3
value: 44.897999999999996
- type: mrr_at_5
value: 45.306000000000004
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 24.928
- type: ndcg_at_100
value: 37.613
- type: ndcg_at_1000
value: 48.528
- type: ndcg_at_3
value: 28.829
- type: ndcg_at_5
value: 25.237
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 22.448999999999998
- type: precision_at_100
value: 8.02
- type: precision_at_1000
value: 1.537
- type: precision_at_3
value: 30.612000000000002
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.524
- type: recall_at_10
value: 16.38
- type: recall_at_100
value: 49.529
- type: recall_at_1000
value: 83.598
- type: recall_at_3
value: 6.411
- type: recall_at_5
value: 8.932
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.09020000000001
- type: ap
value: 14.451710060978993
- type: f1
value: 54.7874410609049
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.745331069609506
- type: f1
value: 60.08387848592697
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.71549485462037
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.39345532574357
- type: cos_sim_ap
value: 78.16796549696478
- type: cos_sim_f1
value: 71.27713276123171
- type: cos_sim_precision
value: 68.3115626511853
- type: cos_sim_recall
value: 74.51187335092348
- type: dot_accuracy
value: 85.12248912201228
- type: dot_ap
value: 69.26039256107077
- type: dot_f1
value: 65.04294321240867
- type: dot_precision
value: 63.251059586138126
- type: dot_recall
value: 66.93931398416886
- type: euclidean_accuracy
value: 87.07754664123503
- type: euclidean_ap
value: 77.7872176038945
- type: euclidean_f1
value: 70.85587801278899
- type: euclidean_precision
value: 66.3519115614924
- type: euclidean_recall
value: 76.01583113456465
- type: manhattan_accuracy
value: 87.07754664123503
- type: manhattan_ap
value: 77.7341400185556
- type: manhattan_f1
value: 70.80310880829015
- type: manhattan_precision
value: 69.54198473282443
- type: manhattan_recall
value: 72.1108179419525
- type: max_accuracy
value: 87.39345532574357
- type: max_ap
value: 78.16796549696478
- type: max_f1
value: 71.27713276123171
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.09457833663213
- type: cos_sim_ap
value: 86.33024314706873
- type: cos_sim_f1
value: 78.59623733719248
- type: cos_sim_precision
value: 74.13322413322413
- type: cos_sim_recall
value: 83.63104404065291
- type: dot_accuracy
value: 88.3086894089339
- type: dot_ap
value: 83.92225241805097
- type: dot_f1
value: 76.8721826377781
- type: dot_precision
value: 72.8168044077135
- type: dot_recall
value: 81.40591315060055
- type: euclidean_accuracy
value: 88.77052043311213
- type: euclidean_ap
value: 85.7410710218755
- type: euclidean_f1
value: 77.97705489398781
- type: euclidean_precision
value: 73.77713657598241
- type: euclidean_recall
value: 82.68401601478288
- type: manhattan_accuracy
value: 88.73753250281368
- type: manhattan_ap
value: 85.72867199072802
- type: manhattan_f1
value: 77.89774182922812
- type: manhattan_precision
value: 74.23787931635857
- type: manhattan_recall
value: 81.93717277486911
- type: max_accuracy
value: 89.09457833663213
- type: max_ap
value: 86.33024314706873
- type: max_f1
value: 78.59623733719248
---
# [Universal AnglE Embedding](https://github.com/SeanLee97/AnglE)
> Follow us on GitHub: https://github.com/SeanLee97/AnglE.
🔥 Our universal English sentence embedding `WhereIsAI/UAE-Large-V1` achieves **SOTA** on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) with an average score of 64.64!

# Usage
```bash
python -m pip install -U angle-emb
```
1) Non-Retrieval Tasks
```python
from angle_emb import AnglE

# load the model with CLS pooling and move it to the GPU
angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda()
# encode a single sentence into a numpy vector
vec = angle.encode('hello world', to_numpy=True)
print(vec)
# encode a batch of sentences
vecs = angle.encode(['hello world1', 'hello world2'], to_numpy=True)
print(vecs)
```
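The returned embeddings can then be compared directly, e.g. with cosine similarity for STS-style tasks. A minimal follow-up sketch, assuming `numpy` is installed; the similarity computation below is our illustration, not part of the `angle_emb` API:
```python
import numpy as np
from angle_emb import AnglE

angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda()
vecs = angle.encode(['hello world1', 'hello world2'], to_numpy=True)

# cosine similarity between the two sentence embeddings
a, b = vecs[0], vecs[1]
similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(similarity)
```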
2) Retrieval Tasks
For retrieval purposes, please use the prompt `Prompts.C`.
```python
from angle_emb import AnglE, Prompts

# load the model and attach the retrieval prompt Prompts.C
angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda()
angle.set_prompt(prompt=Prompts.C)
# with a prompt set, inputs are passed as dicts with a 'text' field
vec = angle.encode({'text': 'hello world'}, to_numpy=True)
print(vec)
vecs = angle.encode([{'text': 'hello world1'}, {'text': 'hello world2'}], to_numpy=True)
print(vecs)
```
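For retrieval, the encoded query and document vectors can be scored with cosine similarity to rank documents. A minimal sketch, assuming `numpy` is installed; the query and document strings are made up, the ranking code is our illustration rather than part of `angle_emb`, and, following the example above, the prompt set via `set_prompt` is applied to both the query and the documents:
```python
import numpy as np
from angle_emb import AnglE, Prompts

angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda()
angle.set_prompt(prompt=Prompts.C)

# hypothetical query and documents, for illustration only
query = {'text': 'what is the capital of France?'}
docs = [{'text': 'Paris is the capital and largest city of France.'},
        {'text': 'The Amazon is the largest rainforest on Earth.'}]

query_vec = np.asarray(angle.encode(query, to_numpy=True)).reshape(-1)
doc_vecs = np.asarray(angle.encode(docs, to_numpy=True))

# cosine similarity: higher score = more relevant document
scores = doc_vecs @ query_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
for doc, score in sorted(zip(docs, scores), key=lambda x: x[1], reverse=True):
    print(f'{score:.4f}  {doc["text"]}')
```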
# Citation
If you use our pre-trained models, you are welcome to support us by citing our work:
```
@article{li2023angle,
  title={AnglE-optimized Text Embeddings},
  author={Li, Xianming and Li, Jing},
  journal={arXiv preprint arXiv:2309.12871},
  year={2023}
}
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
zeta-alpha-ai/Zeta-Alpha-E5-Mistral | zeta-alpha-ai | feature-extraction | [
"sentence-transformers",
"safetensors",
"mistral",
"feature-extraction",
"mteb",
"transformers",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-30T14:27:48 | 2025-01-06T15:16:48 | 92 | 11 | ---
language:
- en
license: mit
tags:
- mteb
- transformers
- sentence-transformers
model-index:
- name: Zeta-Alpha-E5-Mistral
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.76119402985074
- type: ap
value: 39.97673988468886
- type: ap_weighted
value: 39.97673988468886
- type: f1
value: 71.23171737695898
- type: f1_weighted
value: 79.55230307558237
- type: main_score
value: 77.76119402985074
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.61810000000001
- type: ap
value: 94.99559013902017
- type: ap_weighted
value: 94.99559013902017
- type: f1
value: 96.61758649480731
- type: f1_weighted
value: 96.61758649480731
- type: main_score
value: 96.61810000000001
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 59.26199999999999
- type: f1
value: 56.32963321217333
- type: f1_weighted
value: 56.32963321217333
- type: main_score
value: 59.26199999999999
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 65.623
- type: map_at_1
value: 41.536
- type: map_at_10
value: 57.485
- type: map_at_100
value: 58.013000000000005
- type: map_at_1000
value: 58.013000000000005
- type: map_at_20
value: 57.957
- type: map_at_3
value: 53.284
- type: map_at_5
value: 55.837
- type: mrr_at_1
value: 42.17638691322902
- type: mrr_at_10
value: 57.7096175122492
- type: mrr_at_100
value: 58.23610809196743
- type: mrr_at_1000
value: 58.23673750573145
- type: mrr_at_20
value: 58.180348622747324
- type: mrr_at_3
value: 53.44950213371275
- type: mrr_at_5
value: 56.07396870554779
- type: nauc_map_at_1000_diff1
value: 14.098091070036958
- type: nauc_map_at_1000_max
value: -16.568377380844108
- type: nauc_map_at_1000_std
value: -22.105696179585834
- type: nauc_map_at_100_diff1
value: 14.096542453201625
- type: nauc_map_at_100_max
value: -16.57054590195526
- type: nauc_map_at_100_std
value: -22.1090324366772
- type: nauc_map_at_10_diff1
value: 13.840246695558884
- type: nauc_map_at_10_max
value: -16.52098795923224
- type: nauc_map_at_10_std
value: -22.074328710004032
- type: nauc_map_at_1_diff1
value: 17.117727049808984
- type: nauc_map_at_1_max
value: -18.587242049712614
- type: nauc_map_at_1_std
value: -22.454707653498595
- type: nauc_map_at_20_diff1
value: 14.068130846454585
- type: nauc_map_at_20_max
value: -16.53942858114966
- type: nauc_map_at_20_std
value: -22.10921004077996
- type: nauc_map_at_3_diff1
value: 14.596579595737097
- type: nauc_map_at_3_max
value: -15.62887067464894
- type: nauc_map_at_3_std
value: -22.09058102549274
- type: nauc_map_at_5_diff1
value: 13.798507062284514
- type: nauc_map_at_5_max
value: -16.36834850771522
- type: nauc_map_at_5_std
value: -21.984206595455134
- type: nauc_mrr_at_1000_diff1
value: 12.144909427474602
- type: nauc_mrr_at_1000_max
value: -17.048787138426324
- type: nauc_mrr_at_1000_std
value: -21.961966140898564
- type: nauc_mrr_at_100_diff1
value: 12.143403633612827
- type: nauc_mrr_at_100_max
value: -17.050945262411012
- type: nauc_mrr_at_100_std
value: -21.965305811191673
- type: nauc_mrr_at_10_diff1
value: 11.88548720648553
- type: nauc_mrr_at_10_max
value: -16.996705857584736
- type: nauc_mrr_at_10_std
value: -21.883645748542396
- type: nauc_mrr_at_1_diff1
value: 15.37682964765565
- type: nauc_mrr_at_1_max
value: -17.989361001169087
- type: nauc_mrr_at_1_std
value: -21.697830490637955
- type: nauc_mrr_at_20_diff1
value: 12.119044499779363
- type: nauc_mrr_at_20_max
value: -17.018675761117027
- type: nauc_mrr_at_20_std
value: -21.965554459307565
- type: nauc_mrr_at_3_diff1
value: 12.535001807278187
- type: nauc_mrr_at_3_max
value: -16.38816957172248
- type: nauc_mrr_at_3_std
value: -22.081293367465896
- type: nauc_mrr_at_5_diff1
value: 11.892111947679496
- type: nauc_mrr_at_5_max
value: -16.79709351116846
- type: nauc_mrr_at_5_std
value: -21.79512696140714
- type: nauc_ndcg_at_1000_diff1
value: 13.67006999549869
- type: nauc_ndcg_at_1000_max
value: -16.236125687432107
- type: nauc_ndcg_at_1000_std
value: -21.810131960233065
- type: nauc_ndcg_at_100_diff1
value: 13.637478389163462
- type: nauc_ndcg_at_100_max
value: -16.28219720987127
- type: nauc_ndcg_at_100_std
value: -21.880912370176876
- type: nauc_ndcg_at_10_diff1
value: 12.558591199280556
- type: nauc_ndcg_at_10_max
value: -15.952826009827106
- type: nauc_ndcg_at_10_std
value: -21.818643731025382
- type: nauc_ndcg_at_1_diff1
value: 17.117727049808984
- type: nauc_ndcg_at_1_max
value: -18.587242049712614
- type: nauc_ndcg_at_1_std
value: -22.454707653498595
- type: nauc_ndcg_at_20_diff1
value: 13.402986057386181
- type: nauc_ndcg_at_20_max
value: -16.072631062968746
- type: nauc_ndcg_at_20_std
value: -21.98468803430586
- type: nauc_ndcg_at_3_diff1
value: 14.059904782033348
- type: nauc_ndcg_at_3_max
value: -14.433190101994514
- type: nauc_ndcg_at_3_std
value: -21.990025270634135
- type: nauc_ndcg_at_5_diff1
value: 12.434165121057134
- type: nauc_ndcg_at_5_max
value: -15.650774158031522
- type: nauc_ndcg_at_5_std
value: -21.636716447934305
- type: nauc_precision_at_1000_diff1
value: 1.7151819945276745
- type: nauc_precision_at_1000_max
value: 20.85546049013785
- type: nauc_precision_at_1000_std
value: 77.3551884133584
- type: nauc_precision_at_100_diff1
value: -7.961881099019577
- type: nauc_precision_at_100_max
value: -1.8225484680865736
- type: nauc_precision_at_100_std
value: 35.484449109425384
- type: nauc_precision_at_10_diff1
value: 0.46638305609538855
- type: nauc_precision_at_10_max
value: -11.023993018739485
- type: nauc_precision_at_10_std
value: -19.111584616037852
- type: nauc_precision_at_1_diff1
value: 17.117727049808984
- type: nauc_precision_at_1_max
value: -18.587242049712614
- type: nauc_precision_at_1_std
value: -22.454707653498595
- type: nauc_precision_at_20_diff1
value: -1.0298881487766305
- type: nauc_precision_at_20_max
value: -4.548017977674335
- type: nauc_precision_at_20_std
value: -18.901496352112133
- type: nauc_precision_at_3_diff1
value: 12.350178962124566
- type: nauc_precision_at_3_max
value: -10.271126387937858
- type: nauc_precision_at_3_std
value: -21.655307623793433
- type: nauc_precision_at_5_diff1
value: 6.011571432832696
- type: nauc_precision_at_5_max
value: -12.478026665421389
- type: nauc_precision_at_5_std
value: -19.845124181363882
- type: nauc_recall_at_1000_diff1
value: 1.7151819945236155
- type: nauc_recall_at_1000_max
value: 20.855460490135933
- type: nauc_recall_at_1000_std
value: 77.35518841335626
- type: nauc_recall_at_100_diff1
value: -7.961881099020542
- type: nauc_recall_at_100_max
value: -1.8225484680932273
- type: nauc_recall_at_100_std
value: 35.48444910942399
- type: nauc_recall_at_10_diff1
value: 0.46638305609538805
- type: nauc_recall_at_10_max
value: -11.023993018739322
- type: nauc_recall_at_10_std
value: -19.11158461603798
- type: nauc_recall_at_1_diff1
value: 17.117727049808984
- type: nauc_recall_at_1_max
value: -18.587242049712614
- type: nauc_recall_at_1_std
value: -22.454707653498595
- type: nauc_recall_at_20_diff1
value: -1.029888148776229
- type: nauc_recall_at_20_max
value: -4.548017977673906
- type: nauc_recall_at_20_std
value: -18.901496352110804
- type: nauc_recall_at_3_diff1
value: 12.350178962124682
- type: nauc_recall_at_3_max
value: -10.271126387937805
- type: nauc_recall_at_3_std
value: -21.65530762379337
- type: nauc_recall_at_5_diff1
value: 6.0115714328326435
- type: nauc_recall_at_5_max
value: -12.478026665421405
- type: nauc_recall_at_5_std
value: -19.845124181363875
- type: ndcg_at_1
value: 41.536
- type: ndcg_at_10
value: 65.623
- type: ndcg_at_100
value: 67.63
- type: ndcg_at_1000
value: 67.64099999999999
- type: ndcg_at_20
value: 67.241
- type: ndcg_at_3
value: 57.048
- type: ndcg_at_5
value: 61.678999999999995
- type: precision_at_1
value: 41.536
- type: precision_at_10
value: 9.132
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.8759999999999994
- type: precision_at_3
value: 22.641
- type: precision_at_5
value: 15.845999999999998
- type: recall_at_1
value: 41.536
- type: recall_at_10
value: 91.323
- type: recall_at_100
value: 99.57300000000001
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 97.51100000000001
- type: recall_at_3
value: 67.923
- type: recall_at_5
value: 79.232
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 52.026635060007244
- type: v_measure
value: 52.026635060007244
- type: v_measure_std
value: 14.357137408692006
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 47.834914950269855
- type: v_measure
value: 47.834914950269855
- type: v_measure_std
value: 14.487028918517247
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 64.5808745066313
- type: map
value: 64.5808745066313
- type: mrr
value: 77.56540473991997
- type: nAUC_map_diff1
value: 23.168800779252464
- type: nAUC_map_max
value: 30.342203769599735
- type: nAUC_map_std
value: 22.562701982176833
- type: nAUC_mrr_diff1
value: 27.79261544540621
- type: nAUC_mrr_max
value: 43.302228243606045
- type: nAUC_mrr_std
value: 26.432985515912673
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 87.97778619889539
- type: cosine_spearman
value: 86.44233109293758
- type: euclidean_pearson
value: 86.6664224630525
- type: euclidean_spearman
value: 86.44233109293758
- type: main_score
value: 86.44233109293758
- type: manhattan_pearson
value: 86.75174487553707
- type: manhattan_spearman
value: 86.61402175201368
- type: pearson
value: 87.97778619889539
- type: spearman
value: 86.44233109293758
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 83.1103896103896
- type: f1
value: 82.2932953112279
- type: f1_weighted
value: 82.2932953112279
- type: main_score
value: 83.1103896103896
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 43.73746639290943
- type: v_measure
value: 43.73746639290943
- type: v_measure_std
value: 0.8808902310879784
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 40.73737737926463
- type: v_measure
value: 40.73737737926463
- type: v_measure_std
value: 0.6059982328960863
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 48.852333333333334
- type: ndcg_at_10
value: 48.852333333333334
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 42.047000000000004
- type: map_at_1
value: 18.269
- type: map_at_10
value: 31.691999999999997
- type: map_at_100
value: 33.841
- type: map_at_1000
value: 34.009
- type: map_at_20
value: 32.885999999999996
- type: map_at_3
value: 26.558999999999997
- type: map_at_5
value: 29.119
- type: mrr_at_1
value: 41.69381107491857
- type: mrr_at_10
value: 54.293185460937984
- type: mrr_at_100
value: 54.87161862744807
- type: mrr_at_1000
value: 54.88722882443645
- type: mrr_at_20
value: 54.685754380853844
- type: mrr_at_3
value: 51.172638436482146
- type: mrr_at_5
value: 53.12052117263853
- type: nauc_map_at_1000_diff1
value: 27.03807365621228
- type: nauc_map_at_1000_max
value: 40.31079671403445
- type: nauc_map_at_1000_std
value: 26.092423798773883
- type: nauc_map_at_100_diff1
value: 27.015360474734436
- type: nauc_map_at_100_max
value: 40.28408194597505
- type: nauc_map_at_100_std
value: 26.086029014261968
- type: nauc_map_at_10_diff1
value: 27.222731203652856
- type: nauc_map_at_10_max
value: 40.01781109904128
- type: nauc_map_at_10_std
value: 24.73681887890272
- type: nauc_map_at_1_diff1
value: 35.75107300356484
- type: nauc_map_at_1_max
value: 40.2201153742901
- type: nauc_map_at_1_std
value: 18.766249947929374
- type: nauc_map_at_20_diff1
value: 26.931804653893042
- type: nauc_map_at_20_max
value: 40.21587995014608
- type: nauc_map_at_20_std
value: 25.75452108695598
- type: nauc_map_at_3_diff1
value: 28.310387788680696
- type: nauc_map_at_3_max
value: 39.285866053656385
- type: nauc_map_at_3_std
value: 21.394962842915703
- type: nauc_map_at_5_diff1
value: 27.300839773785274
- type: nauc_map_at_5_max
value: 39.3888708340898
- type: nauc_map_at_5_std
value: 22.78299335246201
- type: nauc_mrr_at_1000_diff1
value: 26.569029993582287
- type: nauc_mrr_at_1000_max
value: 38.05698386072128
- type: nauc_mrr_at_1000_std
value: 27.12877875031529
- type: nauc_mrr_at_100_diff1
value: 26.56693868451222
- type: nauc_mrr_at_100_max
value: 38.06321319344823
- type: nauc_mrr_at_100_std
value: 27.14409997788537
- type: nauc_mrr_at_10_diff1
value: 26.52694223161396
- type: nauc_mrr_at_10_max
value: 38.120563154705
- type: nauc_mrr_at_10_std
value: 27.11337497751667
- type: nauc_mrr_at_1_diff1
value: 29.371725407886277
- type: nauc_mrr_at_1_max
value: 35.7850341702808
- type: nauc_mrr_at_1_std
value: 22.69810863765783
- type: nauc_mrr_at_20_diff1
value: 26.567897033309247
- type: nauc_mrr_at_20_max
value: 38.17484491649562
- type: nauc_mrr_at_20_std
value: 27.218678564296972
- type: nauc_mrr_at_3_diff1
value: 26.582727973322427
- type: nauc_mrr_at_3_max
value: 37.8745721692282
- type: nauc_mrr_at_3_std
value: 26.567749469034307
- type: nauc_mrr_at_5_diff1
value: 26.404958533442898
- type: nauc_mrr_at_5_max
value: 37.86090955141593
- type: nauc_mrr_at_5_std
value: 26.65816459603454
- type: nauc_ndcg_at_1000_diff1
value: 25.7228323702742
- type: nauc_ndcg_at_1000_max
value: 41.024272689913296
- type: nauc_ndcg_at_1000_std
value: 31.373617783353815
- type: nauc_ndcg_at_100_diff1
value: 25.467806967471812
- type: nauc_ndcg_at_100_max
value: 40.68595692225817
- type: nauc_ndcg_at_100_std
value: 31.327255356351774
- type: nauc_ndcg_at_10_diff1
value: 25.65771458118311
- type: nauc_ndcg_at_10_max
value: 40.2959313004829
- type: nauc_ndcg_at_10_std
value: 28.21103387387833
- type: nauc_ndcg_at_1_diff1
value: 29.371725407886277
- type: nauc_ndcg_at_1_max
value: 35.7850341702808
- type: nauc_ndcg_at_1_std
value: 22.69810863765783
- type: nauc_ndcg_at_20_diff1
value: 25.008107221444327
- type: nauc_ndcg_at_20_max
value: 40.613619354979626
- type: nauc_ndcg_at_20_std
value: 30.216191744111416
- type: nauc_ndcg_at_3_diff1
value: 25.85227194113396
- type: nauc_ndcg_at_3_max
value: 38.32492256264965
- type: nauc_ndcg_at_3_std
value: 23.735358525961033
- type: nauc_ndcg_at_5_diff1
value: 25.747409532466243
- type: nauc_ndcg_at_5_max
value: 39.4993084566524
- type: nauc_ndcg_at_5_std
value: 25.19771375383721
- type: nauc_precision_at_1000_diff1
value: -8.149028290279253
- type: nauc_precision_at_1000_max
value: -3.196086649201077
- type: nauc_precision_at_1000_std
value: 13.643701012139948
- type: nauc_precision_at_100_diff1
value: -1.892485292157653
- type: nauc_precision_at_100_max
value: 7.7434454354621245
- type: nauc_precision_at_100_std
value: 22.988854451791806
- type: nauc_precision_at_10_diff1
value: 6.150550804828545
- type: nauc_precision_at_10_max
value: 22.501131175285906
- type: nauc_precision_at_10_std
value: 27.39677272392596
- type: nauc_precision_at_1_diff1
value: 29.371725407886277
- type: nauc_precision_at_1_max
value: 35.7850341702808
- type: nauc_precision_at_1_std
value: 22.69810863765783
- type: nauc_precision_at_20_diff1
value: 2.283445965946842
- type: nauc_precision_at_20_max
value: 18.59466543059599
- type: nauc_precision_at_20_std
value: 29.0738299597803
- type: nauc_precision_at_3_diff1
value: 12.963867454979258
- type: nauc_precision_at_3_max
value: 30.449562657056333
- type: nauc_precision_at_3_std
value: 25.581976194336352
- type: nauc_precision_at_5_diff1
value: 8.512947940252289
- type: nauc_precision_at_5_max
value: 26.12425424420038
- type: nauc_precision_at_5_std
value: 24.877415885322808
- type: nauc_recall_at_1000_diff1
value: 17.151717317242028
- type: nauc_recall_at_1000_max
value: 40.67913325938115
- type: nauc_recall_at_1000_std
value: 49.54837910314142
- type: nauc_recall_at_100_diff1
value: 16.83432440063162
- type: nauc_recall_at_100_max
value: 34.46952489534257
- type: nauc_recall_at_100_std
value: 38.26853671426454
- type: nauc_recall_at_10_diff1
value: 19.50239551179883
- type: nauc_recall_at_10_max
value: 35.74261290262663
- type: nauc_recall_at_10_std
value: 28.630457514118163
- type: nauc_recall_at_1_diff1
value: 35.75107300356484
- type: nauc_recall_at_1_max
value: 40.2201153742901
- type: nauc_recall_at_1_std
value: 18.766249947929374
- type: nauc_recall_at_20_diff1
value: 16.723000685755707
- type: nauc_recall_at_20_max
value: 35.272383093342405
- type: nauc_recall_at_20_std
value: 32.934757635631335
- type: nauc_recall_at_3_diff1
value: 24.024160029526794
- type: nauc_recall_at_3_max
value: 38.07599046764463
- type: nauc_recall_at_3_std
value: 22.648443171847685
- type: nauc_recall_at_5_diff1
value: 21.588763686113676
- type: nauc_recall_at_5_max
value: 37.16237158404055
- type: nauc_recall_at_5_std
value: 24.45061830715902
- type: ndcg_at_1
value: 41.693999999999996
- type: ndcg_at_10
value: 42.047000000000004
- type: ndcg_at_100
value: 49.309
- type: ndcg_at_1000
value: 51.861999999999995
- type: ndcg_at_20
value: 44.982
- type: ndcg_at_3
value: 35.510000000000005
- type: ndcg_at_5
value: 37.529
- type: precision_at_1
value: 41.693999999999996
- type: precision_at_10
value: 13.114
- type: precision_at_100
value: 2.1069999999999998
- type: precision_at_1000
value: 0.259
- type: precision_at_20
value: 7.824000000000001
- type: precision_at_3
value: 26.796999999999997
- type: precision_at_5
value: 20.169
- type: recall_at_1
value: 18.269
- type: recall_at_10
value: 48.44
- type: recall_at_100
value: 72.909
- type: recall_at_1000
value: 86.79400000000001
- type: recall_at_20
value: 56.714
- type: recall_at_3
value: 31.85
- type: recall_at_5
value: 38.488
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 51.202000000000005
- type: map_at_1
value: 10.789
- type: map_at_10
value: 24.804000000000002
- type: map_at_100
value: 35.908
- type: map_at_1000
value: 37.97
- type: map_at_20
value: 29.236
- type: map_at_3
value: 17.055
- type: map_at_5
value: 20.342
- type: mrr_at_1
value: 78.0
- type: mrr_at_10
value: 83.82251984126987
- type: mrr_at_100
value: 84.00706659508124
- type: mrr_at_1000
value: 84.01172077534015
- type: mrr_at_20
value: 83.93946561479457
- type: mrr_at_3
value: 82.58333333333334
- type: mrr_at_5
value: 83.38333333333335
- type: nauc_map_at_1000_diff1
value: 21.975683577384412
- type: nauc_map_at_1000_max
value: 33.104767603973286
- type: nauc_map_at_1000_std
value: 19.507936661697688
- type: nauc_map_at_100_diff1
value: 23.19428288878281
- type: nauc_map_at_100_max
value: 32.47043490749479
- type: nauc_map_at_100_std
value: 16.611980248500473
- type: nauc_map_at_10_diff1
value: 23.314074413061576
- type: nauc_map_at_10_max
value: 18.52506648908812
- type: nauc_map_at_10_std
value: -9.718219448424597
- type: nauc_map_at_1_diff1
value: 27.402329635171146
- type: nauc_map_at_1_max
value: 5.898746976402726
- type: nauc_map_at_1_std
value: -26.703327281110212
- type: nauc_map_at_20_diff1
value: 23.613670514044472
- type: nauc_map_at_20_max
value: 25.008187375084763
- type: nauc_map_at_20_std
value: -0.05206367166066498
- type: nauc_map_at_3_diff1
value: 25.673374223753598
- type: nauc_map_at_3_max
value: 12.527419567406877
- type: nauc_map_at_3_std
value: -20.06963757181341
- type: nauc_map_at_5_diff1
value: 24.74002578400672
- type: nauc_map_at_5_max
value: 14.23437788867648
- type: nauc_map_at_5_std
value: -16.317803876665256
- type: nauc_mrr_at_1000_diff1
value: 53.26868100398232
- type: nauc_mrr_at_1000_max
value: 67.65740877772801
- type: nauc_mrr_at_1000_std
value: 39.43464159369656
- type: nauc_mrr_at_100_diff1
value: 53.25615896192901
- type: nauc_mrr_at_100_max
value: 67.64777514366169
- type: nauc_mrr_at_100_std
value: 39.410662043086006
- type: nauc_mrr_at_10_diff1
value: 52.94295111677663
- type: nauc_mrr_at_10_max
value: 67.5393005296393
- type: nauc_mrr_at_10_std
value: 39.31715440936177
- type: nauc_mrr_at_1_diff1
value: 57.148073445541826
- type: nauc_mrr_at_1_max
value: 65.78742986970832
- type: nauc_mrr_at_1_std
value: 34.198659989799246
- type: nauc_mrr_at_20_diff1
value: 53.223501273361265
- type: nauc_mrr_at_20_max
value: 67.59762197314753
- type: nauc_mrr_at_20_std
value: 39.359614957729356
- type: nauc_mrr_at_3_diff1
value: 53.619283112717184
- type: nauc_mrr_at_3_max
value: 68.72067268448458
- type: nauc_mrr_at_3_std
value: 40.53052904925793
- type: nauc_mrr_at_5_diff1
value: 52.86133436375577
- type: nauc_mrr_at_5_max
value: 67.94415973414303
- type: nauc_mrr_at_5_std
value: 40.09087346298919
- type: nauc_ndcg_at_1000_diff1
value: 31.008961737330505
- type: nauc_ndcg_at_1000_max
value: 49.39127418414386
- type: nauc_ndcg_at_1000_std
value: 37.509639671229806
- type: nauc_ndcg_at_100_diff1
value: 32.50484024525448
- type: nauc_ndcg_at_100_max
value: 46.300662423725605
- type: nauc_ndcg_at_100_std
value: 28.488771981297162
- type: nauc_ndcg_at_10_diff1
value: 27.911614286994414
- type: nauc_ndcg_at_10_max
value: 44.70909339082426
- type: nauc_ndcg_at_10_std
value: 25.644980583529154
- type: nauc_ndcg_at_1_diff1
value: 51.27342509891256
- type: nauc_ndcg_at_1_max
value: 54.75803307782269
- type: nauc_ndcg_at_1_std
value: 27.4853058050954
- type: nauc_ndcg_at_20_diff1
value: 30.29885920192407
- type: nauc_ndcg_at_20_max
value: 43.45207612356715
- type: nauc_ndcg_at_20_std
value: 21.59751863312113
- type: nauc_ndcg_at_3_diff1
value: 31.251071625533843
- type: nauc_ndcg_at_3_max
value: 48.45180697571009
- type: nauc_ndcg_at_3_std
value: 32.70662167853583
- type: nauc_ndcg_at_5_diff1
value: 26.175090671223877
- type: nauc_ndcg_at_5_max
value: 45.2723355712432
- type: nauc_ndcg_at_5_std
value: 31.461916393793
- type: nauc_precision_at_1000_diff1
value: -23.926082132378777
- type: nauc_precision_at_1000_max
value: -9.350346667573811
- type: nauc_precision_at_1000_std
value: 11.578726421051043
- type: nauc_precision_at_100_diff1
value: -7.468660956171794
- type: nauc_precision_at_100_max
value: 19.470414434634723
- type: nauc_precision_at_100_std
value: 43.86244545951367
- type: nauc_precision_at_10_diff1
value: -2.090265656696684
- type: nauc_precision_at_10_max
value: 30.778228684745386
- type: nauc_precision_at_10_std
value: 44.882546930240984
- type: nauc_precision_at_1_diff1
value: 57.148073445541826
- type: nauc_precision_at_1_max
value: 65.78742986970832
- type: nauc_precision_at_1_std
value: 34.198659989799246
- type: nauc_precision_at_20_diff1
value: -3.075798118380347
- type: nauc_precision_at_20_max
value: 29.52951501638172
- type: nauc_precision_at_20_std
value: 47.266521222769676
- type: nauc_precision_at_3_diff1
value: 11.892419680356198
- type: nauc_precision_at_3_max
value: 43.146413741651415
- type: nauc_precision_at_3_std
value: 45.2312022756118
- type: nauc_precision_at_5_diff1
value: 0.5765950918056327
- type: nauc_precision_at_5_max
value: 34.22132902314228
- type: nauc_precision_at_5_std
value: 44.78272426908718
- type: nauc_recall_at_1000_diff1
value: 24.99872069707702
- type: nauc_recall_at_1000_max
value: 42.17319464089324
- type: nauc_recall_at_1000_std
value: 47.42376725148043
- type: nauc_recall_at_100_diff1
value: 24.62929408109356
- type: nauc_recall_at_100_max
value: 32.373805304406844
- type: nauc_recall_at_100_std
value: 21.48682342485071
- type: nauc_recall_at_10_diff1
value: 20.62337020665992
- type: nauc_recall_at_10_max
value: 14.125308316827395
- type: nauc_recall_at_10_std
value: -13.565096294162865
- type: nauc_recall_at_1_diff1
value: 27.402329635171146
- type: nauc_recall_at_1_max
value: 5.898746976402726
- type: nauc_recall_at_1_std
value: -26.703327281110212
- type: nauc_recall_at_20_diff1
value: 22.169766882731277
- type: nauc_recall_at_20_max
value: 20.588762488556828
- type: nauc_recall_at_20_std
value: -4.530608772737279
- type: nauc_recall_at_3_diff1
value: 22.48622374174161
- type: nauc_recall_at_3_max
value: 10.470407080375304
- type: nauc_recall_at_3_std
value: -20.777479868757286
- type: nauc_recall_at_5_diff1
value: 21.28438252298866
- type: nauc_recall_at_5_max
value: 10.424120660451583
- type: nauc_recall_at_5_std
value: -17.912853638432384
- type: ndcg_at_1
value: 65.375
- type: ndcg_at_10
value: 51.202000000000005
- type: ndcg_at_100
value: 56.12200000000001
- type: ndcg_at_1000
value: 63.306
- type: ndcg_at_20
value: 50.442
- type: ndcg_at_3
value: 56.437000000000005
- type: ndcg_at_5
value: 53.861000000000004
- type: precision_at_1
value: 78.0
- type: precision_at_10
value: 41.075
- type: precision_at_100
value: 13.032
- type: precision_at_1000
value: 2.516
- type: precision_at_20
value: 31.4
- type: precision_at_3
value: 59.833000000000006
- type: precision_at_5
value: 51.9
- type: recall_at_1
value: 10.789
- type: recall_at_10
value: 30.059
- type: recall_at_100
value: 61.817
- type: recall_at_1000
value: 84.672
- type: recall_at_20
value: 39.135
- type: recall_at_3
value: 18.017
- type: recall_at_5
value: 22.492
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 57.715
- type: f1
value: 51.85468544437296
- type: f1_weighted
value: 58.73946069844862
- type: main_score
value: 57.715
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 92.438
- type: map_at_1
value: 82.678
- type: map_at_10
value: 89.90899999999999
- type: map_at_100
value: 90.09899999999999
- type: map_at_1000
value: 90.11
- type: map_at_20
value: 90.026
- type: map_at_3
value: 89.034
- type: map_at_5
value: 89.619
- type: mrr_at_1
value: 89.04890489048904
- type: mrr_at_10
value: 93.53417484605598
- type: mrr_at_100
value: 93.56969053658798
- type: mrr_at_1000
value: 93.56979354808294
- type: mrr_at_20
value: 93.56100677804474
- type: mrr_at_3
value: 93.25682568256818
- type: mrr_at_5
value: 93.46909690969086
- type: nauc_map_at_1000_diff1
value: 50.19087206783256
- type: nauc_map_at_1000_max
value: 26.223996443425424
- type: nauc_map_at_1000_std
value: -8.531486546405336
- type: nauc_map_at_100_diff1
value: 50.12601833237827
- type: nauc_map_at_100_max
value: 26.205753684531942
- type: nauc_map_at_100_std
value: -8.502300882475792
- type: nauc_map_at_10_diff1
value: 49.48962186883297
- type: nauc_map_at_10_max
value: 25.849578028607546
- type: nauc_map_at_10_std
value: -8.58622126027856
- type: nauc_map_at_1_diff1
value: 56.88016472114475
- type: nauc_map_at_1_max
value: 24.671479435457048
- type: nauc_map_at_1_std
value: -11.980878470985619
- type: nauc_map_at_20_diff1
value: 49.813384246326905
- type: nauc_map_at_20_max
value: 25.96508257517373
- type: nauc_map_at_20_std
value: -8.568670117647939
- type: nauc_map_at_3_diff1
value: 49.087764097890165
- type: nauc_map_at_3_max
value: 25.65938258554376
- type: nauc_map_at_3_std
value: -8.859093431924775
- type: nauc_map_at_5_diff1
value: 49.08166208415013
- type: nauc_map_at_5_max
value: 25.696246071825684
- type: nauc_map_at_5_std
value: -8.431713254517472
- type: nauc_mrr_at_1000_diff1
value: 73.35484368612293
- type: nauc_mrr_at_1000_max
value: 35.657386688053336
- type: nauc_mrr_at_1000_std
value: -18.09172713569766
- type: nauc_mrr_at_100_diff1
value: 73.35508125874483
- type: nauc_mrr_at_100_max
value: 35.65842743437027
- type: nauc_mrr_at_100_std
value: -18.08981699366641
- type: nauc_mrr_at_10_diff1
value: 73.29004337552368
- type: nauc_mrr_at_10_max
value: 35.882001444609216
- type: nauc_mrr_at_10_std
value: -18.05339396879553
- type: nauc_mrr_at_1_diff1
value: 74.48742882702338
- type: nauc_mrr_at_1_max
value: 31.49138530538466
- type: nauc_mrr_at_1_std
value: -19.510294856397955
- type: nauc_mrr_at_20_diff1
value: 73.3388656330962
- type: nauc_mrr_at_20_max
value: 35.706948273788505
- type: nauc_mrr_at_20_std
value: -18.140154123750992
- type: nauc_mrr_at_3_diff1
value: 73.22698350499
- type: nauc_mrr_at_3_max
value: 36.4855373316516
- type: nauc_mrr_at_3_std
value: -17.719256990311198
- type: nauc_mrr_at_5_diff1
value: 73.24460108538948
- type: nauc_mrr_at_5_max
value: 36.322370705490634
- type: nauc_mrr_at_5_std
value: -17.636279233457984
- type: nauc_ndcg_at_1000_diff1
value: 53.674109881592756
- type: nauc_ndcg_at_1000_max
value: 28.767387846727487
- type: nauc_ndcg_at_1000_std
value: -8.858681782014946
- type: nauc_ndcg_at_100_diff1
value: 52.33608078847966
- type: nauc_ndcg_at_100_max
value: 28.511414384159877
- type: nauc_ndcg_at_100_std
value: -8.085385430073922
- type: nauc_ndcg_at_10_diff1
value: 49.712295545440774
- type: nauc_ndcg_at_10_max
value: 27.5674225152019
- type: nauc_ndcg_at_10_std
value: -8.244677630275376
- type: nauc_ndcg_at_1_diff1
value: 74.48742882702338
- type: nauc_ndcg_at_1_max
value: 31.49138530538466
- type: nauc_ndcg_at_1_std
value: -19.510294856397955
- type: nauc_ndcg_at_20_diff1
value: 50.61628846813059
- type: nauc_ndcg_at_20_max
value: 27.53989784238201
- type: nauc_ndcg_at_20_std
value: -8.373695482986479
- type: nauc_ndcg_at_3_diff1
value: 51.295860863016884
- type: nauc_ndcg_at_3_max
value: 28.99776689198307
- type: nauc_ndcg_at_3_std
value: -8.878181909861983
- type: nauc_ndcg_at_5_diff1
value: 49.619081645734504
- type: nauc_ndcg_at_5_max
value: 28.11109235395876
- type: nauc_ndcg_at_5_std
value: -7.722157727171728
- type: nauc_precision_at_1000_diff1
value: -9.298540465937485
- type: nauc_precision_at_1000_max
value: -1.3157308795912563
- type: nauc_precision_at_1000_std
value: 1.897355386264135
- type: nauc_precision_at_100_diff1
value: -12.246672190804334
- type: nauc_precision_at_100_max
value: -0.9687067276412682
- type: nauc_precision_at_100_std
value: 4.56074518564851
- type: nauc_precision_at_10_diff1
value: -15.533411370200923
- type: nauc_precision_at_10_max
value: -2.191843047666222
- type: nauc_precision_at_10_std
value: 3.6723841478730748
- type: nauc_precision_at_1_diff1
value: 74.48742882702338
- type: nauc_precision_at_1_max
value: 31.49138530538466
- type: nauc_precision_at_1_std
value: -19.510294856397955
- type: nauc_precision_at_20_diff1
value: -15.290364061922347
- type: nauc_precision_at_20_max
value: -2.921722171191804
- type: nauc_precision_at_20_std
value: 4.08482465973661
- type: nauc_precision_at_3_diff1
value: -8.208906597107383
- type: nauc_precision_at_3_max
value: 2.9796478961627284
- type: nauc_precision_at_3_std
value: 0.34366033602604895
- type: nauc_precision_at_5_diff1
value: -14.42241522747573
- type: nauc_precision_at_5_max
value: -0.5633890785935999
- type: nauc_precision_at_5_std
value: 3.7064496791809836
- type: nauc_recall_at_1000_diff1
value: -0.5673198466803553
- type: nauc_recall_at_1000_max
value: 21.92110385096128
- type: nauc_recall_at_1000_std
value: 54.421987386115475
- type: nauc_recall_at_100_diff1
value: -0.6512704079314391
- type: nauc_recall_at_100_max
value: 22.38252665262688
- type: nauc_recall_at_100_std
value: 36.50750378730013
- type: nauc_recall_at_10_diff1
value: 11.308848658347774
- type: nauc_recall_at_10_max
value: 21.077700181656738
- type: nauc_recall_at_10_std
value: 8.321338697504787
- type: nauc_recall_at_1_diff1
value: 56.88016472114475
- type: nauc_recall_at_1_max
value: 24.671479435457048
- type: nauc_recall_at_1_std
value: -11.980878470985619
- type: nauc_recall_at_20_diff1
value: 5.8415071379210755
- type: nauc_recall_at_20_max
value: 16.97886837481554
- type: nauc_recall_at_20_std
value: 12.529145693495494
- type: nauc_recall_at_3_diff1
value: 27.396913234035086
- type: nauc_recall_at_3_max
value: 24.897648442357994
- type: nauc_recall_at_3_std
value: 0.8528297027573939
- type: nauc_recall_at_5_diff1
value: 18.295838017557397
- type: nauc_recall_at_5_max
value: 24.077879268127823
- type: nauc_recall_at_5_std
value: 7.099403908855888
- type: ndcg_at_1
value: 89.049
- type: ndcg_at_10
value: 92.438
- type: ndcg_at_100
value: 93.016
- type: ndcg_at_1000
value: 93.17699999999999
- type: ndcg_at_20
value: 92.713
- type: ndcg_at_3
value: 91.40599999999999
- type: ndcg_at_5
value: 92.026
- type: precision_at_1
value: 89.049
- type: precision_at_10
value: 10.917
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.117
- type: precision_at_20
value: 5.56
- type: precision_at_3
value: 34.598
- type: precision_at_5
value: 21.323
- type: recall_at_1
value: 82.678
- type: recall_at_10
value: 96.465
- type: recall_at_100
value: 98.571
- type: recall_at_1000
value: 99.496
- type: recall_at_20
value: 97.342
- type: recall_at_3
value: 93.696
- type: recall_at_5
value: 95.324
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 58.781000000000006
- type: map_at_1
value: 31.107000000000003
- type: map_at_10
value: 50.955
- type: map_at_100
value: 53.177
- type: map_at_1000
value: 53.291
- type: map_at_20
value: 52.271
- type: map_at_3
value: 44.762
- type: map_at_5
value: 48.379
- type: mrr_at_1
value: 58.48765432098766
- type: mrr_at_10
value: 66.11429551244368
- type: mrr_at_100
value: 66.68929754431386
- type: mrr_at_1000
value: 66.71304995006113
- type: mrr_at_20
value: 66.46821550477237
- type: mrr_at_3
value: 64.22325102880657
- type: mrr_at_5
value: 65.41152263374484
- type: nauc_map_at_1000_diff1
value: 45.490146083064445
- type: nauc_map_at_1000_max
value: 33.573139354617126
- type: nauc_map_at_1000_std
value: -14.07140489937541
- type: nauc_map_at_100_diff1
value: 45.48828357408913
- type: nauc_map_at_100_max
value: 33.51907944260763
- type: nauc_map_at_100_std
value: -14.059609883903152
- type: nauc_map_at_10_diff1
value: 45.70748844526757
- type: nauc_map_at_10_max
value: 31.667587503334587
- type: nauc_map_at_10_std
value: -15.076948336390855
- type: nauc_map_at_1_diff1
value: 51.42775649850064
- type: nauc_map_at_1_max
value: 16.56862308325116
- type: nauc_map_at_1_std
value: -14.684731980257675
- type: nauc_map_at_20_diff1
value: 45.754998522906284
- type: nauc_map_at_20_max
value: 33.03759060247343
- type: nauc_map_at_20_std
value: -14.750787459968736
- type: nauc_map_at_3_diff1
value: 46.45241223088609
- type: nauc_map_at_3_max
value: 26.607789112226467
- type: nauc_map_at_3_std
value: -14.997049792585598
- type: nauc_map_at_5_diff1
value: 45.87702900983919
- type: nauc_map_at_5_max
value: 30.076255479914348
- type: nauc_map_at_5_std
value: -15.062787509367553
- type: nauc_mrr_at_1000_diff1
value: 55.64889336097758
- type: nauc_mrr_at_1000_max
value: 48.57022261913911
- type: nauc_mrr_at_1000_std
value: -12.428435474800143
- type: nauc_mrr_at_100_diff1
value: 55.62957328562593
- type: nauc_mrr_at_100_max
value: 48.56575267775789
- type: nauc_mrr_at_100_std
value: -12.415226616847987
- type: nauc_mrr_at_10_diff1
value: 55.5931002027865
- type: nauc_mrr_at_10_max
value: 48.428200063552374
- type: nauc_mrr_at_10_std
value: -12.590361961152267
- type: nauc_mrr_at_1_diff1
value: 59.470635489729105
- type: nauc_mrr_at_1_max
value: 49.66866699872627
- type: nauc_mrr_at_1_std
value: -13.590112604913607
- type: nauc_mrr_at_20_diff1
value: 55.60145155716686
- type: nauc_mrr_at_20_max
value: 48.58677663675733
- type: nauc_mrr_at_20_std
value: -12.454093344399036
- type: nauc_mrr_at_3_diff1
value: 55.76657118158415
- type: nauc_mrr_at_3_max
value: 48.88547787372198
- type: nauc_mrr_at_3_std
value: -13.299744066289124
- type: nauc_mrr_at_5_diff1
value: 55.55217612731964
- type: nauc_mrr_at_5_max
value: 48.56957852769844
- type: nauc_mrr_at_5_std
value: -12.876904435466624
- type: nauc_ndcg_at_1000_diff1
value: 47.2645656074121
- type: nauc_ndcg_at_1000_max
value: 39.95808937564202
- type: nauc_ndcg_at_1000_std
value: -11.366829207572232
- type: nauc_ndcg_at_100_diff1
value: 46.89043419464991
- type: nauc_ndcg_at_100_max
value: 39.00034359981605
- type: nauc_ndcg_at_100_std
value: -10.697277437129921
- type: nauc_ndcg_at_10_diff1
value: 47.07625032910763
- type: nauc_ndcg_at_10_max
value: 35.51275239983428
- type: nauc_ndcg_at_10_std
value: -13.965305287946128
- type: nauc_ndcg_at_1_diff1
value: 59.470635489729105
- type: nauc_ndcg_at_1_max
value: 49.66866699872627
- type: nauc_ndcg_at_1_std
value: -13.590112604913607
- type: nauc_ndcg_at_20_diff1
value: 47.44262917418296
- type: nauc_ndcg_at_20_max
value: 37.6804112715633
- type: nauc_ndcg_at_20_std
value: -13.174880813005297
- type: nauc_ndcg_at_3_diff1
value: 44.56982475937759
- type: nauc_ndcg_at_3_max
value: 37.96424549723314
- type: nauc_ndcg_at_3_std
value: -13.657607148249964
- type: nauc_ndcg_at_5_diff1
value: 45.427291740214024
- type: nauc_ndcg_at_5_max
value: 35.42232275517991
- type: nauc_ndcg_at_5_std
value: -14.510048307634808
- type: nauc_precision_at_1000_diff1
value: -16.58479747595096
- type: nauc_precision_at_1000_max
value: 27.22386867486023
- type: nauc_precision_at_1000_std
value: 9.41210384044254
- type: nauc_precision_at_100_diff1
value: -11.640382840009572
- type: nauc_precision_at_100_max
value: 30.20752947841474
- type: nauc_precision_at_100_std
value: 10.72773947232612
- type: nauc_precision_at_10_diff1
value: 3.2540578244055594
- type: nauc_precision_at_10_max
value: 35.80515547017638
- type: nauc_precision_at_10_std
value: 0.299517152086918
- type: nauc_precision_at_1_diff1
value: 59.470635489729105
- type: nauc_precision_at_1_max
value: 49.66866699872627
- type: nauc_precision_at_1_std
value: -13.590112604913607
- type: nauc_precision_at_20_diff1
value: -1.8627219860435185
- type: nauc_precision_at_20_max
value: 35.9181314633325
- type: nauc_precision_at_20_std
value: 4.491869749000042
- type: nauc_precision_at_3_diff1
value: 17.94168903901189
- type: nauc_precision_at_3_max
value: 41.67388438464254
- type: nauc_precision_at_3_std
value: -5.38615084998387
- type: nauc_precision_at_5_diff1
value: 9.312012525324068
- type: nauc_precision_at_5_max
value: 39.52463080415461
- type: nauc_precision_at_5_std
value: -2.615286156278468
- type: nauc_recall_at_1000_diff1
value: 37.0960616996064
- type: nauc_recall_at_1000_max
value: 46.91967503624078
- type: nauc_recall_at_1000_std
value: 36.70723581015844
- type: nauc_recall_at_100_diff1
value: 32.54497560045993
- type: nauc_recall_at_100_max
value: 26.846226776082734
- type: nauc_recall_at_100_std
value: 9.257918182671672
- type: nauc_recall_at_10_diff1
value: 40.05619869408745
- type: nauc_recall_at_10_max
value: 25.504319960057014
- type: nauc_recall_at_10_std
value: -12.57012842016253
- type: nauc_recall_at_1_diff1
value: 51.42775649850064
- type: nauc_recall_at_1_max
value: 16.56862308325116
- type: nauc_recall_at_1_std
value: -14.684731980257675
- type: nauc_recall_at_20_diff1
value: 39.34128607816815
- type: nauc_recall_at_20_max
value: 28.31147877410395
- type: nauc_recall_at_20_std
value: -10.295180225906224
- type: nauc_recall_at_3_diff1
value: 41.31333745355922
- type: nauc_recall_at_3_max
value: 22.642649370921276
- type: nauc_recall_at_3_std
value: -14.44811859378254
- type: nauc_recall_at_5_diff1
value: 39.91795256714951
- type: nauc_recall_at_5_max
value: 24.396817798634245
- type: nauc_recall_at_5_std
value: -13.696077909471175
- type: ndcg_at_1
value: 58.48799999999999
- type: ndcg_at_10
value: 58.781000000000006
- type: ndcg_at_100
value: 65.212
- type: ndcg_at_1000
value: 66.85900000000001
- type: ndcg_at_20
value: 61.529999999999994
- type: ndcg_at_3
value: 54.864000000000004
- type: ndcg_at_5
value: 56.223
- type: precision_at_1
value: 58.48799999999999
- type: precision_at_10
value: 16.111
- type: precision_at_100
value: 2.298
- type: precision_at_1000
value: 0.257
- type: precision_at_20
value: 9.306000000000001
- type: precision_at_3
value: 36.317
- type: precision_at_5
value: 26.759
- type: recall_at_1
value: 31.107000000000003
- type: recall_at_10
value: 65.08500000000001
- type: recall_at_100
value: 87.91
- type: recall_at_1000
value: 97.817
- type: recall_at_20
value: 73.282
- type: recall_at_3
value: 49.317
- type: recall_at_5
value: 56.617
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 79.449
- type: map_at_1
value: 41.384
- type: map_at_10
value: 72.844
- type: map_at_100
value: 73.589
- type: map_at_1000
value: 73.624
- type: map_at_20
value: 73.317
- type: map_at_3
value: 69.427
- type: map_at_5
value: 71.68299999999999
- type: mrr_at_1
value: 82.76839972991222
- type: mrr_at_10
value: 87.92045807744634
- type: mrr_at_100
value: 88.05529589670978
- type: mrr_at_1000
value: 88.05935891074716
- type: mrr_at_20
value: 88.01402112970962
- type: mrr_at_3
value: 87.3216295295969
- type: mrr_at_5
value: 87.69705154175074
- type: nauc_map_at_1000_diff1
value: 20.396629535039704
- type: nauc_map_at_1000_max
value: 39.10949908339265
- type: nauc_map_at_1000_std
value: 10.224729673688502
- type: nauc_map_at_100_diff1
value: 20.381077063965574
- type: nauc_map_at_100_max
value: 39.12262980169527
- type: nauc_map_at_100_std
value: 10.256952440972226
- type: nauc_map_at_10_diff1
value: 20.227214487416916
- type: nauc_map_at_10_max
value: 39.065878364926085
- type: nauc_map_at_10_std
value: 9.830819360569484
- type: nauc_map_at_1_diff1
value: 60.61929089121275
- type: nauc_map_at_1_max
value: 49.53547409224507
- type: nauc_map_at_1_std
value: 0.2722096857291782
- type: nauc_map_at_20_diff1
value: 20.183915365850165
- type: nauc_map_at_20_max
value: 39.06905710390586
- type: nauc_map_at_20_std
value: 10.244769286257812
- type: nauc_map_at_3_diff1
value: 18.953350220177363
- type: nauc_map_at_3_max
value: 36.89647666189664
- type: nauc_map_at_3_std
value: 6.856939205711613
- type: nauc_map_at_5_diff1
value: 19.74313508534105
- type: nauc_map_at_5_max
value: 38.42860611762909
- type: nauc_map_at_5_std
value: 8.620757357067802
- type: nauc_mrr_at_1000_diff1
value: 60.544760748070416
- type: nauc_mrr_at_1000_max
value: 53.536700750600176
- type: nauc_mrr_at_1000_std
value: 4.356103341419562
- type: nauc_mrr_at_100_diff1
value: 60.543037850402825
- type: nauc_mrr_at_100_max
value: 53.54473925679791
- type: nauc_mrr_at_100_std
value: 4.3713759172294475
- type: nauc_mrr_at_10_diff1
value: 60.57585979923885
- type: nauc_mrr_at_10_max
value: 53.65882404973961
- type: nauc_mrr_at_10_std
value: 4.46866142907982
- type: nauc_mrr_at_1_diff1
value: 60.61929089121275
- type: nauc_mrr_at_1_max
value: 49.53547409224507
- type: nauc_mrr_at_1_std
value: 0.2722096857291782
- type: nauc_mrr_at_20_diff1
value: 60.541893232518674
- type: nauc_mrr_at_20_max
value: 53.6135776399171
- type: nauc_mrr_at_20_std
value: 4.443552945861195
- type: nauc_mrr_at_3_diff1
value: 60.46996364153697
- type: nauc_mrr_at_3_max
value: 53.981024588336936
- type: nauc_mrr_at_3_std
value: 4.300285863686253
- type: nauc_mrr_at_5_diff1
value: 60.562791070200426
- type: nauc_mrr_at_5_max
value: 53.884058343579966
- type: nauc_mrr_at_5_std
value: 4.35333313705802
- type: nauc_ndcg_at_1000_diff1
value: 26.909558826785485
- type: nauc_ndcg_at_1000_max
value: 43.2090252545764
- type: nauc_ndcg_at_1000_std
value: 13.24632397019833
- type: nauc_ndcg_at_100_diff1
value: 26.4096138903785
- type: nauc_ndcg_at_100_max
value: 43.50667894420325
- type: nauc_ndcg_at_100_std
value: 14.272929786830657
- type: nauc_ndcg_at_10_diff1
value: 25.261392560708607
- type: nauc_ndcg_at_10_max
value: 43.02496845139645
- type: nauc_ndcg_at_10_std
value: 12.753991213996402
- type: nauc_ndcg_at_1_diff1
value: 60.61929089121275
- type: nauc_ndcg_at_1_max
value: 49.53547409224507
- type: nauc_ndcg_at_1_std
value: 0.2722096857291782
- type: nauc_ndcg_at_20_diff1
value: 25.15730629354081
- type: nauc_ndcg_at_20_max
value: 43.10358742768409
- type: nauc_ndcg_at_20_std
value: 14.103247675055986
- type: nauc_ndcg_at_3_diff1
value: 23.492158440363873
- type: nauc_ndcg_at_3_max
value: 39.880317429264736
- type: nauc_ndcg_at_3_std
value: 7.852278799949863
- type: nauc_ndcg_at_5_diff1
value: 24.46471897598423
- type: nauc_ndcg_at_5_max
value: 41.901821932685294
- type: nauc_ndcg_at_5_std
value: 10.33482164145028
- type: nauc_precision_at_1000_diff1
value: 14.556112531859444
- type: nauc_precision_at_1000_max
value: 54.51236512101235
- type: nauc_precision_at_1000_std
value: 68.89420216988455
- type: nauc_precision_at_100_diff1
value: 14.116319404924122
- type: nauc_precision_at_100_max
value: 50.42943334977378
- type: nauc_precision_at_100_std
value: 49.80016017936658
- type: nauc_precision_at_10_diff1
value: 14.530495877243805
- type: nauc_precision_at_10_max
value: 43.89651175033577
- type: nauc_precision_at_10_std
value: 24.764789718434958
- type: nauc_precision_at_1_diff1
value: 60.61929089121275
- type: nauc_precision_at_1_max
value: 49.53547409224507
- type: nauc_precision_at_1_std
value: 0.2722096857291782
- type: nauc_precision_at_20_diff1
value: 11.499635650364958
- type: nauc_precision_at_20_max
value: 44.499499741252265
- type: nauc_precision_at_20_std
value: 33.743842605352725
- type: nauc_precision_at_3_diff1
value: 14.621019803797811
- type: nauc_precision_at_3_max
value: 38.1391146398071
- type: nauc_precision_at_3_std
value: 11.050680597126348
- type: nauc_precision_at_5_diff1
value: 14.878056511475538
- type: nauc_precision_at_5_max
value: 41.52854585813069
- type: nauc_precision_at_5_std
value: 16.596884488946877
- type: nauc_recall_at_1000_diff1
value: 14.556112531860405
- type: nauc_recall_at_1000_max
value: 54.512365121012444
- type: nauc_recall_at_1000_std
value: 68.89420216988472
- type: nauc_recall_at_100_diff1
value: 14.11631940492389
- type: nauc_recall_at_100_max
value: 50.42943334977325
- type: nauc_recall_at_100_std
value: 49.80016017936635
- type: nauc_recall_at_10_diff1
value: 14.530495877243975
- type: nauc_recall_at_10_max
value: 43.89651175033581
- type: nauc_recall_at_10_std
value: 24.764789718434855
- type: nauc_recall_at_1_diff1
value: 60.61929089121275
- type: nauc_recall_at_1_max
value: 49.53547409224507
- type: nauc_recall_at_1_std
value: 0.2722096857291782
- type: nauc_recall_at_20_diff1
value: 11.499635650364953
- type: nauc_recall_at_20_max
value: 44.499499741252166
- type: nauc_recall_at_20_std
value: 33.74384260535269
- type: nauc_recall_at_3_diff1
value: 14.621019803797758
- type: nauc_recall_at_3_max
value: 38.139114639807104
- type: nauc_recall_at_3_std
value: 11.050680597126208
- type: nauc_recall_at_5_diff1
value: 14.87805651147543
- type: nauc_recall_at_5_max
value: 41.52854585813069
- type: nauc_recall_at_5_std
value: 16.59688448894684
- type: ndcg_at_1
value: 82.768
- type: ndcg_at_10
value: 79.449
- type: ndcg_at_100
value: 81.878
- type: ndcg_at_1000
value: 82.526
- type: ndcg_at_20
value: 80.601
- type: ndcg_at_3
value: 74.899
- type: ndcg_at_5
value: 77.586
- type: precision_at_1
value: 82.768
- type: precision_at_10
value: 16.804
- type: precision_at_100
value: 1.8659999999999999
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_20
value: 8.770999999999999
- type: precision_at_3
value: 49.381
- type: precision_at_5
value: 31.746000000000002
- type: recall_at_1
value: 41.384
- type: recall_at_10
value: 84.018
- type: recall_at_100
value: 93.30199999999999
- type: recall_at_1000
value: 97.529
- type: recall_at_20
value: 87.711
- type: recall_at_3
value: 74.072
- type: recall_at_5
value: 79.365
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 95.50119999999997
- type: ap
value: 93.27855740989341
- type: ap_weighted
value: 93.27855740989341
- type: f1
value: 95.49922732391366
- type: f1_weighted
value: 95.49922732391366
- type: main_score
value: 95.50119999999997
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 44.181
- type: map_at_1
value: 24.3
- type: map_at_10
value: 37.064
- type: map_at_100
value: 38.217
- type: map_at_1000
value: 38.261
- type: map_at_20
value: 37.797
- type: map_at_3
value: 33.03
- type: map_at_5
value: 35.382000000000005
- type: mrr_at_1
value: 25.014326647564474
- type: mrr_at_10
value: 37.67002092145362
- type: mrr_at_100
value: 38.76618716955713
- type: mrr_at_1000
value: 38.803895343578624
- type: mrr_at_20
value: 38.372875531879025
- type: mrr_at_3
value: 33.74164278892073
- type: mrr_at_5
value: 36.04250238777461
- type: nauc_map_at_1000_diff1
value: 37.38914109165067
- type: nauc_map_at_1000_max
value: 9.290439022090213
- type: nauc_map_at_1000_std
value: -17.68507604596775
- type: nauc_map_at_100_diff1
value: 37.3858106261435
- type: nauc_map_at_100_max
value: 9.292194370842791
- type: nauc_map_at_100_std
value: -17.6461510679294
- type: nauc_map_at_10_diff1
value: 37.24355836056403
- type: nauc_map_at_10_max
value: 9.19029394636661
- type: nauc_map_at_10_std
value: -18.216369315567626
- type: nauc_map_at_1_diff1
value: 40.298938486026984
- type: nauc_map_at_1_max
value: 8.149499405622326
- type: nauc_map_at_1_std
value: -17.09168853307602
- type: nauc_map_at_20_diff1
value: 37.344123641575216
- type: nauc_map_at_20_max
value: 9.24559383901809
- type: nauc_map_at_20_std
value: -17.842740773642962
- type: nauc_map_at_3_diff1
value: 37.4023127968177
- type: nauc_map_at_3_max
value: 8.930674077317596
- type: nauc_map_at_3_std
value: -18.68520909934096
- type: nauc_map_at_5_diff1
value: 37.12600186091895
- type: nauc_map_at_5_max
value: 9.173506919924861
- type: nauc_map_at_5_std
value: -18.625677130615294
- type: nauc_mrr_at_1000_diff1
value: 37.34256456294692
- type: nauc_mrr_at_1000_max
value: 9.276741130450404
- type: nauc_mrr_at_1000_std
value: -17.41693013754444
- type: nauc_mrr_at_100_diff1
value: 37.33775949993714
- type: nauc_mrr_at_100_max
value: 9.28051163202218
- type: nauc_mrr_at_100_std
value: -17.381741706111445
- type: nauc_mrr_at_10_diff1
value: 37.21506505847139
- type: nauc_mrr_at_10_max
value: 9.200324529184542
- type: nauc_mrr_at_10_std
value: -17.904321523440817
- type: nauc_mrr_at_1_diff1
value: 40.314678345050915
- type: nauc_mrr_at_1_max
value: 8.193685362111243
- type: nauc_mrr_at_1_std
value: -17.096535887474175
- type: nauc_mrr_at_20_diff1
value: 37.293746882874004
- type: nauc_mrr_at_20_max
value: 9.256273923676206
- type: nauc_mrr_at_20_std
value: -17.528338232043577
- type: nauc_mrr_at_3_diff1
value: 37.254812254578376
- type: nauc_mrr_at_3_max
value: 8.903676300128614
- type: nauc_mrr_at_3_std
value: -18.49940979312031
- type: nauc_mrr_at_5_diff1
value: 37.08969825523026
- type: nauc_mrr_at_5_max
value: 9.194982897416688
- type: nauc_mrr_at_5_std
value: -18.291840579141315
- type: nauc_ndcg_at_1000_diff1
value: 36.930810397557096
- type: nauc_ndcg_at_1000_max
value: 9.8356345032183
- type: nauc_ndcg_at_1000_std
value: -16.308145152943887
- type: nauc_ndcg_at_100_diff1
value: 36.901149744427414
- type: nauc_ndcg_at_100_max
value: 9.96065454342114
- type: nauc_ndcg_at_100_std
value: -14.983815239399584
- type: nauc_ndcg_at_10_diff1
value: 36.441571794416724
- type: nauc_ndcg_at_10_max
value: 9.57337658776914
- type: nauc_ndcg_at_10_std
value: -17.88037638294921
- type: nauc_ndcg_at_1_diff1
value: 40.314678345050915
- type: nauc_ndcg_at_1_max
value: 8.193685362111243
- type: nauc_ndcg_at_1_std
value: -17.096535887474175
- type: nauc_ndcg_at_20_diff1
value: 36.775334219857484
- type: nauc_ndcg_at_20_max
value: 9.789544462660507
- type: nauc_ndcg_at_20_std
value: -16.465733594062474
- type: nauc_ndcg_at_3_diff1
value: 36.58838956901628
- type: nauc_ndcg_at_3_max
value: 9.089768089567865
- type: nauc_ndcg_at_3_std
value: -19.12823913473232
- type: nauc_ndcg_at_5_diff1
value: 36.147729725463364
- type: nauc_ndcg_at_5_max
value: 9.53707003144017
- type: nauc_ndcg_at_5_std
value: -18.91372487441106
- type: nauc_precision_at_1000_diff1
value: -6.013504255890098
- type: nauc_precision_at_1000_max
value: 6.319348588937731
- type: nauc_precision_at_1000_std
value: 6.360339202992953
- type: nauc_precision_at_100_diff1
value: 14.846649240680357
- type: nauc_precision_at_100_max
value: 11.751644343520605
- type: nauc_precision_at_100_std
value: 16.881205928162444
- type: nauc_precision_at_10_diff1
value: 30.328513184776966
- type: nauc_precision_at_10_max
value: 9.988509735977631
- type: nauc_precision_at_10_std
value: -15.609966599969837
- type: nauc_precision_at_1_diff1
value: 40.314678345050915
- type: nauc_precision_at_1_max
value: 8.193685362111243
- type: nauc_precision_at_1_std
value: -17.096535887474175
- type: nauc_precision_at_20_diff1
value: 28.248245250811543
- type: nauc_precision_at_20_max
value: 10.953279209883918
- type: nauc_precision_at_20_std
value: -7.365540710727016
- type: nauc_precision_at_3_diff1
value: 33.6150964111514
- type: nauc_precision_at_3_max
value: 9.216455510763346
- type: nauc_precision_at_3_std
value: -20.45932513010908
- type: nauc_precision_at_5_diff1
value: 31.518755311864705
- type: nauc_precision_at_5_max
value: 10.019710006442747
- type: nauc_precision_at_5_std
value: -19.740528698385468
- type: nauc_recall_at_1000_diff1
value: 12.207155589507542
- type: nauc_recall_at_1000_max
value: 39.3447783153665
- type: nauc_recall_at_1000_std
value: 74.60352827999826
- type: nauc_recall_at_100_diff1
value: 32.993666280768615
- type: nauc_recall_at_100_max
value: 16.487188889720816
- type: nauc_recall_at_100_std
value: 26.828206265371275
- type: nauc_recall_at_10_diff1
value: 33.65453771237772
- type: nauc_recall_at_10_max
value: 10.71869814574723
- type: nauc_recall_at_10_std
value: -16.27859785753318
- type: nauc_recall_at_1_diff1
value: 40.298938486026984
- type: nauc_recall_at_1_max
value: 8.149499405622326
- type: nauc_recall_at_1_std
value: -17.09168853307602
- type: nauc_recall_at_20_diff1
value: 34.60034971417269
- type: nauc_recall_at_20_max
value: 12.076871992384788
- type: nauc_recall_at_20_std
value: -8.224571589978806
- type: nauc_recall_at_3_diff1
value: 34.24661417034744
- type: nauc_recall_at_3_max
value: 9.464103325281997
- type: nauc_recall_at_3_std
value: -20.329748455626195
- type: nauc_recall_at_5_diff1
value: 33.042225241281585
- type: nauc_recall_at_5_max
value: 10.486814885646142
- type: nauc_recall_at_5_std
value: -19.7259662900716
- type: ndcg_at_1
value: 25.013999999999996
- type: ndcg_at_10
value: 44.181
- type: ndcg_at_100
value: 49.673
- type: ndcg_at_1000
value: 50.705999999999996
- type: ndcg_at_20
value: 46.798
- type: ndcg_at_3
value: 36.037
- type: ndcg_at_5
value: 40.214
- type: precision_at_1
value: 25.013999999999996
- type: precision_at_10
value: 6.9110000000000005
- type: precision_at_100
value: 0.9650000000000001
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 4.004
- type: precision_at_3
value: 15.238999999999999
- type: precision_at_5
value: 11.264000000000001
- type: recall_at_1
value: 24.3
- type: recall_at_10
value: 66.06400000000001
- type: recall_at_100
value: 91.291
- type: recall_at_1000
value: 99.054
- type: recall_at_20
value: 76.25699999999999
- type: recall_at_3
value: 44.039
- type: recall_at_5
value: 54.053
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.35430916552667
- type: f1
value: 96.16669219074517
- type: f1_weighted
value: 96.35506582065435
- type: main_score
value: 96.35430916552667
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.51527587779297
- type: f1
value: 59.350461259612345
- type: f1_weighted
value: 81.51891267687044
- type: main_score
value: 80.51527587779297
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 77.31338264963013
- type: f1
value: 75.29547524788576
- type: f1_weighted
value: 76.26831259224058
- type: main_score
value: 77.31338264963013
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 79.97982515131137
- type: f1
value: 79.34057805450769
- type: f1_weighted
value: 79.73023446597212
- type: main_score
value: 79.97982515131137
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 38.37635785818304
- type: v_measure
value: 38.37635785818304
- type: v_measure_std
value: 1.6943794496059137
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 37.6711034083755
- type: v_measure
value: 37.6711034083755
- type: v_measure_std
value: 1.1408887612104992
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 32.20170969306457
- type: map
value: 32.20170969306457
- type: mrr
value: 33.41738896071552
- type: nAUC_map_diff1
value: 12.077124363492512
- type: nAUC_map_max
value: -20.336429990396454
- type: nAUC_map_std
value: 0.10724031251638018
- type: nAUC_mrr_diff1
value: 11.405695518900744
- type: nAUC_mrr_max
value: -15.0727490448132
- type: nAUC_mrr_std
value: 1.8987958512727106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 40.463
- type: map_at_1
value: 6.4990000000000006
- type: map_at_10
value: 15.699
- type: map_at_100
value: 19.895
- type: map_at_1000
value: 21.537
- type: map_at_20
value: 17.429
- type: map_at_3
value: 11.48
- type: map_at_5
value: 13.383999999999999
- type: mrr_at_1
value: 52.63157894736842
- type: mrr_at_10
value: 61.60401002506265
- type: mrr_at_100
value: 62.04336653809741
- type: mrr_at_1000
value: 62.07610833363911
- type: mrr_at_20
value: 61.88033067968176
- type: mrr_at_3
value: 59.44272445820435
- type: mrr_at_5
value: 60.89783281733746
- type: nauc_map_at_1000_diff1
value: 18.58585974547791
- type: nauc_map_at_1000_max
value: 30.25465935470905
- type: nauc_map_at_1000_std
value: 10.987080017051682
- type: nauc_map_at_100_diff1
value: 20.02651798573329
- type: nauc_map_at_100_max
value: 30.108719787095467
- type: nauc_map_at_100_std
value: 7.882019722247158
- type: nauc_map_at_10_diff1
value: 23.02800157136177
- type: nauc_map_at_10_max
value: 22.8723397741279
- type: nauc_map_at_10_std
value: -3.762893117006399
- type: nauc_map_at_1_diff1
value: 37.94611136294878
- type: nauc_map_at_1_max
value: 7.297492349938244
- type: nauc_map_at_1_std
value: -17.813930346562152
- type: nauc_map_at_20_diff1
value: 21.981440837881113
- type: nauc_map_at_20_max
value: 26.759497880383837
- type: nauc_map_at_20_std
value: 0.18040330674839283
- type: nauc_map_at_3_diff1
value: 27.066009968256555
- type: nauc_map_at_3_max
value: 10.488797596450187
- type: nauc_map_at_3_std
value: -14.013059830876845
- type: nauc_map_at_5_diff1
value: 25.493785001708446
- type: nauc_map_at_5_max
value: 16.217756878539337
- type: nauc_map_at_5_std
value: -10.714238788014212
- type: nauc_mrr_at_1000_diff1
value: 28.488264933723528
- type: nauc_mrr_at_1000_max
value: 45.94151165403325
- type: nauc_mrr_at_1000_std
value: 25.20231778025588
- type: nauc_mrr_at_100_diff1
value: 28.4886630218298
- type: nauc_mrr_at_100_max
value: 45.9702575916014
- type: nauc_mrr_at_100_std
value: 25.22848732842774
- type: nauc_mrr_at_10_diff1
value: 28.535257017998294
- type: nauc_mrr_at_10_max
value: 45.86005605851268
- type: nauc_mrr_at_10_std
value: 24.81744203643852
- type: nauc_mrr_at_1_diff1
value: 29.824630548327285
- type: nauc_mrr_at_1_max
value: 44.19891968145314
- type: nauc_mrr_at_1_std
value: 23.21413139777098
- type: nauc_mrr_at_20_diff1
value: 28.54642005356483
- type: nauc_mrr_at_20_max
value: 46.08926361963997
- type: nauc_mrr_at_20_std
value: 25.39517294920476
- type: nauc_mrr_at_3_diff1
value: 28.230929109259407
- type: nauc_mrr_at_3_max
value: 44.05364599618201
- type: nauc_mrr_at_3_std
value: 23.828100697992724
- type: nauc_mrr_at_5_diff1
value: 29.669751924690758
- type: nauc_mrr_at_5_max
value: 45.36862661497384
- type: nauc_mrr_at_5_std
value: 23.787166807022505
- type: nauc_ndcg_at_1000_diff1
value: 18.515898773404377
- type: nauc_ndcg_at_1000_max
value: 44.57748675979855
- type: nauc_ndcg_at_1000_std
value: 29.205899131269604
- type: nauc_ndcg_at_100_diff1
value: 15.88197701276405
- type: nauc_ndcg_at_100_max
value: 39.62665883972109
- type: nauc_ndcg_at_100_std
value: 25.186347352251754
- type: nauc_ndcg_at_10_diff1
value: 16.220798038950925
- type: nauc_ndcg_at_10_max
value: 39.67757337154769
- type: nauc_ndcg_at_10_std
value: 25.634534917262403
- type: nauc_ndcg_at_1_diff1
value: 31.448775879462932
- type: nauc_ndcg_at_1_max
value: 44.4256421079556
- type: nauc_ndcg_at_1_std
value: 23.093987850437355
- type: nauc_ndcg_at_20_diff1
value: 15.417507391228035
- type: nauc_ndcg_at_20_max
value: 37.52014353976055
- type: nauc_ndcg_at_20_std
value: 23.880617920537915
- type: nauc_ndcg_at_3_diff1
value: 18.01018470616153
- type: nauc_ndcg_at_3_max
value: 39.135814950810804
- type: nauc_ndcg_at_3_std
value: 21.40850285781106
- type: nauc_ndcg_at_5_diff1
value: 18.502338826072368
- type: nauc_ndcg_at_5_max
value: 40.2043937728194
- type: nauc_ndcg_at_5_std
value: 22.242499743433424
- type: nauc_precision_at_1000_diff1
value: -13.648652068964681
- type: nauc_precision_at_1000_max
value: 3.5821865423513426
- type: nauc_precision_at_1000_std
value: 35.481456041211274
- type: nauc_precision_at_100_diff1
value: -11.342790040792961
- type: nauc_precision_at_100_max
value: 18.41811151847882
- type: nauc_precision_at_100_std
value: 44.901842372597336
- type: nauc_precision_at_10_diff1
value: -1.9404654865248405
- type: nauc_precision_at_10_max
value: 40.91955602631143
- type: nauc_precision_at_10_std
value: 41.38128398646734
- type: nauc_precision_at_1_diff1
value: 29.824630548327285
- type: nauc_precision_at_1_max
value: 44.19891968145314
- type: nauc_precision_at_1_std
value: 23.21413139777098
- type: nauc_precision_at_20_diff1
value: -5.046696327994225
- type: nauc_precision_at_20_max
value: 33.653422186725386
- type: nauc_precision_at_20_std
value: 40.97689615511939
- type: nauc_precision_at_3_diff1
value: 5.1767717826900785
- type: nauc_precision_at_3_max
value: 38.01276130261592
- type: nauc_precision_at_3_std
value: 25.71468883159735
- type: nauc_precision_at_5_diff1
value: 3.847065262189492
- type: nauc_precision_at_5_max
value: 41.00941977122254
- type: nauc_precision_at_5_std
value: 31.044768384177246
- type: nauc_recall_at_1000_diff1
value: 7.975632504947066
- type: nauc_recall_at_1000_max
value: 18.83264064904865
- type: nauc_recall_at_1000_std
value: 15.023940189337717
- type: nauc_recall_at_100_diff1
value: 10.354458867884487
- type: nauc_recall_at_100_max
value: 27.16900376430975
- type: nauc_recall_at_100_std
value: 14.160333284050214
- type: nauc_recall_at_10_diff1
value: 18.04347857307359
- type: nauc_recall_at_10_max
value: 19.082544744457774
- type: nauc_recall_at_10_std
value: -5.107813434157397
- type: nauc_recall_at_1_diff1
value: 37.94611136294878
- type: nauc_recall_at_1_max
value: 7.297492349938244
- type: nauc_recall_at_1_std
value: -17.813930346562152
- type: nauc_recall_at_20_diff1
value: 16.658153504941193
- type: nauc_recall_at_20_max
value: 23.214261213582382
- type: nauc_recall_at_20_std
value: -0.6964816170313349
- type: nauc_recall_at_3_diff1
value: 23.65600569767465
- type: nauc_recall_at_3_max
value: 6.543906048065431
- type: nauc_recall_at_3_std
value: -15.496093666790777
- type: nauc_recall_at_5_diff1
value: 22.112315726267077
- type: nauc_recall_at_5_max
value: 12.258969896916307
- type: nauc_recall_at_5_std
value: -12.922832334587008
- type: ndcg_at_1
value: 50.929
- type: ndcg_at_10
value: 40.463
- type: ndcg_at_100
value: 36.909
- type: ndcg_at_1000
value: 45.617999999999995
- type: ndcg_at_20
value: 37.772
- type: ndcg_at_3
value: 46.315
- type: ndcg_at_5
value: 44.052
- type: precision_at_1
value: 52.632
- type: precision_at_10
value: 29.814
- type: precision_at_100
value: 9.325
- type: precision_at_1000
value: 2.236
- type: precision_at_20
value: 22.073999999999998
- type: precision_at_3
value: 42.931000000000004
- type: precision_at_5
value: 37.957
- type: recall_at_1
value: 6.4990000000000006
- type: recall_at_10
value: 20.232
- type: recall_at_100
value: 36.846000000000004
- type: recall_at_1000
value: 69.03
- type: recall_at_20
value: 24.448
- type: recall_at_3
value: 13.258000000000001
- type: recall_at_5
value: 16.255
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 70.518
- type: map_at_1
value: 46.233999999999995
- type: map_at_10
value: 63.519999999999996
- type: map_at_100
value: 64.14699999999999
- type: map_at_1000
value: 64.154
- type: map_at_20
value: 63.975
- type: map_at_3
value: 59.797
- type: map_at_5
value: 62.226000000000006
- type: mrr_at_1
value: 51.76709154113557
- type: mrr_at_10
value: 65.79852489470095
- type: mrr_at_100
value: 66.19480681115492
- type: mrr_at_1000
value: 66.19993656063721
- type: mrr_at_20
value: 66.0923632685851
- type: mrr_at_3
value: 63.185592893008746
- type: mrr_at_5
value: 64.93385477018151
- type: nauc_map_at_1000_diff1
value: 43.35077155361084
- type: nauc_map_at_1000_max
value: 37.282536180921085
- type: nauc_map_at_1000_std
value: -4.64357984773174
- type: nauc_map_at_100_diff1
value: 43.35098576601616
- type: nauc_map_at_100_max
value: 37.28998747522813
- type: nauc_map_at_100_std
value: -4.638151362399621
- type: nauc_map_at_10_diff1
value: 43.131007214082594
- type: nauc_map_at_10_max
value: 37.430076712266846
- type: nauc_map_at_10_std
value: -4.90614475410035
- type: nauc_map_at_1_diff1
value: 45.843123692592485
- type: nauc_map_at_1_max
value: 30.160164681399227
- type: nauc_map_at_1_std
value: -6.110582951655118
- type: nauc_map_at_20_diff1
value: 43.30588135441681
- type: nauc_map_at_20_max
value: 37.41321766111187
- type: nauc_map_at_20_std
value: -4.628074353861448
- type: nauc_map_at_3_diff1
value: 42.690411835598695
- type: nauc_map_at_3_max
value: 36.64069333510947
- type: nauc_map_at_3_std
value: -6.2899609993355545
- type: nauc_map_at_5_diff1
value: 42.906814471744134
- type: nauc_map_at_5_max
value: 37.27599132551781
- type: nauc_map_at_5_std
value: -5.512203849661435
- type: nauc_mrr_at_1000_diff1
value: 43.77989113830799
- type: nauc_mrr_at_1000_max
value: 38.01009876981156
- type: nauc_mrr_at_1000_std
value: -2.0250764367321654
- type: nauc_mrr_at_100_diff1
value: 43.78071481914773
- type: nauc_mrr_at_100_max
value: 38.01603112272088
- type: nauc_mrr_at_100_std
value: -2.019685020907906
- type: nauc_mrr_at_10_diff1
value: 43.582338882429156
- type: nauc_mrr_at_10_max
value: 38.19577506300954
- type: nauc_mrr_at_10_std
value: -2.011905402842086
- type: nauc_mrr_at_1_diff1
value: 46.544635554669576
- type: nauc_mrr_at_1_max
value: 33.82720628969995
- type: nauc_mrr_at_1_std
value: -2.924293824382781
- type: nauc_mrr_at_20_diff1
value: 43.713682995581614
- type: nauc_mrr_at_20_max
value: 38.09918392374771
- type: nauc_mrr_at_20_std
value: -1.9583477023239
- type: nauc_mrr_at_3_diff1
value: 43.35807398052401
- type: nauc_mrr_at_3_max
value: 38.39129780935902
- type: nauc_mrr_at_3_std
value: -2.287791352096624
- type: nauc_mrr_at_5_diff1
value: 43.4126448419642
- type: nauc_mrr_at_5_max
value: 38.27294037073721
- type: nauc_mrr_at_5_std
value: -2.166655666337289
- type: nauc_ndcg_at_1000_diff1
value: 43.26202839737687
- type: nauc_ndcg_at_1000_max
value: 38.493273787010615
- type: nauc_ndcg_at_1000_std
value: -2.9983001465713524
- type: nauc_ndcg_at_100_diff1
value: 43.25688556190981
- type: nauc_ndcg_at_100_max
value: 38.68155788574137
- type: nauc_ndcg_at_100_std
value: -2.8355616191757487
- type: nauc_ndcg_at_10_diff1
value: 42.37071983774907
- type: nauc_ndcg_at_10_max
value: 39.60970139451164
- type: nauc_ndcg_at_10_std
value: -3.5877671856177775
- type: nauc_ndcg_at_1_diff1
value: 46.614780156517845
- type: nauc_ndcg_at_1_max
value: 33.863655999315526
- type: nauc_ndcg_at_1_std
value: -2.839239881422542
- type: nauc_ndcg_at_20_diff1
value: 42.97845395193472
- type: nauc_ndcg_at_20_max
value: 39.53053589334249
- type: nauc_ndcg_at_20_std
value: -2.6507495263904515
- type: nauc_ndcg_at_3_diff1
value: 41.65390869521735
- type: nauc_ndcg_at_3_max
value: 38.4851846089685
- type: nauc_ndcg_at_3_std
value: -5.6296606018146
- type: nauc_ndcg_at_5_diff1
value: 41.89640848285409
- type: nauc_ndcg_at_5_max
value: 39.293659812249615
- type: nauc_ndcg_at_5_std
value: -4.754462409312945
- type: nauc_precision_at_1000_diff1
value: -10.848480634403051
- type: nauc_precision_at_1000_max
value: 1.3436973699935175
- type: nauc_precision_at_1000_std
value: 19.044141500097957
- type: nauc_precision_at_100_diff1
value: -9.018095533261604
- type: nauc_precision_at_100_max
value: 4.0402155161025695
- type: nauc_precision_at_100_std
value: 19.492823636364996
- type: nauc_precision_at_10_diff1
value: 3.947100636096294
- type: nauc_precision_at_10_max
value: 20.598641503195907
- type: nauc_precision_at_10_std
value: 13.522240087840858
- type: nauc_precision_at_1_diff1
value: 46.614780156517845
- type: nauc_precision_at_1_max
value: 33.863655999315526
- type: nauc_precision_at_1_std
value: -2.839239881422542
- type: nauc_precision_at_20_diff1
value: -2.1791072352475336
- type: nauc_precision_at_20_max
value: 14.03887841842901
- type: nauc_precision_at_20_std
value: 18.846129471001632
- type: nauc_precision_at_3_diff1
value: 21.09092861833543
- type: nauc_precision_at_3_max
value: 34.122841034361805
- type: nauc_precision_at_3_std
value: 2.5513201020031064
- type: nauc_precision_at_5_diff1
value: 12.181140062410874
- type: nauc_precision_at_5_max
value: 27.903435474574234
- type: nauc_precision_at_5_std
value: 7.6589638998570315
- type: nauc_recall_at_1000_diff1
value: 59.28482230176634
- type: nauc_recall_at_1000_max
value: 85.47306385133284
- type: nauc_recall_at_1000_std
value: 76.45740117805659
- type: nauc_recall_at_100_diff1
value: 44.31190730138568
- type: nauc_recall_at_100_max
value: 66.30976579719086
- type: nauc_recall_at_100_std
value: 30.65274229759539
- type: nauc_recall_at_10_diff1
value: 34.885747244334866
- type: nauc_recall_at_10_max
value: 50.998198327439404
- type: nauc_recall_at_10_std
value: -2.7025509359838193
- type: nauc_recall_at_1_diff1
value: 45.843123692592485
- type: nauc_recall_at_1_max
value: 30.160164681399227
- type: nauc_recall_at_1_std
value: -6.110582951655118
- type: nauc_recall_at_20_diff1
value: 37.873054394800825
- type: nauc_recall_at_20_max
value: 59.21039923637266
- type: nauc_recall_at_20_std
value: 9.352312696050557
- type: nauc_recall_at_3_diff1
value: 35.703271085627776
- type: nauc_recall_at_3_max
value: 41.19400688280121
- type: nauc_recall_at_3_std
value: -7.9624895195139
- type: nauc_recall_at_5_diff1
value: 34.831972383157925
- type: nauc_recall_at_5_max
value: 44.82018386701478
- type: nauc_recall_at_5_std
value: -7.046789506164082
- type: ndcg_at_1
value: 51.73799999999999
- type: ndcg_at_10
value: 70.518
- type: ndcg_at_100
value: 72.841
- type: ndcg_at_1000
value: 72.99799999999999
- type: ndcg_at_20
value: 71.895
- type: ndcg_at_3
value: 64.06500000000001
- type: ndcg_at_5
value: 67.86999999999999
- type: precision_at_1
value: 51.73799999999999
- type: precision_at_10
value: 10.698
- type: precision_at_100
value: 1.2
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.691
- type: precision_at_3
value: 28.37
- type: precision_at_5
value: 19.363
- type: recall_at_1
value: 46.233999999999995
- type: recall_at_10
value: 89.062
- type: recall_at_100
value: 98.622
- type: recall_at_1000
value: 99.754
- type: recall_at_20
value: 94.052
- type: recall_at_3
value: 72.994
- type: recall_at_5
value: 81.525
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 89.885
- type: map_at_1
value: 72.379
- type: map_at_10
value: 86.455
- type: map_at_100
value: 87.087
- type: map_at_1000
value: 87.1
- type: map_at_20
value: 86.883
- type: map_at_3
value: 83.663
- type: map_at_5
value: 85.443
- type: mrr_at_1
value: 83.27
- type: mrr_at_10
value: 89.13586904761888
- type: mrr_at_100
value: 89.22177886254626
- type: mrr_at_1000
value: 89.22204575963424
- type: mrr_at_20
value: 89.20621913458041
- type: mrr_at_3
value: 88.37999999999981
- type: mrr_at_5
value: 88.89349999999978
- type: nauc_map_at_1000_diff1
value: 78.17410401832315
- type: nauc_map_at_1000_max
value: 33.114749237960986
- type: nauc_map_at_1000_std
value: -49.79724283243796
- type: nauc_map_at_100_diff1
value: 78.17873434671671
- type: nauc_map_at_100_max
value: 33.101626543573325
- type: nauc_map_at_100_std
value: -49.82883017160494
- type: nauc_map_at_10_diff1
value: 78.28052682172311
- type: nauc_map_at_10_max
value: 32.626693803188694
- type: nauc_map_at_10_std
value: -51.941057676350034
- type: nauc_map_at_1_diff1
value: 81.06079816824507
- type: nauc_map_at_1_max
value: 25.638093235123616
- type: nauc_map_at_1_std
value: -43.230210939240344
- type: nauc_map_at_20_diff1
value: 78.22103944842512
- type: nauc_map_at_20_max
value: 32.94488423505404
- type: nauc_map_at_20_std
value: -50.69181407781227
- type: nauc_map_at_3_diff1
value: 78.75453877967588
- type: nauc_map_at_3_max
value: 30.645950847243686
- type: nauc_map_at_3_std
value: -52.983886453956266
- type: nauc_map_at_5_diff1
value: 78.44984884302167
- type: nauc_map_at_5_max
value: 31.69697839442234
- type: nauc_map_at_5_std
value: -53.21480554718401
- type: nauc_mrr_at_1000_diff1
value: 78.90502271071976
- type: nauc_mrr_at_1000_max
value: 35.902725888631075
- type: nauc_mrr_at_1000_std
value: -45.82579843551156
- type: nauc_mrr_at_100_diff1
value: 78.90552803580407
- type: nauc_mrr_at_100_max
value: 35.90392790964254
- type: nauc_mrr_at_100_std
value: -45.82489205475015
- type: nauc_mrr_at_10_diff1
value: 78.89432223469271
- type: nauc_mrr_at_10_max
value: 35.86669566861425
- type: nauc_mrr_at_10_std
value: -46.0616841694464
- type: nauc_mrr_at_1_diff1
value: 79.53513360034344
- type: nauc_mrr_at_1_max
value: 35.299514657188006
- type: nauc_mrr_at_1_std
value: -43.17936948437256
- type: nauc_mrr_at_20_diff1
value: 78.90707352031835
- type: nauc_mrr_at_20_max
value: 35.906499072241296
- type: nauc_mrr_at_20_std
value: -45.8904084451193
- type: nauc_mrr_at_3_diff1
value: 78.70913062166218
- type: nauc_mrr_at_3_max
value: 36.16709621132144
- type: nauc_mrr_at_3_std
value: -46.00948004774822
- type: nauc_mrr_at_5_diff1
value: 78.91095031555673
- type: nauc_mrr_at_5_max
value: 36.010878683954566
- type: nauc_mrr_at_5_std
value: -46.31731368609175
- type: nauc_ndcg_at_1000_diff1
value: 78.19132492477127
- type: nauc_ndcg_at_1000_max
value: 34.5208358892501
- type: nauc_ndcg_at_1000_std
value: -47.938360906488974
- type: nauc_ndcg_at_100_diff1
value: 78.24549799575261
- type: nauc_ndcg_at_100_max
value: 34.48869025578818
- type: nauc_ndcg_at_100_std
value: -48.02996375451253
- type: nauc_ndcg_at_10_diff1
value: 78.15340584208084
- type: nauc_ndcg_at_10_max
value: 33.5226981818058
- type: nauc_ndcg_at_10_std
value: -51.690477519601494
- type: nauc_ndcg_at_1_diff1
value: 79.55459365767561
- type: nauc_ndcg_at_1_max
value: 35.25214101433387
- type: nauc_ndcg_at_1_std
value: -43.10088819860409
- type: nauc_ndcg_at_20_diff1
value: 78.27277286768546
- type: nauc_ndcg_at_20_max
value: 33.997104745595564
- type: nauc_ndcg_at_20_std
value: -50.10549601980995
- type: nauc_ndcg_at_3_diff1
value: 77.68820501917479
- type: nauc_ndcg_at_3_max
value: 33.00389630941839
- type: nauc_ndcg_at_3_std
value: -51.00595251236665
- type: nauc_ndcg_at_5_diff1
value: 78.08093149961476
- type: nauc_ndcg_at_5_max
value: 33.03434664578743
- type: nauc_ndcg_at_5_std
value: -52.37122386447497
- type: nauc_precision_at_1000_diff1
value: -44.49830608740945
- type: nauc_precision_at_1000_max
value: -7.3283280714307395
- type: nauc_precision_at_1000_std
value: 38.55076692876393
- type: nauc_precision_at_100_diff1
value: -44.252675314263904
- type: nauc_precision_at_100_max
value: -7.038454433556829
- type: nauc_precision_at_100_std
value: 38.247323997481615
- type: nauc_precision_at_10_diff1
value: -40.192852013615216
- type: nauc_precision_at_10_max
value: -3.7258976649568036
- type: nauc_precision_at_10_std
value: 25.983458444206182
- type: nauc_precision_at_1_diff1
value: 79.55459365767561
- type: nauc_precision_at_1_max
value: 35.25214101433387
- type: nauc_precision_at_1_std
value: -43.10088819860409
- type: nauc_precision_at_20_diff1
value: -43.020749754821495
- type: nauc_precision_at_20_max
value: -5.7062060443801075
- type: nauc_precision_at_20_std
value: 32.8862431943092
- type: nauc_precision_at_3_diff1
value: -22.843593386293996
- type: nauc_precision_at_3_max
value: 4.474275296763041
- type: nauc_precision_at_3_std
value: 6.119920479600398
- type: nauc_precision_at_5_diff1
value: -33.598088334605045
- type: nauc_precision_at_5_max
value: -0.41505757559350775
- type: nauc_precision_at_5_std
value: 16.52526817965026
- type: nauc_recall_at_1000_diff1
value: 28.726073762912847
- type: nauc_recall_at_1000_max
value: -57.390873015654066
- type: nauc_recall_at_1000_std
value: 69.71288515421948
- type: nauc_recall_at_100_diff1
value: 83.06070133460443
- type: nauc_recall_at_100_max
value: 33.27991294763942
- type: nauc_recall_at_100_std
value: -42.785112479889655
- type: nauc_recall_at_10_diff1
value: 74.73877865072825
- type: nauc_recall_at_10_max
value: 27.81410621945221
- type: nauc_recall_at_10_std
value: -75.85371099008806
- type: nauc_recall_at_1_diff1
value: 81.06079816824507
- type: nauc_recall_at_1_max
value: 25.638093235123616
- type: nauc_recall_at_1_std
value: -43.230210939240344
- type: nauc_recall_at_20_diff1
value: 76.04615040930837
- type: nauc_recall_at_20_max
value: 27.47173316749929
- type: nauc_recall_at_20_std
value: -78.29029550423172
- type: nauc_recall_at_3_diff1
value: 75.29987903678384
- type: nauc_recall_at_3_max
value: 27.48543826795177
- type: nauc_recall_at_3_std
value: -60.91023011356427
- type: nauc_recall_at_5_diff1
value: 74.71682412813378
- type: nauc_recall_at_5_max
value: 27.255092143441562
- type: nauc_recall_at_5_std
value: -69.03177732393821
- type: ndcg_at_1
value: 83.26
- type: ndcg_at_10
value: 89.885
- type: ndcg_at_100
value: 90.968
- type: ndcg_at_1000
value: 91.02799999999999
- type: ndcg_at_20
value: 90.52900000000001
- type: ndcg_at_3
value: 87.443
- type: ndcg_at_5
value: 88.81
- type: precision_at_1
value: 83.26
- type: precision_at_10
value: 13.581999999999999
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.210999999999999
- type: precision_at_3
value: 38.323
- type: precision_at_5
value: 25.069999999999997
- type: recall_at_1
value: 72.379
- type: recall_at_10
value: 96.261
- type: recall_at_100
value: 99.779
- type: recall_at_1000
value: 99.996
- type: recall_at_20
value: 98.301
- type: recall_at_3
value: 89.101
- type: recall_at_5
value: 93.11500000000001
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 61.87769077476204
- type: v_measure
value: 61.87769077476204
- type: v_measure_std
value: 5.290405218730049
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 68.29553057563754
- type: v_measure
value: 68.29553057563754
- type: v_measure_std
value: 13.019229253711732
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 20.86
- type: map_at_1
value: 4.843
- type: map_at_10
value: 12.457
- type: map_at_100
value: 14.648
- type: map_at_1000
value: 14.965
- type: map_at_20
value: 13.596
- type: map_at_3
value: 8.776
- type: map_at_5
value: 10.528
- type: mrr_at_1
value: 23.799999999999997
- type: mrr_at_10
value: 34.93765873015872
- type: mrr_at_100
value: 36.054095036751825
- type: mrr_at_1000
value: 36.10871797082569
- type: mrr_at_20
value: 35.57880859465608
- type: mrr_at_3
value: 31.54999999999999
- type: mrr_at_5
value: 33.53999999999998
- type: nauc_map_at_1000_diff1
value: 16.889540490911525
- type: nauc_map_at_1000_max
value: 25.726340275186143
- type: nauc_map_at_1000_std
value: 9.926911665196988
- type: nauc_map_at_100_diff1
value: 16.889355521248202
- type: nauc_map_at_100_max
value: 25.628741550328126
- type: nauc_map_at_100_std
value: 9.637062917997012
- type: nauc_map_at_10_diff1
value: 16.972521218507854
- type: nauc_map_at_10_max
value: 24.810172126870363
- type: nauc_map_at_10_std
value: 7.09295422867669
- type: nauc_map_at_1_diff1
value: 24.9292922418417
- type: nauc_map_at_1_max
value: 15.49253311874767
- type: nauc_map_at_1_std
value: -0.4754734108717385
- type: nauc_map_at_20_diff1
value: 16.945564955989113
- type: nauc_map_at_20_max
value: 25.197327599885362
- type: nauc_map_at_20_std
value: 7.972256233219635
- type: nauc_map_at_3_diff1
value: 19.503723922705067
- type: nauc_map_at_3_max
value: 20.795879090480057
- type: nauc_map_at_3_std
value: 1.5828913591118658
- type: nauc_map_at_5_diff1
value: 19.80474780705204
- type: nauc_map_at_5_max
value: 24.040173591299723
- type: nauc_map_at_5_std
value: 4.153642430396917
- type: nauc_mrr_at_1000_diff1
value: 21.80300741603344
- type: nauc_mrr_at_1000_max
value: 19.98123409846586
- type: nauc_mrr_at_1000_std
value: 3.6325335777371377
- type: nauc_mrr_at_100_diff1
value: 21.804966803578946
- type: nauc_mrr_at_100_max
value: 19.9965104601956
- type: nauc_mrr_at_100_std
value: 3.6713772865070107
- type: nauc_mrr_at_10_diff1
value: 21.66109150475663
- type: nauc_mrr_at_10_max
value: 19.873876575424404
- type: nauc_mrr_at_10_std
value: 3.3387503298795584
- type: nauc_mrr_at_1_diff1
value: 24.868548821073084
- type: nauc_mrr_at_1_max
value: 16.189915011439044
- type: nauc_mrr_at_1_std
value: -0.17692171251799987
- type: nauc_mrr_at_20_diff1
value: 21.677427533247375
- type: nauc_mrr_at_20_max
value: 19.967193157614872
- type: nauc_mrr_at_20_std
value: 3.639825799332009
- type: nauc_mrr_at_3_diff1
value: 21.681117207511825
- type: nauc_mrr_at_3_max
value: 19.132660363303295
- type: nauc_mrr_at_3_std
value: 1.6613642176263752
- type: nauc_mrr_at_5_diff1
value: 21.833332207271884
- type: nauc_mrr_at_5_max
value: 19.926480855266213
- type: nauc_mrr_at_5_std
value: 2.901801717093585
- type: nauc_ndcg_at_1000_diff1
value: 16.92599483752314
- type: nauc_ndcg_at_1000_max
value: 27.126582080942814
- type: nauc_ndcg_at_1000_std
value: 16.638448489514683
- type: nauc_ndcg_at_100_diff1
value: 16.96586885959473
- type: nauc_ndcg_at_100_max
value: 26.675878724175046
- type: nauc_ndcg_at_100_std
value: 15.369335585614245
- type: nauc_ndcg_at_10_diff1
value: 16.59779893225997
- type: nauc_ndcg_at_10_max
value: 24.865338966132818
- type: nauc_ndcg_at_10_std
value: 8.934209252745864
- type: nauc_ndcg_at_1_diff1
value: 24.868548821073084
- type: nauc_ndcg_at_1_max
value: 16.189915011439044
- type: nauc_ndcg_at_1_std
value: -0.17692171251799987
- type: nauc_ndcg_at_20_diff1
value: 16.647406628819976
- type: nauc_ndcg_at_20_max
value: 25.64488140369063
- type: nauc_ndcg_at_20_std
value: 10.587157641309098
- type: nauc_ndcg_at_3_diff1
value: 19.093302254257377
- type: nauc_ndcg_at_3_max
value: 21.33725971448413
- type: nauc_ndcg_at_3_std
value: 2.549021710462978
- type: nauc_ndcg_at_5_diff1
value: 19.495189389728836
- type: nauc_ndcg_at_5_max
value: 24.21965138651894
- type: nauc_ndcg_at_5_std
value: 5.549408503444251
- type: nauc_precision_at_1000_diff1
value: 7.4232833098081565
- type: nauc_precision_at_1000_max
value: 25.24619675919913
- type: nauc_precision_at_1000_std
value: 32.79744946411614
- type: nauc_precision_at_100_diff1
value: 10.550449529674747
- type: nauc_precision_at_100_max
value: 25.652112631579726
- type: nauc_precision_at_100_std
value: 26.65722909800614
- type: nauc_precision_at_10_diff1
value: 11.195653785882708
- type: nauc_precision_at_10_max
value: 26.469986306854977
- type: nauc_precision_at_10_std
value: 14.05089697514966
- type: nauc_precision_at_1_diff1
value: 24.868548821073084
- type: nauc_precision_at_1_max
value: 16.189915011439044
- type: nauc_precision_at_1_std
value: -0.17692171251799987
- type: nauc_precision_at_20_diff1
value: 11.16738184991032
- type: nauc_precision_at_20_max
value: 26.53741675130711
- type: nauc_precision_at_20_std
value: 16.250110771034542
- type: nauc_precision_at_3_diff1
value: 16.917872510926284
- type: nauc_precision_at_3_max
value: 23.22094310791854
- type: nauc_precision_at_3_std
value: 3.9255078517383906
- type: nauc_precision_at_5_diff1
value: 16.898056883587824
- type: nauc_precision_at_5_max
value: 27.39457295203392
- type: nauc_precision_at_5_std
value: 8.924759582566171
- type: nauc_recall_at_1000_diff1
value: 7.516072705946253
- type: nauc_recall_at_1000_max
value: 25.001682297424594
- type: nauc_recall_at_1000_std
value: 33.86296283879721
- type: nauc_recall_at_100_diff1
value: 10.435705067998168
- type: nauc_recall_at_100_max
value: 25.31622603650995
- type: nauc_recall_at_100_std
value: 26.758897185352097
- type: nauc_recall_at_10_diff1
value: 11.110953419292343
- type: nauc_recall_at_10_max
value: 25.970593144433085
- type: nauc_recall_at_10_std
value: 13.92252981022314
- type: nauc_recall_at_1_diff1
value: 24.9292922418417
- type: nauc_recall_at_1_max
value: 15.49253311874767
- type: nauc_recall_at_1_std
value: -0.4754734108717385
- type: nauc_recall_at_20_diff1
value: 11.050515317424548
- type: nauc_recall_at_20_max
value: 26.068866115743134
- type: nauc_recall_at_20_std
value: 16.13787216291987
- type: nauc_recall_at_3_diff1
value: 17.013383740580203
- type: nauc_recall_at_3_max
value: 22.49105285578937
- type: nauc_recall_at_3_std
value: 3.5741090487413687
- type: nauc_recall_at_5_diff1
value: 16.973540662242602
- type: nauc_recall_at_5_max
value: 26.78087164318061
- type: nauc_recall_at_5_std
value: 8.68040862354009
- type: ndcg_at_1
value: 23.799999999999997
- type: ndcg_at_10
value: 20.86
- type: ndcg_at_100
value: 29.145
- type: ndcg_at_1000
value: 34.518
- type: ndcg_at_20
value: 23.892
- type: ndcg_at_3
value: 19.541
- type: ndcg_at_5
value: 17.166999999999998
- type: precision_at_1
value: 23.799999999999997
- type: precision_at_10
value: 10.9
- type: precision_at_100
value: 2.281
- type: precision_at_1000
value: 0.357
- type: precision_at_20
value: 7.21
- type: precision_at_3
value: 18.3
- type: precision_at_5
value: 15.120000000000001
- type: recall_at_1
value: 4.843
- type: recall_at_10
value: 22.12
- type: recall_at_100
value: 46.257
- type: recall_at_1000
value: 72.382
- type: recall_at_20
value: 29.253
- type: recall_at_3
value: 11.158
- type: recall_at_5
value: 15.347
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 85.65467419133347
- type: cosine_spearman
value: 81.88046945336663
- type: euclidean_pearson
value: 82.82887106181879
- type: euclidean_spearman
value: 81.88047605481775
- type: main_score
value: 81.88046945336663
- type: manhattan_pearson
value: 82.7839019603756
- type: manhattan_spearman
value: 81.83505450284663
- type: pearson
value: 85.65467419133347
- type: spearman
value: 81.88046945336663
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 85.8979872663498
- type: cosine_spearman
value: 78.63991285161867
- type: euclidean_pearson
value: 81.20243176386163
- type: euclidean_spearman
value: 78.64021127260493
- type: main_score
value: 78.63991285161867
- type: manhattan_pearson
value: 81.58673652635328
- type: manhattan_spearman
value: 79.03930665482164
- type: pearson
value: 85.8979872663498
- type: spearman
value: 78.63991285161867
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 87.10598414063074
- type: cosine_spearman
value: 87.12110799581852
- type: euclidean_pearson
value: 86.52284239759508
- type: euclidean_spearman
value: 87.12110799581852
- type: main_score
value: 87.12110799581852
- type: manhattan_pearson
value: 86.61105352996736
- type: manhattan_spearman
value: 87.34100209521596
- type: pearson
value: 87.10598414063074
- type: spearman
value: 87.12110799581852
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 85.66540041627184
- type: cosine_spearman
value: 83.55263671417923
- type: euclidean_pearson
value: 84.2332532036626
- type: euclidean_spearman
value: 83.55264421653584
- type: main_score
value: 83.55263671417923
- type: manhattan_pearson
value: 84.14418954784165
- type: manhattan_spearman
value: 83.58193360267302
- type: pearson
value: 85.66540041627184
- type: spearman
value: 83.55263671417923
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 89.83404956912175
- type: cosine_spearman
value: 90.09569633194636
- type: euclidean_pearson
value: 89.31121256629982
- type: euclidean_spearman
value: 90.09569632193572
- type: main_score
value: 90.09569633194636
- type: manhattan_pearson
value: 89.30064909066367
- type: manhattan_spearman
value: 90.20232732019451
- type: pearson
value: 89.83404956912175
- type: spearman
value: 90.09569633194636
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 86.27894370732598
- type: cosine_spearman
value: 87.22000226558832
- type: euclidean_pearson
value: 85.92822715155758
- type: euclidean_spearman
value: 87.22000226558832
- type: main_score
value: 87.22000226558832
- type: manhattan_pearson
value: 85.9498561399522
- type: manhattan_spearman
value: 87.28837300894288
- type: pearson
value: 86.27894370732598
- type: spearman
value: 87.22000226558832
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 91.60185356782324
- type: cosine_spearman
value: 91.43471625912765
- type: euclidean_pearson
value: 91.52529087606635
- type: euclidean_spearman
value: 91.43471625912765
- type: main_score
value: 91.43471625912765
- type: manhattan_pearson
value: 91.34917173506308
- type: manhattan_spearman
value: 91.2112665439884
- type: pearson
value: 91.60185356782324
- type: spearman
value: 91.43471625912765
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 68.735098373629
- type: cosine_spearman
value: 67.76156085991387
- type: euclidean_pearson
value: 68.38053954511516
- type: euclidean_spearman
value: 67.76156085991387
- type: main_score
value: 67.76156085991387
- type: manhattan_pearson
value: 68.4533080173714
- type: manhattan_spearman
value: 67.76676959397871
- type: pearson
value: 68.735098373629
- type: spearman
value: 67.76156085991387
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 87.63236624274985
- type: cosine_spearman
value: 88.27561759951514
- type: euclidean_pearson
value: 87.61137355553329
- type: euclidean_spearman
value: 88.27561759951514
- type: main_score
value: 88.27561759951514
- type: manhattan_pearson
value: 87.63505381780153
- type: manhattan_spearman
value: 88.41268943146845
- type: pearson
value: 87.63236624274985
- type: spearman
value: 88.27561759951514
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 85.16412972900244
- type: map
value: 85.16412972900244
- type: mrr
value: 96.15786628041529
- type: nAUC_map_diff1
value: -1.5068306084088756
- type: nAUC_map_max
value: 48.81296049442589
- type: nAUC_map_std
value: 65.0187132933644
- type: nAUC_mrr_diff1
value: 44.22872564939586
- type: nAUC_mrr_max
value: 85.19719227096536
- type: nAUC_mrr_std
value: 79.62669870868876
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 77.377
- type: map_at_1
value: 62.217
- type: map_at_10
value: 73.115
- type: map_at_100
value: 73.63499999999999
- type: map_at_1000
value: 73.644
- type: map_at_20
value: 73.528
- type: map_at_3
value: 70.62
- type: map_at_5
value: 72.16
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.05542328042326
- type: mrr_at_100
value: 74.46295785951277
- type: mrr_at_1000
value: 74.47168088874803
- type: mrr_at_20
value: 74.35632423132421
- type: mrr_at_3
value: 72.55555555555556
- type: mrr_at_5
value: 73.38888888888887
- type: nauc_map_at_1000_diff1
value: 72.31754010838618
- type: nauc_map_at_1000_max
value: 60.59518156728312
- type: nauc_map_at_1000_std
value: -3.601504782295705
- type: nauc_map_at_100_diff1
value: 72.32057771059107
- type: nauc_map_at_100_max
value: 60.60481879601873
- type: nauc_map_at_100_std
value: -3.6030430073837167
- type: nauc_map_at_10_diff1
value: 72.15009895006031
- type: nauc_map_at_10_max
value: 60.49958178006608
- type: nauc_map_at_10_std
value: -4.305475753173601
- type: nauc_map_at_1_diff1
value: 76.32919417574946
- type: nauc_map_at_1_max
value: 54.77358788281581
- type: nauc_map_at_1_std
value: -9.773898055794557
- type: nauc_map_at_20_diff1
value: 72.15740734516393
- type: nauc_map_at_20_max
value: 60.61318821265446
- type: nauc_map_at_20_std
value: -3.6016854193910803
- type: nauc_map_at_3_diff1
value: 72.07435404889445
- type: nauc_map_at_3_max
value: 56.93970890047747
- type: nauc_map_at_3_std
value: -8.697324220121793
- type: nauc_map_at_5_diff1
value: 72.42599960854554
- type: nauc_map_at_5_max
value: 60.12535137001906
- type: nauc_map_at_5_std
value: -4.437892354037166
- type: nauc_mrr_at_1000_diff1
value: 72.75103842052889
- type: nauc_mrr_at_1000_max
value: 62.72341811793062
- type: nauc_mrr_at_1000_std
value: -0.7759889099766357
- type: nauc_mrr_at_100_diff1
value: 72.75396801842608
- type: nauc_mrr_at_100_max
value: 62.73241247525427
- type: nauc_mrr_at_100_std
value: -0.7786866224468205
- type: nauc_mrr_at_10_diff1
value: 72.5942754009733
- type: nauc_mrr_at_10_max
value: 62.895066542256664
- type: nauc_mrr_at_10_std
value: -0.9018200301159104
- type: nauc_mrr_at_1_diff1
value: 77.63311362465076
- type: nauc_mrr_at_1_max
value: 62.42059294219759
- type: nauc_mrr_at_1_std
value: -1.3182520953698476
- type: nauc_mrr_at_20_diff1
value: 72.58522880943326
- type: nauc_mrr_at_20_max
value: 62.73063935403417
- type: nauc_mrr_at_20_std
value: -0.7910003366564456
- type: nauc_mrr_at_3_diff1
value: 72.70751722757556
- type: nauc_mrr_at_3_max
value: 62.38218933726893
- type: nauc_mrr_at_3_std
value: -1.7126398606397155
- type: nauc_mrr_at_5_diff1
value: 72.57550761997256
- type: nauc_mrr_at_5_max
value: 62.70945847818393
- type: nauc_mrr_at_5_std
value: -0.30886077098332143
- type: nauc_ndcg_at_1000_diff1
value: 71.6036105202873
- type: nauc_ndcg_at_1000_max
value: 61.99911514670603
- type: nauc_ndcg_at_1000_std
value: -2.050470755577302
- type: nauc_ndcg_at_100_diff1
value: 71.70345074974581
- type: nauc_ndcg_at_100_max
value: 62.374525611545714
- type: nauc_ndcg_at_100_std
value: -1.922345118135967
- type: nauc_ndcg_at_10_diff1
value: 70.40027928749286
- type: nauc_ndcg_at_10_max
value: 62.36595526966657
- type: nauc_ndcg_at_10_std
value: -3.862278807246422
- type: nauc_ndcg_at_1_diff1
value: 77.63311362465076
- type: nauc_ndcg_at_1_max
value: 62.42059294219759
- type: nauc_ndcg_at_1_std
value: -1.3182520953698476
- type: nauc_ndcg_at_20_diff1
value: 70.21719291674641
- type: nauc_ndcg_at_20_max
value: 62.356711760569404
- type: nauc_ndcg_at_20_std
value: -2.240360396463778
- type: nauc_ndcg_at_3_diff1
value: 70.72483260039468
- type: nauc_ndcg_at_3_max
value: 59.465348910073445
- type: nauc_ndcg_at_3_std
value: -6.379991854598364
- type: nauc_ndcg_at_5_diff1
value: 70.91296936044013
- type: nauc_ndcg_at_5_max
value: 61.5986283773017
- type: nauc_ndcg_at_5_std
value: -3.064893399445654
- type: nauc_precision_at_1000_diff1
value: -25.399544557043956
- type: nauc_precision_at_1000_max
value: 17.838641101318792
- type: nauc_precision_at_1000_std
value: 54.531382221213185
- type: nauc_precision_at_100_diff1
value: -15.78139909072201
- type: nauc_precision_at_100_max
value: 24.183801380755472
- type: nauc_precision_at_100_std
value: 50.39320972640593
- type: nauc_precision_at_10_diff1
value: 4.1199958514831
- type: nauc_precision_at_10_max
value: 37.922630159717926
- type: nauc_precision_at_10_std
value: 32.94959551960178
- type: nauc_precision_at_1_diff1
value: 77.63311362465076
- type: nauc_precision_at_1_max
value: 62.42059294219759
- type: nauc_precision_at_1_std
value: -1.3182520953698476
- type: nauc_precision_at_20_diff1
value: -8.926047159112303
- type: nauc_precision_at_20_max
value: 29.369903951067172
- type: nauc_precision_at_20_std
value: 41.793379234725904
- type: nauc_precision_at_3_diff1
value: 36.51209832895358
- type: nauc_precision_at_3_max
value: 51.07398992745159
- type: nauc_precision_at_3_std
value: 13.831661495933623
- type: nauc_precision_at_5_diff1
value: 19.526084047733807
- type: nauc_precision_at_5_max
value: 46.67537950098273
- type: nauc_precision_at_5_std
value: 31.06747779005178
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 77.45764972655711
- type: nauc_recall_at_100_max
value: 85.69427771108462
- type: nauc_recall_at_100_std
value: 10.277444311057575
- type: nauc_recall_at_10_diff1
value: 59.14685653975806
- type: nauc_recall_at_10_max
value: 67.75739956082005
- type: nauc_recall_at_10_std
value: -12.22251646924215
- type: nauc_recall_at_1_diff1
value: 76.32919417574946
- type: nauc_recall_at_1_max
value: 54.77358788281581
- type: nauc_recall_at_1_std
value: -9.773898055794557
- type: nauc_recall_at_20_diff1
value: 49.90908644159423
- type: nauc_recall_at_20_max
value: 70.55383556931541
- type: nauc_recall_at_20_std
value: -3.7004275394368182
- type: nauc_recall_at_3_diff1
value: 64.34183819693267
- type: nauc_recall_at_3_max
value: 55.782789721196444
- type: nauc_recall_at_3_std
value: -13.886583892174077
- type: nauc_recall_at_5_diff1
value: 63.467364565196135
- type: nauc_recall_at_5_max
value: 62.51562390716315
- type: nauc_recall_at_5_std
value: -4.715416491952255
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 77.377
- type: ndcg_at_100
value: 79.36
- type: ndcg_at_1000
value: 79.644
- type: ndcg_at_20
value: 78.61200000000001
- type: ndcg_at_3
value: 73.624
- type: ndcg_at_5
value: 75.458
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.100000000000001
- type: precision_at_100
value: 1.107
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.333
- type: precision_at_3
value: 28.778
- type: precision_at_5
value: 18.8
- type: recall_at_1
value: 62.217
- type: recall_at_10
value: 89.156
- type: recall_at_100
value: 97.667
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 93.667
- type: recall_at_3
value: 79.183
- type: recall_at_5
value: 83.672
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8019801980198
- type: cos_sim_ap
value: 95.25139396923107
- type: dot_sim_accuracy
value: 0.998019801980198
- type: dot_sim_ap
value: 95.25139396923107
- type: max_accuracy
value: 99.8019801980198
- type: max_ap
value: 95.43878917155146
- type: max_f1
value: 90.0398406374502
- type: main_score
value: 95.43878917155146
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 75.91311883393888
- type: v_measure
value: 75.91311883393888
- type: v_measure_std
value: 3.286198100593212
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 47.171049215275694
- type: v_measure
value: 47.171049215275694
- type: v_measure_std
value: 1.6586563477857534
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 54.15041943470163
- type: map
value: 54.15041943470163
- type: mrr
value: 55.03112798149563
- type: nAUC_map_diff1
value: 39.50144777017669
- type: nAUC_map_max
value: 14.024793174481395
- type: nAUC_map_std
value: 6.533766502190137
- type: nAUC_mrr_diff1
value: 39.72560651870919
- type: nAUC_mrr_max
value: 14.807887392821616
- type: nAUC_mrr_std
value: 7.270272018791473
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 31.57515534177576
- type: cosine_spearman
value: 31.415247541636194
- type: dot_pearson
value: 31.575170220667488
- type: dot_spearman
value: 31.415247541636194
- type: main_score
value: 31.415247541636194
- type: pearson
value: 31.57515534177576
- type: spearman
value: 31.415247541636194
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 83.67999999999999
- type: map_at_1
value: 0.243
- type: map_at_10
value: 2.167
- type: map_at_100
value: 13.750000000000002
- type: map_at_1000
value: 33.537
- type: map_at_20
value: 4.047
- type: map_at_3
value: 0.694
- type: map_at_5
value: 1.141
- type: mrr_at_1
value: 94.0
- type: mrr_at_10
value: 97.0
- type: mrr_at_100
value: 97.0
- type: mrr_at_1000
value: 97.0
- type: mrr_at_20
value: 97.0
- type: mrr_at_3
value: 97.0
- type: mrr_at_5
value: 97.0
- type: nauc_map_at_1000_diff1
value: 4.890354942949616
- type: nauc_map_at_1000_max
value: 29.279958833328408
- type: nauc_map_at_1000_std
value: 77.2405348865942
- type: nauc_map_at_100_diff1
value: 31.835069149380868
- type: nauc_map_at_100_max
value: 14.523120708509271
- type: nauc_map_at_100_std
value: 39.682149025882886
- type: nauc_map_at_10_diff1
value: 43.45574726953753
- type: nauc_map_at_10_max
value: -2.9143965183484246
- type: nauc_map_at_10_std
value: 2.8052301238653756
- type: nauc_map_at_1_diff1
value: 26.134637426782753
- type: nauc_map_at_1_max
value: -3.108959317897608
- type: nauc_map_at_1_std
value: -5.781123480253076
- type: nauc_map_at_20_diff1
value: 45.735224340099236
- type: nauc_map_at_20_max
value: -1.099022132339708
- type: nauc_map_at_20_std
value: 7.6378546013151905
- type: nauc_map_at_3_diff1
value: 35.70649469812688
- type: nauc_map_at_3_max
value: -9.710213033638656
- type: nauc_map_at_3_std
value: -3.6668161574691056
- type: nauc_map_at_5_diff1
value: 37.6110093992781
- type: nauc_map_at_5_max
value: -8.6295080300384
- type: nauc_map_at_5_std
value: -3.2709712613287145
- type: nauc_mrr_at_1000_diff1
value: -22.362278244631675
- type: nauc_mrr_at_1000_max
value: 63.74105197634592
- type: nauc_mrr_at_1000_std
value: 69.88795518207282
- type: nauc_mrr_at_100_diff1
value: -22.362278244631675
- type: nauc_mrr_at_100_max
value: 63.74105197634592
- type: nauc_mrr_at_100_std
value: 69.88795518207282
- type: nauc_mrr_at_10_diff1
value: -22.362278244631675
- type: nauc_mrr_at_10_max
value: 63.74105197634592
- type: nauc_mrr_at_10_std
value: 69.88795518207282
- type: nauc_mrr_at_1_diff1
value: -22.36227824463097
- type: nauc_mrr_at_1_max
value: 63.741051976346206
- type: nauc_mrr_at_1_std
value: 69.88795518207289
- type: nauc_mrr_at_20_diff1
value: -22.362278244631675
- type: nauc_mrr_at_20_max
value: 63.74105197634592
- type: nauc_mrr_at_20_std
value: 69.88795518207282
- type: nauc_mrr_at_3_diff1
value: -22.362278244631675
- type: nauc_mrr_at_3_max
value: 63.74105197634592
- type: nauc_mrr_at_3_std
value: 69.88795518207282
- type: nauc_mrr_at_5_diff1
value: -22.362278244631675
- type: nauc_mrr_at_5_max
value: 63.74105197634592
- type: nauc_mrr_at_5_std
value: 69.88795518207282
- type: nauc_ndcg_at_1000_diff1
value: 11.950362559089744
- type: nauc_ndcg_at_1000_max
value: 27.0707842379056
- type: nauc_ndcg_at_1000_std
value: 72.43903405163071
- type: nauc_ndcg_at_100_diff1
value: -3.597031398660954
- type: nauc_ndcg_at_100_max
value: 24.415981061123944
- type: nauc_ndcg_at_100_std
value: 74.01146007854192
- type: nauc_ndcg_at_10_diff1
value: 17.368676394860337
- type: nauc_ndcg_at_10_max
value: 27.014276985741652
- type: nauc_ndcg_at_10_std
value: 50.032884783457476
- type: nauc_ndcg_at_1_diff1
value: 5.824544582933801
- type: nauc_ndcg_at_1_max
value: 39.22818791946299
- type: nauc_ndcg_at_1_std
value: 29.32406519654831
- type: nauc_ndcg_at_20_diff1
value: 17.816409720909615
- type: nauc_ndcg_at_20_max
value: 25.056392180259827
- type: nauc_ndcg_at_20_std
value: 58.05680238138826
- type: nauc_ndcg_at_3_diff1
value: 15.010486876001556
- type: nauc_ndcg_at_3_max
value: 4.023535837214374
- type: nauc_ndcg_at_3_std
value: 22.55308565809234
- type: nauc_ndcg_at_5_diff1
value: 12.73162605923733
- type: nauc_ndcg_at_5_max
value: 15.425379568695105
- type: nauc_ndcg_at_5_std
value: 34.4442400670659
- type: nauc_precision_at_1000_diff1
value: -29.218427320110436
- type: nauc_precision_at_1000_max
value: 29.90719259849769
- type: nauc_precision_at_1000_std
value: 48.95093300052051
- type: nauc_precision_at_100_diff1
value: -6.881054858812464
- type: nauc_precision_at_100_max
value: 30.388273677316956
- type: nauc_precision_at_100_std
value: 76.1031398803066
- type: nauc_precision_at_10_diff1
value: 24.298416597687574
- type: nauc_precision_at_10_max
value: 44.38332754799598
- type: nauc_precision_at_10_std
value: 61.64143369558439
- type: nauc_precision_at_1_diff1
value: -22.36227824463097
- type: nauc_precision_at_1_max
value: 63.741051976346206
- type: nauc_precision_at_1_std
value: 69.88795518207289
- type: nauc_precision_at_20_diff1
value: 21.823848783430545
- type: nauc_precision_at_20_max
value: 32.815202091292875
- type: nauc_precision_at_20_std
value: 61.4003619545546
- type: nauc_precision_at_3_diff1
value: 7.264709295578332
- type: nauc_precision_at_3_max
value: 18.088275115082432
- type: nauc_precision_at_3_std
value: 46.315423001044266
- type: nauc_precision_at_5_diff1
value: 19.4281378539196
- type: nauc_precision_at_5_max
value: 30.042729922926426
- type: nauc_precision_at_5_std
value: 48.803961503134936
- type: nauc_recall_at_1000_diff1
value: 14.078781719704242
- type: nauc_recall_at_1000_max
value: 24.205288710944746
- type: nauc_recall_at_1000_std
value: 60.19521883992679
- type: nauc_recall_at_100_diff1
value: 34.68620796161708
- type: nauc_recall_at_100_max
value: 5.862669275470962
- type: nauc_recall_at_100_std
value: 23.779387105339538
- type: nauc_recall_at_10_diff1
value: 41.60859491145645
- type: nauc_recall_at_10_max
value: -6.060553984265031
- type: nauc_recall_at_10_std
value: -3.0401474174665597
- type: nauc_recall_at_1_diff1
value: 26.134637426782753
- type: nauc_recall_at_1_max
value: -3.108959317897608
- type: nauc_recall_at_1_std
value: -5.781123480253076
- type: nauc_recall_at_20_diff1
value: 43.884440668985256
- type: nauc_recall_at_20_max
value: -5.215456096089841
- type: nauc_recall_at_20_std
value: 0.6346955652816175
- type: nauc_recall_at_3_diff1
value: 36.682959590903515
- type: nauc_recall_at_3_max
value: -14.003318698999372
- type: nauc_recall_at_3_std
value: -8.732791648435722
- type: nauc_recall_at_5_diff1
value: 37.55874033777468
- type: nauc_recall_at_5_max
value: -11.475194910000303
- type: nauc_recall_at_5_std
value: -8.24171387960509
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 83.67999999999999
- type: ndcg_at_100
value: 66.268
- type: ndcg_at_1000
value: 59.95700000000001
- type: ndcg_at_20
value: 80.41199999999999
- type: ndcg_at_3
value: 86.989
- type: ndcg_at_5
value: 85.60600000000001
- type: precision_at_1
value: 94.0
- type: precision_at_10
value: 87.0
- type: precision_at_100
value: 68.10000000000001
- type: precision_at_1000
value: 26.404
- type: precision_at_20
value: 83.7
- type: precision_at_3
value: 91.333
- type: precision_at_5
value: 89.60000000000001
- type: recall_at_1
value: 0.243
- type: recall_at_10
value: 2.307
- type: recall_at_100
value: 16.713
- type: recall_at_1000
value: 56.433
- type: recall_at_20
value: 4.3950000000000005
- type: recall_at_3
value: 0.721
- type: recall_at_5
value: 1.194
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 27.095999999999997
- type: map_at_1
value: 2.708
- type: map_at_10
value: 10.926
- type: map_at_100
value: 17.023
- type: map_at_1000
value: 18.802
- type: map_at_20
value: 14.075
- type: map_at_3
value: 6.213
- type: map_at_5
value: 8.399
- type: mrr_at_1
value: 38.775510204081634
- type: mrr_at_10
value: 55.96938775510204
- type: mrr_at_100
value: 56.566806209663355
- type: mrr_at_1000
value: 56.586429443572314
- type: mrr_at_20
value: 56.566806209663355
- type: mrr_at_3
value: 53.06122448979592
- type: mrr_at_5
value: 54.48979591836734
- type: nauc_map_at_1000_diff1
value: -3.7447722422200505
- type: nauc_map_at_1000_max
value: -21.154942580599432
- type: nauc_map_at_1000_std
value: -3.3769353126366854
- type: nauc_map_at_100_diff1
value: -4.211734469956019
- type: nauc_map_at_100_max
value: -20.97390043676955
- type: nauc_map_at_100_std
value: -7.108253122379712
- type: nauc_map_at_10_diff1
value: -2.503617891657346
- type: nauc_map_at_10_max
value: -19.76603379959943
- type: nauc_map_at_10_std
value: -24.813694071646186
- type: nauc_map_at_1_diff1
value: -0.6946291628228135
- type: nauc_map_at_1_max
value: -27.928780525228326
- type: nauc_map_at_1_std
value: -26.644256007057386
- type: nauc_map_at_20_diff1
value: -8.140689983350077
- type: nauc_map_at_20_max
value: -21.331762857202346
- type: nauc_map_at_20_std
value: -18.46503512945984
- type: nauc_map_at_3_diff1
value: 1.4806459479634606
- type: nauc_map_at_3_max
value: -20.57096686541149
- type: nauc_map_at_3_std
value: -27.53855079505183
- type: nauc_map_at_5_diff1
value: -2.4911557022868833
- type: nauc_map_at_5_max
value: -18.468614237544944
- type: nauc_map_at_5_std
value: -27.422000270609885
- type: nauc_mrr_at_1000_diff1
value: 0.5901324153382446
- type: nauc_mrr_at_1000_max
value: -29.43201840557888
- type: nauc_mrr_at_1000_std
value: -22.113570283308878
- type: nauc_mrr_at_100_diff1
value: 0.6140852308779037
- type: nauc_mrr_at_100_max
value: -29.423158073762274
- type: nauc_mrr_at_100_std
value: -22.03830742373018
- type: nauc_mrr_at_10_diff1
value: 1.4017303142295798
- type: nauc_mrr_at_10_max
value: -29.96128226635445
- type: nauc_mrr_at_10_std
value: -21.182800337655188
- type: nauc_mrr_at_1_diff1
value: 2.9967188734445642
- type: nauc_mrr_at_1_max
value: -28.076201809234135
- type: nauc_mrr_at_1_std
value: -23.829475793931397
- type: nauc_mrr_at_20_diff1
value: 0.6140852308779037
- type: nauc_mrr_at_20_max
value: -29.423158073762274
- type: nauc_mrr_at_20_std
value: -22.03830742373018
- type: nauc_mrr_at_3_diff1
value: -1.7324100961545983
- type: nauc_mrr_at_3_max
value: -31.25504536750873
- type: nauc_mrr_at_3_std
value: -27.693245095141595
- type: nauc_mrr_at_5_diff1
value: 0.9366378266246876
- type: nauc_mrr_at_5_max
value: -28.61911855691654
- type: nauc_mrr_at_5_std
value: -23.51734198003236
- type: nauc_ndcg_at_1000_diff1
value: 5.589806586986813
- type: nauc_ndcg_at_1000_max
value: -25.54091728191453
- type: nauc_ndcg_at_1000_std
value: 18.867289766624364
- type: nauc_ndcg_at_100_diff1
value: 5.269555604924481
- type: nauc_ndcg_at_100_max
value: -25.294068947248
- type: nauc_ndcg_at_100_std
value: 12.57359579076201
- type: nauc_ndcg_at_10_diff1
value: -1.8036041625138828
- type: nauc_ndcg_at_10_max
value: -23.89433650527811
- type: nauc_ndcg_at_10_std
value: -18.669805340174104
- type: nauc_ndcg_at_1_diff1
value: 1.7320960153524356
- type: nauc_ndcg_at_1_max
value: -30.98970297820504
- type: nauc_ndcg_at_1_std
value: -22.039818727732
- type: nauc_ndcg_at_20_diff1
value: -7.71266194406333
- type: nauc_ndcg_at_20_max
value: -28.764052281890564
- type: nauc_ndcg_at_20_std
value: -14.058766573885803
- type: nauc_ndcg_at_3_diff1
value: 3.4222049394447023
- type: nauc_ndcg_at_3_max
value: -23.010397388596147
- type: nauc_ndcg_at_3_std
value: -23.917570461776442
- type: nauc_ndcg_at_5_diff1
value: 0.4359085390014115
- type: nauc_ndcg_at_5_max
value: -18.328017574440583
- type: nauc_ndcg_at_5_std
value: -22.301590122411703
- type: nauc_precision_at_1000_diff1
value: 5.705380133328601
- type: nauc_precision_at_1000_max
value: 29.738757046781583
- type: nauc_precision_at_1000_std
value: 37.25317043193516
- type: nauc_precision_at_100_diff1
value: 18.099479915822755
- type: nauc_precision_at_100_max
value: 1.039647603335084
- type: nauc_precision_at_100_std
value: 68.43506311503532
- type: nauc_precision_at_10_diff1
value: 1.6010906915801002
- type: nauc_precision_at_10_max
value: -16.21198992516715
- type: nauc_precision_at_10_std
value: -10.55666484527
- type: nauc_precision_at_1_diff1
value: 2.9967188734445642
- type: nauc_precision_at_1_max
value: -28.076201809234135
- type: nauc_precision_at_1_std
value: -23.829475793931397
- type: nauc_precision_at_20_diff1
value: -9.646266503089361
- type: nauc_precision_at_20_max
value: -19.25399592456934
- type: nauc_precision_at_20_std
value: 4.154373672246843
- type: nauc_precision_at_3_diff1
value: 6.468923962729313
- type: nauc_precision_at_3_max
value: -16.75495139962792
- type: nauc_precision_at_3_std
value: -24.1555216494731
- type: nauc_precision_at_5_diff1
value: 1.89724542441865
- type: nauc_precision_at_5_max
value: -10.916266272968988
- type: nauc_precision_at_5_std
value: -19.996228467499165
- type: nauc_recall_at_1000_diff1
value: -0.3248897031222208
- type: nauc_recall_at_1000_max
value: -25.08629526651275
- type: nauc_recall_at_1000_std
value: 72.42326605733102
- type: nauc_recall_at_100_diff1
value: 0.20011224230233096
- type: nauc_recall_at_100_max
value: -25.71382782994985
- type: nauc_recall_at_100_std
value: 31.40559917674001
- type: nauc_recall_at_10_diff1
value: -7.502107897824034
- type: nauc_recall_at_10_max
value: -26.197156105779833
- type: nauc_recall_at_10_std
value: -20.067019662396106
- type: nauc_recall_at_1_diff1
value: -0.6946291628228135
- type: nauc_recall_at_1_max
value: -27.928780525228326
- type: nauc_recall_at_1_std
value: -26.644256007057386
- type: nauc_recall_at_20_diff1
value: -16.829462200879107
- type: nauc_recall_at_20_max
value: -29.55978083865099
- type: nauc_recall_at_20_std
value: -11.329177422867945
- type: nauc_recall_at_3_diff1
value: -4.487251181022699
- type: nauc_recall_at_3_max
value: -26.28852595660599
- type: nauc_recall_at_3_std
value: -30.010933869743877
- type: nauc_recall_at_5_diff1
value: -7.4729339604681515
- type: nauc_recall_at_5_max
value: -22.995431038489112
- type: nauc_recall_at_5_std
value: -27.623494423158906
- type: ndcg_at_1
value: 35.714
- type: ndcg_at_10
value: 27.095999999999997
- type: ndcg_at_100
value: 37.577
- type: ndcg_at_1000
value: 50.234
- type: ndcg_at_20
value: 28.706
- type: ndcg_at_3
value: 34.808
- type: ndcg_at_5
value: 31.657999999999998
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_10
value: 23.061
- type: precision_at_100
value: 7.388
- type: precision_at_1000
value: 1.5650000000000002
- type: precision_at_20
value: 18.776
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.708
- type: recall_at_10
value: 16.645
- type: recall_at_100
value: 45.953
- type: recall_at_1000
value: 84.553
- type: recall_at_20
value: 26.259
- type: recall_at_3
value: 7.869
- type: recall_at_5
value: 11.166
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 75.87890625
- type: ap
value: 16.4629793865873
- type: ap_weighted
value: 16.4629793865873
- type: f1
value: 58.32993265544471
- type: f1_weighted
value: 80.94360012442658
- type: main_score
value: 75.87890625
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 65.21788341822298
- type: f1
value: 65.00914562845475
- type: f1_weighted
value: 63.672388825903845
- type: main_score
value: 65.21788341822298
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 57.152337838073485
- type: v_measure
value: 57.152337838073485
- type: v_measure_std
value: 0.8799366494028795
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: max_accuracy
value: 88.63920844012637
- type: max_ap
value: 81.41444048232692
- type: max_f1
value: 74.84396892115653
- type: accuracy
value: 88.63920844012637
- type: accuracy_threshold
value: 84.5294713973999
- type: ap
value: 81.41443623323144
- type: f1
value: 74.84396892115653
- type: f1_threshold
value: 82.87262320518494
- type: precision
value: 72.34671263235656
- type: recall
value: 77.5197889182058
- type: main_score
value: 81.41444048232692
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: max_accuracy
value: 89.90569332867622
- type: max_ap
value: 87.91825594329686
- type: max_f1
value: 80.35081439054949
- type: accuracy
value: 89.90569332867622
- type: accuracy_threshold
value: 81.01733326911926
- type: ap
value: 87.91824445175028
- type: f1
value: 80.35081439054949
- type: f1_threshold
value: 78.65387201309204
- type: precision
value: 75.0853013982739
- type: recall
value: 86.41053279950724
- type: main_score
value: 87.91825594329686
---
## Zeta-Alpha-E5-Mistral
We introduce Zeta Alpha's first public embedding model: a retrieval-specialized, 7B-parameter model fine-tuned on top of [E5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct).
It is the first release in Zeta Alpha's open-science series of embedding models.
Check out our blog post for a complete breakdown of the training set we used and all the training details: [Zeta Alpha blog](https://www.zeta-alpha.com/post/fine-tuning-an-llm-for-state-of-the-art-retrieval-zeta-alpha-s-top-10-submission-to-the-the-mteb-be)
We are also making available our internal evaluation set, [NanoBEIR](https://huggingface.co/collections/zeta-alpha-ai/nanobeir-66e1a0af21dfd93e620cd9f6): a collection of "Nano" versions (i.e., 50 queries and ~10k documents) of each BEIR dataset.
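As a quick sanity check of what one of these Nano datasets looks like, the sketch below loads one with the `datasets` library. The repo id `zeta-alpha-ai/NanoFEVER` and the `corpus`/`queries`/`qrels` subset names are assumptions about the collection layout, so verify them against the collection page before use.
```python
from datasets import load_dataset

# Assumed repo id and subset names; check the NanoBEIR collection page for the exact identifiers.
corpus = load_dataset("zeta-alpha-ai/NanoFEVER", "corpus", split="train")
queries = load_dataset("zeta-alpha-ai/NanoFEVER", "queries", split="train")
qrels = load_dataset("zeta-alpha-ai/NanoFEVER", "qrels", split="train")

print(f"{len(queries)} queries, {len(corpus)} documents, {len(qrels)} relevance judgments")
```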
### LoRA Weights
The LoRA weights are also available, so there is no need to download the full model.
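One way to use them is to apply the adapter to the base E5-mistral checkpoint with PEFT, as in the minimal sketch below. The adapter location used here (this model repo) is an assumption; if the LoRA weights are published in a separate repo or subfolder, point `PeftModel.from_pretrained` there instead.
```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Base checkpoint the adapter was trained on top of.
tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-mistral-7b-instruct")
base = AutoModel.from_pretrained(
    "intfloat/e5-mistral-7b-instruct", torch_dtype=torch.float16
)

# Assumption: the LoRA adapter files are hosted in this repo; adjust the id
# (or pass a subfolder) to wherever the adapter is actually published.
model = PeftModel.from_pretrained(base, "zeta-alpha-ai/Zeta-Alpha-E5-Mistral")
model = model.merge_and_unload()  # optional: merge the adapter for faster inference
```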
## How to Run
The model was trained with the same instruction-tuning strategy as the original [E5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) model. Therefore, queries should be formatted as follows:
```
Instruct: <task description>\nQuery: <query>
```
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("zeta-alpha-ai/Zeta-Alpha-E5-Mistral")
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
task = "Given a claim about climate change, retrieve documents that support or refute the claim"
queries = [
get_detailed_instruct(task, "In Alaska, brown bears are changing their feeding habits to eat elderberries that ripen earlier."),
get_detailed_instruct(task, "Local and regional sea levels continue to exhibit typical natural variability—in some places rising and in others falling.")
]
passages = [
"The brown bear ( Ursus arctos ) is a large bear with the widest distribution of any living ursid . The species is distributed across much of northern Eurasia and North America . It is one of the two largest terrestrial carnivorans alive today , rivaled in body size only by its close cousin , the polar bear ( Ursus maritimus ) , which is much less variable in size and averages larger due to this . There are several recognized subspecies , many of which are quite well-known within their native ranges , found in the brown bear species . The brown bear 's principal range includes parts of Russia , Central Asia , China , Canada , the United States ( mostly Alaska ) , Scandinavia and the Carpathian region ( especially Romania ) , Anatolia , and Caucasus . The brown bear is recognized as a national and state animal in several European countries . While the brown bear 's range has shrunk and it has faced local extinctions , it remains listed as a least concern species by the International Union for Conservation of Nature ( IUCN ) with a total population of approximately 200,000 . As of 2012 , this and the American black bear are the only bear species not classified as threatened by the IUCN . However , the Californian , North African ( Atlas bear ) , and Mexican subspecies were hunted to extinction in the nineteenth and early twentieth centuries , and many of the southern Asian subspecies are highly endangered . One of the smaller-bodied subspecies , the Himalayan brown bear , is critically endangered , occupying only 2 % of its former range and threatened by uncontrolled poaching for its parts . The Marsican brown bear , one of several currently isolated populations of the main Eurasian brown bear race , in central Italy is believed to have a population of just 30 to 40 bears .",
"ean sea level ( MSL ) ( abbreviated simply sea level ) is an average level of the surface of one or more of Earth 's oceans from which heights such as elevations may be measured . MSL is a type of vertical datuma standardised geodetic reference pointthat is used , for example , as a chart datum in cartography and marine navigation , or , in aviation , as the standard sea level at which atmospheric pressure is measured in order to calibrate altitude and , consequently , aircraft flight levels . A common and relatively straightforward mean sea-level standard is the midpoint between a mean low and mean high tide at a particular location . Sea levels can be affected by many factors and are known to have varied greatly over geological time scales . The careful measurement of variations in MSL can offer insights into ongoing climate change , and sea level rise has been widely quoted as evidence of ongoing global warming . The term above sea level generally refers to above mean sea level ( AMSL ) ."
]
embeddings = model.encode(queries + passages)
scores = model.similarity(embeddings[:2], embeddings[2:]) * 100
print(scores.tolist())
# [[66.12603759765625, 43.760101318359375], [47.67058563232422, 63.7889518737793]]
```
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
task = "Given a claim about climate change, retrieve documents that support or refute the claim"
queries = [
get_detailed_instruct(task, "In Alaska, brown bears are changing their feeding habits to eat elderberries that ripen earlier."),
get_detailed_instruct(task, "Local and regional sea levels continue to exhibit typical natural variability—in some places rising and in others falling.")
]
passages = [
"The brown bear ( Ursus arctos ) is a large bear with the widest distribution of any living ursid . The species is distributed across much of northern Eurasia and North America . It is one of the two largest terrestrial carnivorans alive today , rivaled in body size only by its close cousin , the polar bear ( Ursus maritimus ) , which is much less variable in size and averages larger due to this . There are several recognized subspecies , many of which are quite well-known within their native ranges , found in the brown bear species . The brown bear 's principal range includes parts of Russia , Central Asia , China , Canada , the United States ( mostly Alaska ) , Scandinavia and the Carpathian region ( especially Romania ) , Anatolia , and Caucasus . The brown bear is recognized as a national and state animal in several European countries . While the brown bear 's range has shrunk and it has faced local extinctions , it remains listed as a least concern species by the International Union for Conservation of Nature ( IUCN ) with a total population of approximately 200,000 . As of 2012 , this and the American black bear are the only bear species not classified as threatened by the IUCN . However , the Californian , North African ( Atlas bear ) , and Mexican subspecies were hunted to extinction in the nineteenth and early twentieth centuries , and many of the southern Asian subspecies are highly endangered . One of the smaller-bodied subspecies , the Himalayan brown bear , is critically endangered , occupying only 2 % of its former range and threatened by uncontrolled poaching for its parts . The Marsican brown bear , one of several currently isolated populations of the main Eurasian brown bear race , in central Italy is believed to have a population of just 30 to 40 bears .",
"ean sea level ( MSL ) ( abbreviated simply sea level ) is an average level of the surface of one or more of Earth 's oceans from which heights such as elevations may be measured . MSL is a type of vertical datuma standardised geodetic reference pointthat is used , for example , as a chart datum in cartography and marine navigation , or , in aviation , as the standard sea level at which atmospheric pressure is measured in order to calibrate altitude and , consequently , aircraft flight levels . A common and relatively straightforward mean sea-level standard is the midpoint between a mean low and mean high tide at a particular location . Sea levels can be affected by many factors and are known to have varied greatly over geological time scales . The careful measurement of variations in MSL can offer insights into ongoing climate change , and sea level rise has been widely quoted as evidence of ongoing global warming . The term above sea level generally refers to above mean sea level ( AMSL ) ."
]
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("zeta-alpha-ai/Zeta-Alpha-E5-Mistral")
model = AutoModel.from_pretrained("zeta-alpha-ai/Zeta-Alpha-E5-Mistral")
# get the embeddings
max_length = 4096
input_texts = queries + passages
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[66.15530395507812, 43.65541458129883], [47.681705474853516, 63.67986297607422]]
```
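Since this is a 7B-parameter model, full-precision CPU inference is slow and memory-hungry. A common variation, shown here as a sketch rather than an official recommendation, loads the weights in half precision and lets `accelerate` place them on the available GPU(s); the rest of the pipeline above stays the same.
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zeta-alpha-ai/Zeta-Alpha-E5-Mistral")
model = AutoModel.from_pretrained(
    "zeta-alpha-ai/Zeta-Alpha-E5-Mistral",
    torch_dtype=torch.float16,  # halves memory versus float32
    device_map="auto",          # requires the `accelerate` package
)

# Move the tokenized batch to the model's device before the forward pass, e.g.:
# batch_dict = {k: v.to(model.device) for k, v in batch_dict.items()}
```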
### Zeta Alpha
Zeta Alpha is the premier Neural Discovery Platform for AI and more. We are an Amsterdam-based R&D and product lab with a passion for AI technology, with offices on the Science Park campus of the University of Amsterdam and in San Francisco.
The Zeta Alpha Research team:
- Arthur Câmara
- Dinos Papakostas
- Mathias Parisot
- Fernando Rejon Barrera
- Jakub Zavrel
| [
"SUMMARIZATION"
] | [
"BEAR",
"BIOSSES",
"SCIFACT"
] |
tensorblock/GritLM-8x7B-GGUF | tensorblock | text-generation | [
"gguf",
"mteb",
"TensorBlock",
"GGUF",
"text-generation",
"dataset:GritLM/tulu2",
"base_model:GritLM/GritLM-8x7B",
"base_model:quantized:GritLM/GritLM-8x7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-12-12T05:50:13 | 2024-12-12T10:26:15 | 92 | 0 | ---
base_model: GritLM/GritLM-8x7B
datasets:
- GritLM/tulu2
license: apache-2.0
pipeline_tag: text-generation
tags:
- mteb
- TensorBlock
- GGUF
inference: true
model-index:
- name: GritLM-8x7B
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 80.47761194029852
- type: ap
value: 44.38751347932197
- type: f1
value: 74.33580162208256
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.32155000000002
- type: ap
value: 94.8026654593679
- type: f1
value: 96.3209869463974
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 57.18400000000001
- type: f1
value: 55.945160479400954
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.353
- type: map_at_10
value: 50.773
- type: map_at_100
value: 51.515
- type: map_at_1000
value: 51.517
- type: map_at_3
value: 46.29
- type: map_at_5
value: 48.914
- type: mrr_at_1
value: 35.135
- type: mrr_at_10
value: 51.036
- type: mrr_at_100
value: 51.785000000000004
- type: mrr_at_1000
value: 51.787000000000006
- type: mrr_at_3
value: 46.562
- type: mrr_at_5
value: 49.183
- type: ndcg_at_1
value: 34.353
- type: ndcg_at_10
value: 59.492
- type: ndcg_at_100
value: 62.395999999999994
- type: ndcg_at_1000
value: 62.44499999999999
- type: ndcg_at_3
value: 50.217
- type: ndcg_at_5
value: 54.98499999999999
- type: precision_at_1
value: 34.353
- type: precision_at_10
value: 8.72
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.531
- type: precision_at_5
value: 14.651
- type: recall_at_1
value: 34.353
- type: recall_at_10
value: 87.198
- type: recall_at_100
value: 99.289
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 61.592999999999996
- type: recall_at_5
value: 73.257
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.720077577006286
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 48.01021098734129
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 65.59672236627206
- type: mrr
value: 78.01191575429802
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.52452252271826
- type: cos_sim_spearman
value: 87.34415887061094
- type: euclidean_pearson
value: 87.46187616533932
- type: euclidean_spearman
value: 85.44712769366146
- type: manhattan_pearson
value: 87.56696679505373
- type: manhattan_spearman
value: 86.01581535039067
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.4577922077922
- type: f1
value: 87.38432712848123
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 41.41290357360428
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 38.67213605633667
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.545
- type: map_at_10
value: 50.015
- type: map_at_100
value: 51.763999999999996
- type: map_at_1000
value: 51.870000000000005
- type: map_at_3
value: 46.129999999999995
- type: map_at_5
value: 48.473
- type: mrr_at_1
value: 47.638999999999996
- type: mrr_at_10
value: 56.913000000000004
- type: mrr_at_100
value: 57.619
- type: mrr_at_1000
value: 57.648999999999994
- type: mrr_at_3
value: 54.435
- type: mrr_at_5
value: 56.059000000000005
- type: ndcg_at_1
value: 47.638999999999996
- type: ndcg_at_10
value: 56.664
- type: ndcg_at_100
value: 62.089000000000006
- type: ndcg_at_1000
value: 63.415
- type: ndcg_at_3
value: 51.842999999999996
- type: ndcg_at_5
value: 54.30199999999999
- type: precision_at_1
value: 47.638999999999996
- type: precision_at_10
value: 10.886999999999999
- type: precision_at_100
value: 1.722
- type: precision_at_1000
value: 0.212
- type: precision_at_3
value: 25.179000000000002
- type: precision_at_5
value: 18.226
- type: recall_at_1
value: 37.545
- type: recall_at_10
value: 68.118
- type: recall_at_100
value: 90.381
- type: recall_at_1000
value: 98.556
- type: recall_at_3
value: 53.319
- type: recall_at_5
value: 60.574
- type: map_at_1
value: 37.066
- type: map_at_10
value: 49.464000000000006
- type: map_at_100
value: 50.79900000000001
- type: map_at_1000
value: 50.928
- type: map_at_3
value: 46.133
- type: map_at_5
value: 47.941
- type: mrr_at_1
value: 48.025
- type: mrr_at_10
value: 56.16100000000001
- type: mrr_at_100
value: 56.725
- type: mrr_at_1000
value: 56.757000000000005
- type: mrr_at_3
value: 54.31
- type: mrr_at_5
value: 55.285
- type: ndcg_at_1
value: 48.025
- type: ndcg_at_10
value: 55.467
- type: ndcg_at_100
value: 59.391000000000005
- type: ndcg_at_1000
value: 61.086
- type: ndcg_at_3
value: 51.733
- type: ndcg_at_5
value: 53.223
- type: precision_at_1
value: 48.025
- type: precision_at_10
value: 10.656
- type: precision_at_100
value: 1.6070000000000002
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 25.499
- type: precision_at_5
value: 17.771
- type: recall_at_1
value: 37.066
- type: recall_at_10
value: 65.062
- type: recall_at_100
value: 81.662
- type: recall_at_1000
value: 91.913
- type: recall_at_3
value: 52.734
- type: recall_at_5
value: 57.696999999999996
- type: map_at_1
value: 46.099000000000004
- type: map_at_10
value: 59.721999999999994
- type: map_at_100
value: 60.675000000000004
- type: map_at_1000
value: 60.708
- type: map_at_3
value: 55.852000000000004
- type: map_at_5
value: 58.426
- type: mrr_at_1
value: 53.417
- type: mrr_at_10
value: 63.597
- type: mrr_at_100
value: 64.12299999999999
- type: mrr_at_1000
value: 64.13799999999999
- type: mrr_at_3
value: 61.149
- type: mrr_at_5
value: 62.800999999999995
- type: ndcg_at_1
value: 53.417
- type: ndcg_at_10
value: 65.90899999999999
- type: ndcg_at_100
value: 69.312
- type: ndcg_at_1000
value: 69.89
- type: ndcg_at_3
value: 60.089999999999996
- type: ndcg_at_5
value: 63.575
- type: precision_at_1
value: 53.417
- type: precision_at_10
value: 10.533
- type: precision_at_100
value: 1.313
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 26.667
- type: precision_at_5
value: 18.671
- type: recall_at_1
value: 46.099000000000004
- type: recall_at_10
value: 80.134
- type: recall_at_100
value: 94.536
- type: recall_at_1000
value: 98.543
- type: recall_at_3
value: 65.026
- type: recall_at_5
value: 73.462
- type: map_at_1
value: 28.261999999999997
- type: map_at_10
value: 38.012
- type: map_at_100
value: 39.104
- type: map_at_1000
value: 39.177
- type: map_at_3
value: 35.068
- type: map_at_5
value: 36.620000000000005
- type: mrr_at_1
value: 30.847
- type: mrr_at_10
value: 40.251999999999995
- type: mrr_at_100
value: 41.174
- type: mrr_at_1000
value: 41.227999999999994
- type: mrr_at_3
value: 37.74
- type: mrr_at_5
value: 38.972
- type: ndcg_at_1
value: 30.847
- type: ndcg_at_10
value: 43.513000000000005
- type: ndcg_at_100
value: 48.771
- type: ndcg_at_1000
value: 50.501
- type: ndcg_at_3
value: 37.861
- type: ndcg_at_5
value: 40.366
- type: precision_at_1
value: 30.847
- type: precision_at_10
value: 6.7909999999999995
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.234
- type: precision_at_5
value: 11.254
- type: recall_at_1
value: 28.261999999999997
- type: recall_at_10
value: 58.292
- type: recall_at_100
value: 82.24000000000001
- type: recall_at_1000
value: 95.042
- type: recall_at_3
value: 42.955
- type: recall_at_5
value: 48.973
- type: map_at_1
value: 18.281
- type: map_at_10
value: 27.687
- type: map_at_100
value: 28.9
- type: map_at_1000
value: 29.019000000000002
- type: map_at_3
value: 24.773
- type: map_at_5
value: 26.180999999999997
- type: mrr_at_1
value: 23.01
- type: mrr_at_10
value: 32.225
- type: mrr_at_100
value: 33.054
- type: mrr_at_1000
value: 33.119
- type: mrr_at_3
value: 29.353
- type: mrr_at_5
value: 30.846
- type: ndcg_at_1
value: 23.01
- type: ndcg_at_10
value: 33.422000000000004
- type: ndcg_at_100
value: 39.108
- type: ndcg_at_1000
value: 41.699999999999996
- type: ndcg_at_3
value: 28.083999999999996
- type: ndcg_at_5
value: 30.164
- type: precision_at_1
value: 23.01
- type: precision_at_10
value: 6.493
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 13.930000000000001
- type: precision_at_5
value: 10.075000000000001
- type: recall_at_1
value: 18.281
- type: recall_at_10
value: 46.318
- type: recall_at_100
value: 71.327
- type: recall_at_1000
value: 89.716
- type: recall_at_3
value: 31.517
- type: recall_at_5
value: 36.821
- type: map_at_1
value: 36.575
- type: map_at_10
value: 49.235
- type: map_at_100
value: 50.723
- type: map_at_1000
value: 50.809000000000005
- type: map_at_3
value: 45.696999999999996
- type: map_at_5
value: 47.588
- type: mrr_at_1
value: 45.525
- type: mrr_at_10
value: 55.334
- type: mrr_at_100
value: 56.092
- type: mrr_at_1000
value: 56.118
- type: mrr_at_3
value: 53.032000000000004
- type: mrr_at_5
value: 54.19199999999999
- type: ndcg_at_1
value: 45.525
- type: ndcg_at_10
value: 55.542
- type: ndcg_at_100
value: 60.879000000000005
- type: ndcg_at_1000
value: 62.224999999999994
- type: ndcg_at_3
value: 50.688
- type: ndcg_at_5
value: 52.76499999999999
- type: precision_at_1
value: 45.525
- type: precision_at_10
value: 10.067
- type: precision_at_100
value: 1.471
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 24.382
- type: precision_at_5
value: 16.919999999999998
- type: recall_at_1
value: 36.575
- type: recall_at_10
value: 67.903
- type: recall_at_100
value: 89.464
- type: recall_at_1000
value: 97.799
- type: recall_at_3
value: 53.493
- type: recall_at_5
value: 59.372
- type: map_at_1
value: 29.099000000000004
- type: map_at_10
value: 42.147
- type: map_at_100
value: 43.522
- type: map_at_1000
value: 43.624
- type: map_at_3
value: 38.104
- type: map_at_5
value: 40.435
- type: mrr_at_1
value: 36.416
- type: mrr_at_10
value: 47.922
- type: mrr_at_100
value: 48.664
- type: mrr_at_1000
value: 48.709
- type: mrr_at_3
value: 44.977000000000004
- type: mrr_at_5
value: 46.838
- type: ndcg_at_1
value: 36.416
- type: ndcg_at_10
value: 49.307
- type: ndcg_at_100
value: 54.332
- type: ndcg_at_1000
value: 56.145
- type: ndcg_at_3
value: 42.994
- type: ndcg_at_5
value: 46.119
- type: precision_at_1
value: 36.416
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.4080000000000001
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 21.081
- type: precision_at_5
value: 15.501999999999999
- type: recall_at_1
value: 29.099000000000004
- type: recall_at_10
value: 64.485
- type: recall_at_100
value: 84.753
- type: recall_at_1000
value: 96.875
- type: recall_at_3
value: 47.06
- type: recall_at_5
value: 55.077
- type: map_at_1
value: 30.69458333333333
- type: map_at_10
value: 41.65291666666666
- type: map_at_100
value: 42.95775
- type: map_at_1000
value: 43.06258333333333
- type: map_at_3
value: 38.335750000000004
- type: map_at_5
value: 40.20941666666666
- type: mrr_at_1
value: 37.013000000000005
- type: mrr_at_10
value: 46.30600000000001
- type: mrr_at_100
value: 47.094666666666676
- type: mrr_at_1000
value: 47.139583333333334
- type: mrr_at_3
value: 43.805749999999996
- type: mrr_at_5
value: 45.22366666666666
- type: ndcg_at_1
value: 37.013000000000005
- type: ndcg_at_10
value: 47.63491666666667
- type: ndcg_at_100
value: 52.71083333333334
- type: ndcg_at_1000
value: 54.493583333333326
- type: ndcg_at_3
value: 42.43616666666666
- type: ndcg_at_5
value: 44.87583333333334
- type: precision_at_1
value: 37.013000000000005
- type: precision_at_10
value: 8.481583333333333
- type: precision_at_100
value: 1.3073333333333337
- type: precision_at_1000
value: 0.16341666666666668
- type: precision_at_3
value: 19.811833333333333
- type: precision_at_5
value: 14.07691666666667
- type: recall_at_1
value: 30.69458333333333
- type: recall_at_10
value: 60.462083333333325
- type: recall_at_100
value: 82.42325000000001
- type: recall_at_1000
value: 94.53291666666667
- type: recall_at_3
value: 45.7405
- type: recall_at_5
value: 52.14025
- type: map_at_1
value: 27.833000000000002
- type: map_at_10
value: 36.55
- type: map_at_100
value: 37.524
- type: map_at_1000
value: 37.613
- type: map_at_3
value: 33.552
- type: map_at_5
value: 35.173
- type: mrr_at_1
value: 31.135
- type: mrr_at_10
value: 39.637
- type: mrr_at_100
value: 40.361000000000004
- type: mrr_at_1000
value: 40.422000000000004
- type: mrr_at_3
value: 36.887
- type: mrr_at_5
value: 38.428000000000004
- type: ndcg_at_1
value: 31.135
- type: ndcg_at_10
value: 42.007
- type: ndcg_at_100
value: 46.531
- type: ndcg_at_1000
value: 48.643
- type: ndcg_at_3
value: 36.437999999999995
- type: ndcg_at_5
value: 39.021
- type: precision_at_1
value: 31.135
- type: precision_at_10
value: 6.856
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 15.9
- type: precision_at_5
value: 11.227
- type: recall_at_1
value: 27.833000000000002
- type: recall_at_10
value: 55.711
- type: recall_at_100
value: 76.255
- type: recall_at_1000
value: 91.51899999999999
- type: recall_at_3
value: 40.22
- type: recall_at_5
value: 46.69
- type: map_at_1
value: 21.274
- type: map_at_10
value: 29.925
- type: map_at_100
value: 31.171
- type: map_at_1000
value: 31.296000000000003
- type: map_at_3
value: 27.209
- type: map_at_5
value: 28.707
- type: mrr_at_1
value: 26.462000000000003
- type: mrr_at_10
value: 34.604
- type: mrr_at_100
value: 35.554
- type: mrr_at_1000
value: 35.622
- type: mrr_at_3
value: 32.295
- type: mrr_at_5
value: 33.598
- type: ndcg_at_1
value: 26.462000000000003
- type: ndcg_at_10
value: 35.193000000000005
- type: ndcg_at_100
value: 40.876000000000005
- type: ndcg_at_1000
value: 43.442
- type: ndcg_at_3
value: 30.724
- type: ndcg_at_5
value: 32.735
- type: precision_at_1
value: 26.462000000000003
- type: precision_at_10
value: 6.438000000000001
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 14.636
- type: precision_at_5
value: 10.496
- type: recall_at_1
value: 21.274
- type: recall_at_10
value: 46.322
- type: recall_at_100
value: 71.702
- type: recall_at_1000
value: 89.405
- type: recall_at_3
value: 33.444
- type: recall_at_5
value: 38.83
- type: map_at_1
value: 31.174000000000003
- type: map_at_10
value: 42.798
- type: map_at_100
value: 43.996
- type: map_at_1000
value: 44.088
- type: map_at_3
value: 39.255
- type: map_at_5
value: 41.336
- type: mrr_at_1
value: 37.22
- type: mrr_at_10
value: 47.035
- type: mrr_at_100
value: 47.833999999999996
- type: mrr_at_1000
value: 47.88
- type: mrr_at_3
value: 44.248
- type: mrr_at_5
value: 45.815
- type: ndcg_at_1
value: 37.22
- type: ndcg_at_10
value: 48.931999999999995
- type: ndcg_at_100
value: 53.991
- type: ndcg_at_1000
value: 55.825
- type: ndcg_at_3
value: 43.144
- type: ndcg_at_5
value: 45.964
- type: precision_at_1
value: 37.22
- type: precision_at_10
value: 8.451
- type: precision_at_100
value: 1.2189999999999999
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 20.087
- type: precision_at_5
value: 14.235000000000001
- type: recall_at_1
value: 31.174000000000003
- type: recall_at_10
value: 63.232
- type: recall_at_100
value: 84.747
- type: recall_at_1000
value: 97.006
- type: recall_at_3
value: 47.087
- type: recall_at_5
value: 54.493
- type: map_at_1
value: 29.628
- type: map_at_10
value: 39.995999999999995
- type: map_at_100
value: 41.899
- type: map_at_1000
value: 42.125
- type: map_at_3
value: 36.345
- type: map_at_5
value: 38.474000000000004
- type: mrr_at_1
value: 36.364000000000004
- type: mrr_at_10
value: 45.293
- type: mrr_at_100
value: 46.278999999999996
- type: mrr_at_1000
value: 46.318
- type: mrr_at_3
value: 42.522999999999996
- type: mrr_at_5
value: 44.104
- type: ndcg_at_1
value: 36.364000000000004
- type: ndcg_at_10
value: 46.622
- type: ndcg_at_100
value: 52.617000000000004
- type: ndcg_at_1000
value: 54.529
- type: ndcg_at_3
value: 40.971999999999994
- type: ndcg_at_5
value: 43.738
- type: precision_at_1
value: 36.364000000000004
- type: precision_at_10
value: 9.110999999999999
- type: precision_at_100
value: 1.846
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 19.236
- type: precision_at_5
value: 14.269000000000002
- type: recall_at_1
value: 29.628
- type: recall_at_10
value: 58.706
- type: recall_at_100
value: 85.116
- type: recall_at_1000
value: 97.258
- type: recall_at_3
value: 42.655
- type: recall_at_5
value: 49.909
- type: map_at_1
value: 25.499
- type: map_at_10
value: 34.284
- type: map_at_100
value: 35.416
- type: map_at_1000
value: 35.494
- type: map_at_3
value: 31.911
- type: map_at_5
value: 33.159
- type: mrr_at_1
value: 28.096
- type: mrr_at_10
value: 36.699
- type: mrr_at_100
value: 37.657000000000004
- type: mrr_at_1000
value: 37.714999999999996
- type: mrr_at_3
value: 34.72
- type: mrr_at_5
value: 35.746
- type: ndcg_at_1
value: 28.096
- type: ndcg_at_10
value: 39.041
- type: ndcg_at_100
value: 44.633
- type: ndcg_at_1000
value: 46.522000000000006
- type: ndcg_at_3
value: 34.663
- type: ndcg_at_5
value: 36.538
- type: precision_at_1
value: 28.096
- type: precision_at_10
value: 6.0440000000000005
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 14.911
- type: precision_at_5
value: 10.277
- type: recall_at_1
value: 25.499
- type: recall_at_10
value: 51.26199999999999
- type: recall_at_100
value: 76.896
- type: recall_at_1000
value: 90.763
- type: recall_at_3
value: 39.376
- type: recall_at_5
value: 43.785000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.532
- type: map_at_10
value: 19.911
- type: map_at_100
value: 21.926000000000002
- type: map_at_1000
value: 22.113
- type: map_at_3
value: 16.118
- type: map_at_5
value: 18.043
- type: mrr_at_1
value: 23.909
- type: mrr_at_10
value: 37.029
- type: mrr_at_100
value: 38.015
- type: mrr_at_1000
value: 38.054
- type: mrr_at_3
value: 33.29
- type: mrr_at_5
value: 35.446
- type: ndcg_at_1
value: 23.909
- type: ndcg_at_10
value: 28.691
- type: ndcg_at_100
value: 36.341
- type: ndcg_at_1000
value: 39.644
- type: ndcg_at_3
value: 22.561
- type: ndcg_at_5
value: 24.779999999999998
- type: precision_at_1
value: 23.909
- type: precision_at_10
value: 9.433
- type: precision_at_100
value: 1.763
- type: precision_at_1000
value: 0.23800000000000002
- type: precision_at_3
value: 17.438000000000002
- type: precision_at_5
value: 13.758999999999999
- type: recall_at_1
value: 10.532
- type: recall_at_10
value: 36.079
- type: recall_at_100
value: 62.156
- type: recall_at_1000
value: 80.53099999999999
- type: recall_at_3
value: 21.384
- type: recall_at_5
value: 27.29
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.483
- type: map_at_10
value: 21.986
- type: map_at_100
value: 31.319000000000003
- type: map_at_1000
value: 33.231
- type: map_at_3
value: 15.193000000000001
- type: map_at_5
value: 18.116
- type: mrr_at_1
value: 74.0
- type: mrr_at_10
value: 80.047
- type: mrr_at_100
value: 80.406
- type: mrr_at_1000
value: 80.414
- type: mrr_at_3
value: 78.667
- type: mrr_at_5
value: 79.467
- type: ndcg_at_1
value: 61.875
- type: ndcg_at_10
value: 46.544999999999995
- type: ndcg_at_100
value: 51.097
- type: ndcg_at_1000
value: 58.331999999999994
- type: ndcg_at_3
value: 51.622
- type: ndcg_at_5
value: 49.016
- type: precision_at_1
value: 74.0
- type: precision_at_10
value: 37.325
- type: precision_at_100
value: 11.743
- type: precision_at_1000
value: 2.423
- type: precision_at_3
value: 54.75
- type: precision_at_5
value: 47.699999999999996
- type: recall_at_1
value: 9.483
- type: recall_at_10
value: 27.477
- type: recall_at_100
value: 57.099999999999994
- type: recall_at_1000
value: 80.56
- type: recall_at_3
value: 16.543
- type: recall_at_5
value: 20.830000000000002
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 50.06
- type: f1
value: 44.99375486940016
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.94
- type: map_at_10
value: 80.854
- type: map_at_100
value: 81.096
- type: map_at_1000
value: 81.109
- type: map_at_3
value: 79.589
- type: map_at_5
value: 80.431
- type: mrr_at_1
value: 76.44800000000001
- type: mrr_at_10
value: 85.07000000000001
- type: mrr_at_100
value: 85.168
- type: mrr_at_1000
value: 85.17
- type: mrr_at_3
value: 84.221
- type: mrr_at_5
value: 84.832
- type: ndcg_at_1
value: 76.44800000000001
- type: ndcg_at_10
value: 85.019
- type: ndcg_at_100
value: 85.886
- type: ndcg_at_1000
value: 86.09400000000001
- type: ndcg_at_3
value: 83.023
- type: ndcg_at_5
value: 84.223
- type: precision_at_1
value: 76.44800000000001
- type: precision_at_10
value: 10.405000000000001
- type: precision_at_100
value: 1.105
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 32.208
- type: precision_at_5
value: 20.122999999999998
- type: recall_at_1
value: 70.94
- type: recall_at_10
value: 93.508
- type: recall_at_100
value: 96.962
- type: recall_at_1000
value: 98.24300000000001
- type: recall_at_3
value: 88.17099999999999
- type: recall_at_5
value: 91.191
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.844
- type: map_at_10
value: 41.629
- type: map_at_100
value: 43.766
- type: map_at_1000
value: 43.916
- type: map_at_3
value: 35.992000000000004
- type: map_at_5
value: 39.302
- type: mrr_at_1
value: 45.988
- type: mrr_at_10
value: 56.050999999999995
- type: mrr_at_100
value: 56.741
- type: mrr_at_1000
value: 56.767999999999994
- type: mrr_at_3
value: 53.498000000000005
- type: mrr_at_5
value: 55.071999999999996
- type: ndcg_at_1
value: 45.988
- type: ndcg_at_10
value: 49.891999999999996
- type: ndcg_at_100
value: 56.727000000000004
- type: ndcg_at_1000
value: 58.952000000000005
- type: ndcg_at_3
value: 45.09
- type: ndcg_at_5
value: 46.943
- type: precision_at_1
value: 45.988
- type: precision_at_10
value: 13.980999999999998
- type: precision_at_100
value: 2.136
- type: precision_at_1000
value: 0.252
- type: precision_at_3
value: 30.556
- type: precision_at_5
value: 22.778000000000002
- type: recall_at_1
value: 23.844
- type: recall_at_10
value: 58.46
- type: recall_at_100
value: 82.811
- type: recall_at_1000
value: 96.084
- type: recall_at_3
value: 41.636
- type: recall_at_5
value: 49.271
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.108
- type: map_at_10
value: 65.846
- type: map_at_100
value: 66.691
- type: map_at_1000
value: 66.743
- type: map_at_3
value: 62.09
- type: map_at_5
value: 64.412
- type: mrr_at_1
value: 80.216
- type: mrr_at_10
value: 85.768
- type: mrr_at_100
value: 85.92699999999999
- type: mrr_at_1000
value: 85.932
- type: mrr_at_3
value: 85.012
- type: mrr_at_5
value: 85.495
- type: ndcg_at_1
value: 80.216
- type: ndcg_at_10
value: 73.833
- type: ndcg_at_100
value: 76.68
- type: ndcg_at_1000
value: 77.639
- type: ndcg_at_3
value: 68.7
- type: ndcg_at_5
value: 71.514
- type: precision_at_1
value: 80.216
- type: precision_at_10
value: 15.616
- type: precision_at_100
value: 1.783
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 44.483
- type: precision_at_5
value: 28.904999999999998
- type: recall_at_1
value: 40.108
- type: recall_at_10
value: 78.082
- type: recall_at_100
value: 89.129
- type: recall_at_1000
value: 95.381
- type: recall_at_3
value: 66.725
- type: recall_at_5
value: 72.262
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.3208
- type: ap
value: 91.64852216825692
- type: f1
value: 94.31672442494217
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 16.954
- type: map_at_10
value: 28.605000000000004
- type: map_at_100
value: 29.875
- type: map_at_1000
value: 29.934
- type: map_at_3
value: 24.57
- type: map_at_5
value: 26.845000000000002
- type: mrr_at_1
value: 17.407
- type: mrr_at_10
value: 29.082
- type: mrr_at_100
value: 30.309
- type: mrr_at_1000
value: 30.361
- type: mrr_at_3
value: 25.112000000000002
- type: mrr_at_5
value: 27.37
- type: ndcg_at_1
value: 17.407
- type: ndcg_at_10
value: 35.555
- type: ndcg_at_100
value: 41.808
- type: ndcg_at_1000
value: 43.277
- type: ndcg_at_3
value: 27.291999999999998
- type: ndcg_at_5
value: 31.369999999999997
- type: precision_at_1
value: 17.407
- type: precision_at_10
value: 5.9670000000000005
- type: precision_at_100
value: 0.9119999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 11.939
- type: precision_at_5
value: 9.223
- type: recall_at_1
value: 16.954
- type: recall_at_10
value: 57.216
- type: recall_at_100
value: 86.384
- type: recall_at_1000
value: 97.64
- type: recall_at_3
value: 34.660999999999994
- type: recall_at_5
value: 44.484
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.29183766529867
- type: f1
value: 95.01282555921513
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.07934336525307
- type: f1
value: 69.58693991783085
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 79.71755211835911
- type: f1
value: 77.08207736007755
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 80.71191664406739
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.5355083590869
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.24173539348128
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.84293003435578
- type: mrr
value: 34.09721970493348
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.369
- type: map_at_10
value: 14.892
- type: map_at_100
value: 18.884999999999998
- type: map_at_1000
value: 20.43
- type: map_at_3
value: 10.735999999999999
- type: map_at_5
value: 12.703000000000001
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.948
- type: mrr_at_100
value: 60.422
- type: mrr_at_1000
value: 60.455999999999996
- type: mrr_at_3
value: 58.204
- type: mrr_at_5
value: 59.35
- type: ndcg_at_1
value: 47.678
- type: ndcg_at_10
value: 39.050000000000004
- type: ndcg_at_100
value: 35.905
- type: ndcg_at_1000
value: 44.662
- type: ndcg_at_3
value: 44.781
- type: ndcg_at_5
value: 42.549
- type: precision_at_1
value: 49.226
- type: precision_at_10
value: 28.762
- type: precision_at_100
value: 8.767999999999999
- type: precision_at_1000
value: 2.169
- type: precision_at_3
value: 41.796
- type: precision_at_5
value: 37.09
- type: recall_at_1
value: 6.369
- type: recall_at_10
value: 19.842000000000002
- type: recall_at_100
value: 37.017
- type: recall_at_1000
value: 68.444
- type: recall_at_3
value: 12.446
- type: recall_at_5
value: 15.525
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.663
- type: map_at_10
value: 56.252
- type: map_at_100
value: 57.018
- type: map_at_1000
value: 57.031
- type: map_at_3
value: 52.020999999999994
- type: map_at_5
value: 54.626
- type: mrr_at_1
value: 44.699
- type: mrr_at_10
value: 58.819
- type: mrr_at_100
value: 59.351
- type: mrr_at_1000
value: 59.358
- type: mrr_at_3
value: 55.615
- type: mrr_at_5
value: 57.598000000000006
- type: ndcg_at_1
value: 44.699
- type: ndcg_at_10
value: 63.873999999999995
- type: ndcg_at_100
value: 66.973
- type: ndcg_at_1000
value: 67.23700000000001
- type: ndcg_at_3
value: 56.25599999999999
- type: ndcg_at_5
value: 60.44199999999999
- type: precision_at_1
value: 44.699
- type: precision_at_10
value: 10.075000000000001
- type: precision_at_100
value: 1.185
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.202999999999996
- type: precision_at_5
value: 17.584
- type: recall_at_1
value: 39.663
- type: recall_at_10
value: 84.313
- type: recall_at_100
value: 97.56700000000001
- type: recall_at_1000
value: 99.44
- type: recall_at_3
value: 64.938
- type: recall_at_5
value: 74.515
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.708
- type: map_at_10
value: 83.86099999999999
- type: map_at_100
value: 84.513
- type: map_at_1000
value: 84.53
- type: map_at_3
value: 80.854
- type: map_at_5
value: 82.757
- type: mrr_at_1
value: 80.15
- type: mrr_at_10
value: 86.70400000000001
- type: mrr_at_100
value: 86.81400000000001
- type: mrr_at_1000
value: 86.815
- type: mrr_at_3
value: 85.658
- type: mrr_at_5
value: 86.37599999999999
- type: ndcg_at_1
value: 80.17
- type: ndcg_at_10
value: 87.7
- type: ndcg_at_100
value: 88.979
- type: ndcg_at_1000
value: 89.079
- type: ndcg_at_3
value: 84.71600000000001
- type: ndcg_at_5
value: 86.385
- type: precision_at_1
value: 80.17
- type: precision_at_10
value: 13.369
- type: precision_at_100
value: 1.53
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.123
- type: precision_at_5
value: 24.498
- type: recall_at_1
value: 69.708
- type: recall_at_10
value: 95.17099999999999
- type: recall_at_100
value: 99.529
- type: recall_at_1000
value: 99.97500000000001
- type: recall_at_3
value: 86.761
- type: recall_at_5
value: 91.34
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 63.005610557842786
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 65.85897055439158
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.388
- type: map_at_10
value: 14.087
- type: map_at_100
value: 16.618
- type: map_at_1000
value: 16.967
- type: map_at_3
value: 9.8
- type: map_at_5
value: 11.907
- type: mrr_at_1
value: 26.5
- type: mrr_at_10
value: 37.905
- type: mrr_at_100
value: 39.053
- type: mrr_at_1000
value: 39.091
- type: mrr_at_3
value: 34.567
- type: mrr_at_5
value: 36.307
- type: ndcg_at_1
value: 26.5
- type: ndcg_at_10
value: 23.06
- type: ndcg_at_100
value: 32.164
- type: ndcg_at_1000
value: 37.574000000000005
- type: ndcg_at_3
value: 21.623
- type: ndcg_at_5
value: 18.95
- type: precision_at_1
value: 26.5
- type: precision_at_10
value: 12.030000000000001
- type: precision_at_100
value: 2.5020000000000002
- type: precision_at_1000
value: 0.379
- type: precision_at_3
value: 20.200000000000003
- type: precision_at_5
value: 16.64
- type: recall_at_1
value: 5.388
- type: recall_at_10
value: 24.375
- type: recall_at_100
value: 50.818
- type: recall_at_1000
value: 76.86699999999999
- type: recall_at_3
value: 12.273
- type: recall_at_5
value: 16.858
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.09465497223438
- type: cos_sim_spearman
value: 80.55601111843897
- type: euclidean_pearson
value: 82.40135168520864
- type: euclidean_spearman
value: 80.05606361845396
- type: manhattan_pearson
value: 82.24092291787754
- type: manhattan_spearman
value: 79.89739846820373
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.14210597635189
- type: cos_sim_spearman
value: 73.69447481152118
- type: euclidean_pearson
value: 75.08507068029972
- type: euclidean_spearman
value: 71.04077458564372
- type: manhattan_pearson
value: 75.64918699307383
- type: manhattan_spearman
value: 71.61677355593945
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.41396417076866
- type: cos_sim_spearman
value: 85.82245898186092
- type: euclidean_pearson
value: 85.58527168297935
- type: euclidean_spearman
value: 85.94613250938504
- type: manhattan_pearson
value: 85.88114899068759
- type: manhattan_spearman
value: 86.42494392145366
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.7431948980468
- type: cos_sim_spearman
value: 82.05114289801895
- type: euclidean_pearson
value: 83.06116666914892
- type: euclidean_spearman
value: 81.82060562251957
- type: manhattan_pearson
value: 83.1858437025367
- type: manhattan_spearman
value: 82.09604293088852
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.455985912287
- type: cos_sim_spearman
value: 88.8044343107975
- type: euclidean_pearson
value: 87.155336804123
- type: euclidean_spearman
value: 87.79371420531842
- type: manhattan_pearson
value: 87.5784376507174
- type: manhattan_spearman
value: 88.429877987816
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.1631000795076
- type: cos_sim_spearman
value: 86.20042158061408
- type: euclidean_pearson
value: 84.88605965960737
- type: euclidean_spearman
value: 85.45926745772432
- type: manhattan_pearson
value: 85.18333987666729
- type: manhattan_spearman
value: 85.86048911387192
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 91.51301667439836
- type: cos_sim_spearman
value: 91.46469919011143
- type: euclidean_pearson
value: 91.15157693133415
- type: euclidean_spearman
value: 91.02656400119739
- type: manhattan_pearson
value: 91.08411259466446
- type: manhattan_spearman
value: 90.84339904461068
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.08993728439704
- type: cos_sim_spearman
value: 69.20885645170797
- type: euclidean_pearson
value: 69.65638507632245
- type: euclidean_spearman
value: 68.69831912688514
- type: manhattan_pearson
value: 69.86621764969294
- type: manhattan_spearman
value: 69.05446631856769
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.96149243197495
- type: cos_sim_spearman
value: 87.43145597912833
- type: euclidean_pearson
value: 86.6762329641158
- type: euclidean_spearman
value: 86.67085254401809
- type: manhattan_pearson
value: 87.06412701458164
- type: manhattan_spearman
value: 87.10197412769807
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.43440918697488
- type: mrr
value: 96.3954826945023
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.494
- type: map_at_10
value: 72.074
- type: map_at_100
value: 72.475
- type: map_at_1000
value: 72.483
- type: map_at_3
value: 68.983
- type: map_at_5
value: 71.161
- type: mrr_at_1
value: 63.666999999999994
- type: mrr_at_10
value: 73.31299999999999
- type: mrr_at_100
value: 73.566
- type: mrr_at_1000
value: 73.574
- type: mrr_at_3
value: 71.111
- type: mrr_at_5
value: 72.72800000000001
- type: ndcg_at_1
value: 63.666999999999994
- type: ndcg_at_10
value: 77.024
- type: ndcg_at_100
value: 78.524
- type: ndcg_at_1000
value: 78.842
- type: ndcg_at_3
value: 72.019
- type: ndcg_at_5
value: 75.22999999999999
- type: precision_at_1
value: 63.666999999999994
- type: precision_at_10
value: 10.2
- type: precision_at_100
value: 1.103
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.111000000000004
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 60.494
- type: recall_at_10
value: 90.8
- type: recall_at_100
value: 97.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 77.644
- type: recall_at_5
value: 85.694
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.68415841584158
- type: cos_sim_ap
value: 91.23713949701548
- type: cos_sim_f1
value: 83.70221327967808
- type: cos_sim_precision
value: 84.21052631578947
- type: cos_sim_recall
value: 83.2
- type: dot_accuracy
value: 99.5
- type: dot_ap
value: 79.46312132270363
- type: dot_f1
value: 72.75320970042794
- type: dot_precision
value: 69.35630099728014
- type: dot_recall
value: 76.5
- type: euclidean_accuracy
value: 99.69108910891089
- type: euclidean_ap
value: 90.9016163254649
- type: euclidean_f1
value: 83.91752577319586
- type: euclidean_precision
value: 86.59574468085106
- type: euclidean_recall
value: 81.39999999999999
- type: manhattan_accuracy
value: 99.7039603960396
- type: manhattan_ap
value: 91.5593806619311
- type: manhattan_f1
value: 85.08124076809453
- type: manhattan_precision
value: 83.80213385063045
- type: manhattan_recall
value: 86.4
- type: max_accuracy
value: 99.7039603960396
- type: max_ap
value: 91.5593806619311
- type: max_f1
value: 85.08124076809453
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 74.40806543281603
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 38.51757703316821
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.33475593449746
- type: mrr
value: 55.3374474789916
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.249926396023596
- type: cos_sim_spearman
value: 29.820375700458158
- type: dot_pearson
value: 28.820307635930355
- type: dot_spearman
value: 28.824273052746825
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.233
- type: map_at_10
value: 2.061
- type: map_at_100
value: 12.607
- type: map_at_1000
value: 30.031000000000002
- type: map_at_3
value: 0.6669999999999999
- type: map_at_5
value: 1.091
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.067
- type: mrr_at_100
value: 93.067
- type: mrr_at_1000
value: 93.067
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 93.067
- type: ndcg_at_1
value: 84.0
- type: ndcg_at_10
value: 81.072
- type: ndcg_at_100
value: 62.875
- type: ndcg_at_1000
value: 55.641
- type: ndcg_at_3
value: 85.296
- type: ndcg_at_5
value: 84.10499999999999
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 63.7
- type: precision_at_1000
value: 24.622
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.233
- type: recall_at_10
value: 2.188
- type: recall_at_100
value: 15.52
- type: recall_at_1000
value: 52.05499999999999
- type: recall_at_3
value: 0.6859999999999999
- type: recall_at_5
value: 1.1440000000000001
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.19
- type: map_at_10
value: 11.491999999999999
- type: map_at_100
value: 17.251
- type: map_at_1000
value: 18.795
- type: map_at_3
value: 6.146
- type: map_at_5
value: 8.113
- type: mrr_at_1
value: 44.897999999999996
- type: mrr_at_10
value: 56.57
- type: mrr_at_100
value: 57.348
- type: mrr_at_1000
value: 57.357
- type: mrr_at_3
value: 52.041000000000004
- type: mrr_at_5
value: 55.408
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 27.968
- type: ndcg_at_100
value: 39.0
- type: ndcg_at_1000
value: 50.292
- type: ndcg_at_3
value: 31.256
- type: ndcg_at_5
value: 28.855999999999998
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 24.285999999999998
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.541
- type: precision_at_3
value: 30.612000000000002
- type: precision_at_5
value: 27.346999999999998
- type: recall_at_1
value: 3.19
- type: recall_at_10
value: 17.954
- type: recall_at_100
value: 48.793
- type: recall_at_1000
value: 83.357
- type: recall_at_3
value: 6.973999999999999
- type: recall_at_5
value: 10.391
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.89139999999999
- type: ap
value: 15.562539739828049
- type: f1
value: 55.38685639741247
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.48160724391625
- type: f1
value: 62.76700854121342
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 57.157071531498275
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.15503367705789
- type: cos_sim_ap
value: 77.20584529783206
- type: cos_sim_f1
value: 71.3558088770313
- type: cos_sim_precision
value: 66.02333931777379
- type: cos_sim_recall
value: 77.62532981530343
- type: dot_accuracy
value: 83.10186564940096
- type: dot_ap
value: 64.34160146443133
- type: dot_f1
value: 63.23048153342683
- type: dot_precision
value: 56.75618967687789
- type: dot_recall
value: 71.37203166226914
- type: euclidean_accuracy
value: 86.94045419324074
- type: euclidean_ap
value: 76.08471767931738
- type: euclidean_f1
value: 71.41248592518455
- type: euclidean_precision
value: 67.90387818225078
- type: euclidean_recall
value: 75.30343007915567
- type: manhattan_accuracy
value: 86.80932228646361
- type: manhattan_ap
value: 76.03862870753638
- type: manhattan_f1
value: 71.2660917385327
- type: manhattan_precision
value: 67.70363334124912
- type: manhattan_recall
value: 75.22427440633246
- type: max_accuracy
value: 87.15503367705789
- type: max_ap
value: 77.20584529783206
- type: max_f1
value: 71.41248592518455
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.42639810610471
- type: cos_sim_ap
value: 86.45196525133669
- type: cos_sim_f1
value: 79.25172592977508
- type: cos_sim_precision
value: 76.50852802063925
- type: cos_sim_recall
value: 82.19895287958116
- type: dot_accuracy
value: 87.03768385919976
- type: dot_ap
value: 80.86465404774172
- type: dot_f1
value: 74.50351637940457
- type: dot_precision
value: 70.72293324109305
- type: dot_recall
value: 78.71111795503542
- type: euclidean_accuracy
value: 89.29056545193464
- type: euclidean_ap
value: 86.25102188096191
- type: euclidean_f1
value: 79.05038057267126
- type: euclidean_precision
value: 74.681550472538
- type: euclidean_recall
value: 83.9621188789652
- type: manhattan_accuracy
value: 89.34877944657896
- type: manhattan_ap
value: 86.35336214205911
- type: manhattan_f1
value: 79.20192588269623
- type: manhattan_precision
value: 75.24951483227058
- type: manhattan_recall
value: 83.59254696643055
- type: max_accuracy
value: 89.42639810610471
- type: max_ap
value: 86.45196525133669
- type: max_f1
value: 79.25172592977508
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## GritLM/GritLM-8x7B - GGUF
This repo contains GGUF format model files for [GritLM/GritLM-8x7B](https://huggingface.co/GritLM/GritLM-8x7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<s><|user|>
{prompt}
<|assistant|>
```
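For a quick local smoke test, here is a minimal sketch of how the prompt template above might be applied through the llama-cpp-python bindings; the bindings, the chosen quant file name, and the generation settings are illustrative assumptions rather than anything documented in this repo.
```python
# Minimal sketch: load one of the GGUF files with the llama-cpp-python bindings
# (assumed installed via `pip install llama-cpp-python`) and apply the prompt template.
from llama_cpp import Llama

llm = Llama(model_path="GritLM-8x7B-Q4_K_M.gguf", n_ctx=4096)  # file name is illustrative

# The leading <s> BOS token is normally added by llama.cpp itself, so only the
# chat markers from the template are supplied here.
prompt = "<|user|>\nWhat does GritLM stand for?\n<|assistant|>\n"

output = llm(prompt, max_tokens=256, stop=["<|user|>"])
print(output["choices"][0]["text"])
```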
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [GritLM-8x7B-Q2_K.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
| [GritLM-8x7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
| [GritLM-8x7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
| [GritLM-8x7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
| [GritLM-8x7B-Q4_0.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [GritLM-8x7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
| [GritLM-8x7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
| [GritLM-8x7B-Q5_0.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [GritLM-8x7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
| [GritLM-8x7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
| [GritLM-8x7B-Q6_K.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
| [GritLM-8x7B-Q8_0.gguf](https://huggingface.co/tensorblock/GritLM-8x7B-GGUF/blob/main/GritLM-8x7B-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face command-line client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/GritLM-8x7B-GGUF --include "GritLM-8x7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/GritLM-8x7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
GoToCompany/gemma2-9b-cpt-sahabatai-v1-base | GoToCompany | null | [
"safetensors",
"gemma2",
"en",
"id",
"jv",
"su",
"arxiv:2309.06085",
"base_model:aisingapore/gemma2-9b-cpt-sea-lionv3-base",
"base_model:finetune:aisingapore/gemma2-9b-cpt-sea-lionv3-base",
"license:gemma",
"region:us"
] | 2024-11-06T03:55:55 | 2024-11-06T03:55:55 | 91 | 20 | ---
base_model:
- aisingapore/gemma2-9b-cpt-sea-lionv3-base
language:
- en
- id
- jv
- su
license: gemma
---
# Gemma2 9B CPT Sahabat-AI v1
**Sahabat-AI** (Indonesian for “close friends”) is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Indonesian language and its various dialects. The Sahabat-AI ecosystem is co-initiated by Indonesian tech and telecommunication companies: GoTo Group and Indosat Ooredoo Hutchison.
This is the card for the Gemma2 9B CPT Sahabat-AI v1 base model which has undergone continued pre-training from the [Gemma2 9B CPT SEA-Lionv3 base](https://huggingface.co/aisingapore/gemma2-9b-cpt-sea-lionv3-base) model.
## Model Details
### Model Description
The continued pre-training data for Gemma2 9B CPT Sahabat-AI v1 base model encompasses approximately 50B tokens.
- **Co-initiated by:** PT GoTo Gojek Tokopedia Tbk, Indosat Ooredoo Hutchison
- **Developed by:** PT GoTo Gojek Tokopedia Tbk, AI Singapore
- **Model type:** Decoder
- **Languages:** English, Indonesian, Javanese, Sundanese
- **License:** [Gemma Community License](https://ai.google.dev/gemma/terms)
For tokenisation, the model employs the default tokenizer used in Gemma-2-9B. The model has a context length of 8192.
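As a quick illustration of loading the base model, the sketch below uses the standard Hugging Face `transformers` API; the sample prompt and generation settings are assumptions for demonstration only, and a GPU with enough memory for a 9B model in bfloat16 is assumed.
```python
# Minimal sketch: plain-text continuation with the base (non-instruct) model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GoToCompany/gemma2-9b-cpt-sahabatai-v1-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Base models are not chat-aligned, so prompt with text to continue.
inputs = tokenizer("Ibu kota Indonesia adalah", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```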
### Benchmark Performance
We evaluated Gemma2 9B CPT Sahabat-AI v1 base model on general language capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed the
- [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
  - These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
  - We also added support for Javanese and Sundanese for the BHASA tasks whenever applicable.
- and the common English tasks from the [HuggingFace LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard).
  - These tasks consist of [IFEval, BBH, Math Lvl 5, GPQA, MuSR, and MMLU-PRO.](https://huggingface.co/docs/leaderboards/open_llm_leaderboard/about)
- **Caveat**: Our results differ from the HuggingFace LLM Leaderboard because we have used [VLLM](https://docs.vllm.ai/en/latest/) as our inference platform. VLLM caps the context size at **4096 tokens** while HuggingFace was set to **8192 tokens**.
Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
The evaluation was done **five-shot** with native prompts on a sample of 100-1000 instances for each dataset.
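To make the scoring procedure concrete, the sketch below shows one way answer-tag extraction and chance-normalisation could be implemented; the tag format, regex, and normalisation formula are illustrative assumptions, not the official SEA HELM code.
```python
# Illustrative sketch of answer-tag extraction and chance-normalisation.
import re

def extract_answer(model_output: str):
    # Assume the prompt asks the model to end with e.g. "Answer: B";
    # the exact tag format is an assumption, not the SEA HELM spec.
    match = re.search(r"Answer:\s*([A-D])", model_output)
    return match.group(1) if match else None

def normalise(accuracy: float, random_baseline: float) -> float:
    # Rescale so random-chance accuracy maps to 0 and perfect accuracy to 100.
    return max(0.0, (accuracy - random_baseline) / (1.0 - random_baseline)) * 100

print(extract_answer("The text is positive. Answer: B"))      # -> B
print(round(normalise(accuracy=0.62, random_baseline=0.25)))  # -> 49
```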
#### Results
#### SEA HELM (also known as BHASA)
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px; font-weight: bold;">Language / Model Name [Base]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv3-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Overall (Bahasa Indonesia + Javanese + Sundanese)</td>
<td style="border: 1px solid gray; padding: 8px;">42.776</td>
<td style="border: 1px solid gray; padding: 8px;">46.245</td>
<td style="border: 1px solid gray; padding: 8px;">49.160</td>
<td style="border: 1px solid gray; padding: 8px;">49.577</td>
<td style="border: 1px solid gray; padding: 8px;">48.602</td>
<td style="border: 1px solid gray; padding: 8px;">58.972</td>
<td style="border: 1px solid gray; padding: 8px;">60.913</td>
<td style="border: 1px solid gray; padding: 8px;">59.437</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">64.123</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Bahasa Indonesia</td>
<td style="border: 1px solid gray; padding: 8px;">49.341</td>
<td style="border: 1px solid gray; padding: 8px;">55.913</td>
<td style="border: 1px solid gray; padding: 8px;">47.865</td>
<td style="border: 1px solid gray; padding: 8px;">48.110</td>
<td style="border: 1px solid gray; padding: 8px;">49.154</td>
<td style="border: 1px solid gray; padding: 8px;">58.572</td>
<td style="border: 1px solid gray; padding: 8px; background-color: lightgreen;">62.437</td>
<td style="border: 1px solid gray; padding: 8px;">53.454</td>
<td style="border: 2px solid black; padding: 8px;">60.040</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Javanese</td>
<td style="border: 1px solid gray; padding: 8px;">42.774</td>
<td style="border: 1px solid gray; padding: 8px;">45.917</td>
<td style="border: 1px solid gray; padding: 8px;">54.627</td>
<td style="border: 1px solid gray; padding: 8px;">55.215</td>
<td style="border: 1px solid gray; padding: 8px;">52.728</td>
<td style="border: 1px solid gray; padding: 8px;">63.760</td>
<td style="border: 1px solid gray; padding: 8px;">63.363</td>
<td style="border: 1px solid gray; padding: 8px;">65.048</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">69.882</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Sundanese</td>
<td style="border: 1px solid gray; padding: 8px;">36.213</td>
<td style="border: 1px solid gray; padding: 8px;">36.905</td>
<td style="border: 1px solid gray; padding: 8px;">44.988</td>
<td style="border: 1px solid gray; padding: 8px;">45.407</td>
<td style="border: 1px solid gray; padding: 8px;">43.925</td>
<td style="border: 1px solid gray; padding: 8px;">54.583</td>
<td style="border: 1px solid gray; padding: 8px;">56.939</td>
<td style="border: 1px solid gray; padding: 8px;">59.809</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">62.446</td>
</tr>
</table>
#### English Results
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 1px solid gray; padding: 8px;">Model Name [BASE]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv3-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 1px solid gray; padding: 8px; font-weight: bold;">Average</td>
<td style="border: 1px solid gray; padding: 8px;">23.68</td>
<td style="border: 1px solid gray; padding: 8px; background-color: lightgreen;">24.65</td>
<td style="border: 1px solid gray; padding: 8px;">13.56</td>
<td style="border: 1px solid gray; padding: 8px;">13.69</td>
<td style="border: 1px solid gray; padding: 8px;">12.77</td>
<td style="border: 1px solid gray; padding: 8px;">13.34</td>
<td style="border: 1px solid gray; padding: 8px;">21.99</td>
<td style="border: 1px solid gray; padding: 8px;">13.92</td>
<td style="border: 2px solid black; padding: 8px;">19.62</td>
</tr>
</table>
## Training Details
### Data
Gemma2 9B CPT Sahabat-AI v1 base model was continued pre-trained on 50B tokens of the following data:
| Data Source | Unique Tokens (B) | Multiplier | Total Tokens (B) | Percentage (%)|
|---------------------------------------|:-----------------:|:----------:|:----------------:|:-------------:|
| Dolma Refined Web | 9.5 | 1 | 9.5 | 18.7 |
| Dolma arXiv | 0.6 | 1 | 0.6 | 1.18 |
| Stack V2 | 5.5 | 1 | 5.5 | 10.85 |
| Dolma Semantic Scholar | 1.2 | 1 | 1.2 | 2.37 |
| Dolma Reddit | 1.7 | 1 | 1.7 | 3.36 |
| Dolma Pes2o | 1.2 | 1 | 1.2 | 2.37 |
| Wiki* + News* - Indonesian | 1.0 | 1 | 1.0 | 1.97 |
| SEA-LION Pile - Indonesian | 27.0 | 1 | 27.0 | 53.3 |
| JV Pile - Javanese | 0.92 | 1.6 | 1.5 | 3.0 |
| SU Pile - Sundanese | 0.39 | 3.8 | 1.5 | 3.0 |
Note:
- All token counts are counted using the Gemma2 tokenizer
- Wiki* sources include Wikipedia, Wiki Books, Wiki Source, Wiki Voyage and Fandom Wiki
- News* sources include VOA and Global Voices
### Infrastructure
Gemma2 9B CPT Sahabat-AI v1 was trained using [MosaicML Composer](https://github.com/mosaicml/composer)
on the following hardware:
| Training Details | Gemma2 9B CPT Sahabat-AI v1|
|----------------------|:--------------------------:|
| Nvidia H100 80GB GPU | 32 |
| Training Duration | 7 days |
### Configuration
| HyperParameter | Gemma2 9B CPT Sahabat-AI v1|
|-------------------|:--------------------------:|
| Precision | bfloat16 |
| Optimizer | decoupled_adamw |
| Scheduler | weight_stable_decay |
| Learning Rate | 1.0e-5 |
| Global Batch Size | 256 |
| Micro Batch Size | 1 |
## Call for Collaboration
Sahabat-AI (Indonesian for “close friends”) is a **local open source Large Language Model (LLM) ecosystem in the Indonesian language**, co-initiated by Indonesian tech and telecommunication companies: GoTo Group and Indosat Ooredoo Hutchison.
The Sahabat-AI ecosystem aims to empower Indonesians who want to develop AI-based services and applications using Bahasa Indonesia and its various local dialects.
We are supported by research centers and global tech experts such as AI Singapore and Tech Mahindra to train the model for general language understanding.
We also collaborate with leading Indonesian universities such as the University of Indonesia, Gadjah Mada University, Bogor Institute of Agriculture, and Bandung Institute of Technology, as well as top Indonesian media groups such as Kompas Gramedia Group and Republika, to train and enrich the model in Bahasa Indonesia, ensuring optimal provision of local context and cultural relevance.
We would like to invite **researchers, developers, and language enthusiasts** to actively contribute to the enhancement and expansion of Sahabat-AI.
Your collaborations can involve:
- Identifying and reporting technical issues
- Sharing pre-training, instruction, and preference data
- Improving documentation usability
- Proposing and implementing new model evaluation tasks and metrics
Join us in shaping the future of Sahabat-AI by sharing your expertise and insights to make these models more accessible, accurate, and versatile.
You can contribute your ideas through [this form.](https://docs.google.com/forms/d/1_us969eQtEooYOn4XkvGkdP5VHOyCbO6L_sd9kTMnaA/edit)
## The Development Team (in ascending alphabetical order)
### AI Singapore
Chan Adwin<br>
Cheng Nicholas<br>
Choa Esther<br>
Huang Yuli<br>
Lau Wayne<br>
Lee Chwan Ren<br>
Leong Wai Yi<br>
Leong Wei Qi<br>
Limkonchotiwat Peerat<br>
Liu Bing Jie Darius<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Brandon<br>
Ong Tat-Wee David<br>
Ong Zhi Hao<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teng Walter<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
### PT GoTo Gojek Tokopedia Tbk
Anissa Dininta<br>
Chau Shiau Ching<br>
Choiri Hendra Hadhil<br>
Goel Priyank<br>
Saini Ajay Kumar<br>
Shalev Ofir<br>
Tan Daryl<br>
Tep Kilian Rithi<br>
Tiwari Anupam<br>
Widjojo Daniel<br>
## Acknowledgements
AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
## Contact
For more info, please contact us using this [Sahabat-AI Inquiry Form.](https://docs.google.com/forms/d/1_us969eQtEooYOn4XkvGkdP5VHOyCbO6L_sd9kTMnaA/edit)
## Disclaimer
This is the repository for the base model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes. | [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] |
kcccat/multilingual-e5-large-instruct-Q6_K-GGUF | kcccat | null | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:quantized:intfloat/multilingual-e5-large-instruct",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | 2025-02-21T16:16:42 | 2025-02-21T16:16:48 | 91 | 1 | ---
base_model: intfloat/multilingual-e5-large-instruct
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- sentence-transformers
- transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: multilingual-e5-large-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.23880597014924
- type: ap
value: 39.07351965022687
- type: f1
value: 70.04836733862683
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.71306209850107
- type: ap
value: 79.01499914759529
- type: f1
value: 64.81951817560703
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.85307346326837
- type: ap
value: 22.447519885878737
- type: f1
value: 61.0162730745633
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.04925053533191
- type: ap
value: 23.44983217128922
- type: f1
value: 62.5723230907759
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.28742500000001
- type: ap
value: 94.8449918887462
- type: f1
value: 96.28680923610432
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 56.716
- type: f1
value: 55.76510398266401
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 52.99999999999999
- type: f1
value: 52.00829994765178
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.806000000000004
- type: f1
value: 48.082345914983634
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.507999999999996
- type: f1
value: 47.68752844642045
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.709999999999994
- type: f1
value: 47.05870376637181
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.662000000000006
- type: f1
value: 43.42371965372771
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.721
- type: map_at_10
value: 49.221
- type: map_at_100
value: 49.884
- type: map_at_1000
value: 49.888
- type: map_at_3
value: 44.31
- type: map_at_5
value: 47.276
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 49.5
- type: mrr_at_100
value: 50.163000000000004
- type: mrr_at_1000
value: 50.166
- type: mrr_at_3
value: 44.618
- type: mrr_at_5
value: 47.541
- type: ndcg_at_1
value: 31.721
- type: ndcg_at_10
value: 58.384
- type: ndcg_at_100
value: 61.111000000000004
- type: ndcg_at_1000
value: 61.187999999999995
- type: ndcg_at_3
value: 48.386
- type: ndcg_at_5
value: 53.708999999999996
- type: precision_at_1
value: 31.721
- type: precision_at_10
value: 8.741
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.609
- type: recall_at_1
value: 31.721
- type: recall_at_10
value: 87.411
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.044
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.40419580759799
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.48593255007969
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.889179122289995
- type: mrr
value: 77.61146286769556
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.15075203727929
- type: cos_sim_spearman
value: 86.9622224570873
- type: euclidean_pearson
value: 86.70473853624121
- type: euclidean_spearman
value: 86.9622224570873
- type: manhattan_pearson
value: 86.21089380980065
- type: manhattan_spearman
value: 86.75318154937008
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.65553235908142
- type: f1
value: 99.60681976339595
- type: precision
value: 99.58246346555325
- type: recall
value: 99.65553235908142
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26260180497468
- type: f1
value: 99.14520507740848
- type: precision
value: 99.08650671362535
- type: recall
value: 99.26260180497468
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.07412538967787
- type: f1
value: 97.86629719431936
- type: precision
value: 97.76238309664012
- type: recall
value: 98.07412538967787
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.42074776197998
- type: f1
value: 99.38564156573635
- type: precision
value: 99.36808846761454
- type: recall
value: 99.42074776197998
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.73376623376623
- type: f1
value: 85.68480707214599
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.935218072113855
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.276389017675264
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.764166666666668
- type: map_at_10
value: 37.298166666666674
- type: map_at_100
value: 38.530166666666666
- type: map_at_1000
value: 38.64416666666667
- type: map_at_3
value: 34.484833333333334
- type: map_at_5
value: 36.0385
- type: mrr_at_1
value: 32.93558333333333
- type: mrr_at_10
value: 41.589749999999995
- type: mrr_at_100
value: 42.425333333333334
- type: mrr_at_1000
value: 42.476333333333336
- type: mrr_at_3
value: 39.26825
- type: mrr_at_5
value: 40.567083333333336
- type: ndcg_at_1
value: 32.93558333333333
- type: ndcg_at_10
value: 42.706583333333334
- type: ndcg_at_100
value: 47.82483333333333
- type: ndcg_at_1000
value: 49.95733333333334
- type: ndcg_at_3
value: 38.064750000000004
- type: ndcg_at_5
value: 40.18158333333333
- type: precision_at_1
value: 32.93558333333333
- type: precision_at_10
value: 7.459833333333334
- type: precision_at_100
value: 1.1830833333333335
- type: precision_at_1000
value: 0.15608333333333332
- type: precision_at_3
value: 17.5235
- type: precision_at_5
value: 12.349833333333333
- type: recall_at_1
value: 27.764166666666668
- type: recall_at_10
value: 54.31775
- type: recall_at_100
value: 76.74350000000001
- type: recall_at_1000
value: 91.45208333333332
- type: recall_at_3
value: 41.23425
- type: recall_at_5
value: 46.73983333333334
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.969
- type: map_at_10
value: 21.584999999999997
- type: map_at_100
value: 23.3
- type: map_at_1000
value: 23.5
- type: map_at_3
value: 18.218999999999998
- type: map_at_5
value: 19.983
- type: mrr_at_1
value: 29.316
- type: mrr_at_10
value: 40.033
- type: mrr_at_100
value: 40.96
- type: mrr_at_1000
value: 41.001
- type: mrr_at_3
value: 37.123
- type: mrr_at_5
value: 38.757999999999996
- type: ndcg_at_1
value: 29.316
- type: ndcg_at_10
value: 29.858
- type: ndcg_at_100
value: 36.756
- type: ndcg_at_1000
value: 40.245999999999995
- type: ndcg_at_3
value: 24.822
- type: ndcg_at_5
value: 26.565
- type: precision_at_1
value: 29.316
- type: precision_at_10
value: 9.186
- type: precision_at_100
value: 1.6549999999999998
- type: precision_at_1000
value: 0.22999999999999998
- type: precision_at_3
value: 18.436
- type: precision_at_5
value: 13.876
- type: recall_at_1
value: 12.969
- type: recall_at_10
value: 35.142
- type: recall_at_100
value: 59.143
- type: recall_at_1000
value: 78.594
- type: recall_at_3
value: 22.604
- type: recall_at_5
value: 27.883000000000003
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.527999999999999
- type: map_at_10
value: 17.974999999999998
- type: map_at_100
value: 25.665
- type: map_at_1000
value: 27.406000000000002
- type: map_at_3
value: 13.017999999999999
- type: map_at_5
value: 15.137
- type: mrr_at_1
value: 62.5
- type: mrr_at_10
value: 71.891
- type: mrr_at_100
value: 72.294
- type: mrr_at_1000
value: 72.296
- type: mrr_at_3
value: 69.958
- type: mrr_at_5
value: 71.121
- type: ndcg_at_1
value: 50.875
- type: ndcg_at_10
value: 38.36
- type: ndcg_at_100
value: 44.235
- type: ndcg_at_1000
value: 52.154
- type: ndcg_at_3
value: 43.008
- type: ndcg_at_5
value: 40.083999999999996
- type: precision_at_1
value: 62.5
- type: precision_at_10
value: 30.0
- type: precision_at_100
value: 10.038
- type: precision_at_1000
value: 2.0869999999999997
- type: precision_at_3
value: 46.833000000000006
- type: precision_at_5
value: 38.800000000000004
- type: recall_at_1
value: 8.527999999999999
- type: recall_at_10
value: 23.828
- type: recall_at_100
value: 52.322
- type: recall_at_1000
value: 77.143
- type: recall_at_3
value: 14.136000000000001
- type: recall_at_5
value: 17.761
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.51
- type: f1
value: 47.632159862049896
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.734
- type: map_at_10
value: 72.442
- type: map_at_100
value: 72.735
- type: map_at_1000
value: 72.75
- type: map_at_3
value: 70.41199999999999
- type: map_at_5
value: 71.80499999999999
- type: mrr_at_1
value: 65.212
- type: mrr_at_10
value: 76.613
- type: mrr_at_100
value: 76.79899999999999
- type: mrr_at_1000
value: 76.801
- type: mrr_at_3
value: 74.8
- type: mrr_at_5
value: 76.12400000000001
- type: ndcg_at_1
value: 65.212
- type: ndcg_at_10
value: 77.988
- type: ndcg_at_100
value: 79.167
- type: ndcg_at_1000
value: 79.452
- type: ndcg_at_3
value: 74.362
- type: ndcg_at_5
value: 76.666
- type: precision_at_1
value: 65.212
- type: precision_at_10
value: 10.003
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 29.518
- type: precision_at_5
value: 19.016
- type: recall_at_1
value: 60.734
- type: recall_at_10
value: 90.824
- type: recall_at_100
value: 95.71600000000001
- type: recall_at_1000
value: 97.577
- type: recall_at_3
value: 81.243
- type: recall_at_5
value: 86.90299999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.845
- type: map_at_10
value: 39.281
- type: map_at_100
value: 41.422
- type: map_at_1000
value: 41.593
- type: map_at_3
value: 34.467
- type: map_at_5
value: 37.017
- type: mrr_at_1
value: 47.531
- type: mrr_at_10
value: 56.204
- type: mrr_at_100
value: 56.928999999999995
- type: mrr_at_1000
value: 56.962999999999994
- type: mrr_at_3
value: 54.115
- type: mrr_at_5
value: 55.373000000000005
- type: ndcg_at_1
value: 47.531
- type: ndcg_at_10
value: 47.711999999999996
- type: ndcg_at_100
value: 54.510999999999996
- type: ndcg_at_1000
value: 57.103
- type: ndcg_at_3
value: 44.145
- type: ndcg_at_5
value: 45.032
- type: precision_at_1
value: 47.531
- type: precision_at_10
value: 13.194
- type: precision_at_100
value: 2.045
- type: precision_at_1000
value: 0.249
- type: precision_at_3
value: 29.424
- type: precision_at_5
value: 21.451
- type: recall_at_1
value: 23.845
- type: recall_at_10
value: 54.967
- type: recall_at_100
value: 79.11399999999999
- type: recall_at_1000
value: 94.56700000000001
- type: recall_at_3
value: 40.256
- type: recall_at_5
value: 46.215
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.819
- type: map_at_10
value: 60.889
- type: map_at_100
value: 61.717999999999996
- type: map_at_1000
value: 61.778
- type: map_at_3
value: 57.254000000000005
- type: map_at_5
value: 59.541
- type: mrr_at_1
value: 75.638
- type: mrr_at_10
value: 82.173
- type: mrr_at_100
value: 82.362
- type: mrr_at_1000
value: 82.37
- type: mrr_at_3
value: 81.089
- type: mrr_at_5
value: 81.827
- type: ndcg_at_1
value: 75.638
- type: ndcg_at_10
value: 69.317
- type: ndcg_at_100
value: 72.221
- type: ndcg_at_1000
value: 73.382
- type: ndcg_at_3
value: 64.14
- type: ndcg_at_5
value: 67.07600000000001
- type: precision_at_1
value: 75.638
- type: precision_at_10
value: 14.704999999999998
- type: precision_at_100
value: 1.698
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 41.394999999999996
- type: precision_at_5
value: 27.162999999999997
- type: recall_at_1
value: 37.819
- type: recall_at_10
value: 73.52499999999999
- type: recall_at_100
value: 84.875
- type: recall_at_1000
value: 92.559
- type: recall_at_3
value: 62.092999999999996
- type: recall_at_5
value: 67.907
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.60079999999999
- type: ap
value: 92.67396345347356
- type: f1
value: 94.5988098167121
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.285
- type: map_at_10
value: 33.436
- type: map_at_100
value: 34.63
- type: map_at_1000
value: 34.681
- type: map_at_3
value: 29.412
- type: map_at_5
value: 31.715
- type: mrr_at_1
value: 21.848
- type: mrr_at_10
value: 33.979
- type: mrr_at_100
value: 35.118
- type: mrr_at_1000
value: 35.162
- type: mrr_at_3
value: 30.036
- type: mrr_at_5
value: 32.298
- type: ndcg_at_1
value: 21.862000000000002
- type: ndcg_at_10
value: 40.43
- type: ndcg_at_100
value: 46.17
- type: ndcg_at_1000
value: 47.412
- type: ndcg_at_3
value: 32.221
- type: ndcg_at_5
value: 36.332
- type: precision_at_1
value: 21.862000000000002
- type: precision_at_10
value: 6.491
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.744
- type: precision_at_5
value: 10.331999999999999
- type: recall_at_1
value: 21.285
- type: recall_at_10
value: 62.083
- type: recall_at_100
value: 88.576
- type: recall_at_1000
value: 98.006
- type: recall_at_3
value: 39.729
- type: recall_at_5
value: 49.608000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.92612859097127
- type: f1
value: 93.82370333372853
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.67681036911807
- type: f1
value: 92.14191382411472
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.26817878585723
- type: f1
value: 91.92824250337878
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.96554963983714
- type: f1
value: 90.02859329630792
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.02509860164935
- type: f1
value: 89.30665159182062
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.55515370705244
- type: f1
value: 87.94449232331907
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 82.4623803009576
- type: f1
value: 66.06738378772725
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.3716539870386
- type: f1
value: 60.37614033396853
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.34022681787857
- type: f1
value: 58.302008026952
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.72095208268087
- type: f1
value: 59.64524724009049
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.87020437432773
- type: f1
value: 57.80202694670567
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.73598553345387
- type: f1
value: 58.19628250675031
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.6630800268998
- type: f1
value: 65.00996668051691
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.7128446536651
- type: f1
value: 57.95860594874963
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.61129791526563
- type: f1
value: 59.75328290206483
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.00134498991257
- type: f1
value: 67.0230483991802
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.54604628946976
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.032952252858095
- type: f1
value: 58.715741857057104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.80901143241427
- type: f1
value: 68.33963989243877
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.47141896435777
- type: f1
value: 69.56765020308262
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.2373907195696
- type: f1
value: 69.04529836036467
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.05783456624076
- type: f1
value: 74.69430584708174
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.82111634162744
- type: f1
value: 70.77228952803762
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.25353059852051
- type: f1
value: 71.05310103416411
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.28648285137861
- type: f1
value: 69.08020473732226
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.31540013449899
- type: f1
value: 70.9426355465791
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.2151983860121
- type: f1
value: 67.52541755908858
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.58372562205784
- type: f1
value: 69.49769064229827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.9233355749832
- type: f1
value: 69.36311548259593
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.07330195023538
- type: f1
value: 64.99882022345572
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.62273032952253
- type: f1
value: 70.6394885471001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.77000672494957
- type: f1
value: 62.9368944815065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.453261600538
- type: f1
value: 70.85069934666681
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6906523201076
- type: f1
value: 72.03249740074217
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.03631472763953
- type: f1
value: 59.3165215571852
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.913920645595155
- type: f1
value: 57.367337711611285
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.42837928715535
- type: f1
value: 52.60527294970906
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33490248823135
- type: f1
value: 63.213340969404065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.58507061197041
- type: f1
value: 68.40256628040486
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.11230665770006
- type: f1
value: 66.44863577842305
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.70073974445192
- type: f1
value: 67.21291337273702
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.43913920645595
- type: f1
value: 64.09838087422806
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.80026899798251
- type: f1
value: 68.76986742962444
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.78816408876934
- type: f1
value: 62.18781873428972
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.6577000672495
- type: f1
value: 68.75171511133003
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.42501681237391
- type: f1
value: 71.18434963451544
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.64828513786146
- type: f1
value: 70.67741914007422
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.62811028917284
- type: f1
value: 71.36402039740959
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.88634835238736
- type: f1
value: 69.23701923480677
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.15938130464022
- type: f1
value: 71.87792218993388
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.96301277740416
- type: f1
value: 67.29584200202983
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.49562878278412
- type: f1
value: 66.91716685679431
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6805648957633
- type: f1
value: 72.02723592594374
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.00605245460659
- type: f1
value: 60.16716669482932
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.90988567585742
- type: f1
value: 63.99405488777784
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.62273032952253
- type: f1
value: 65.17213906909481
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.50907868190988
- type: f1
value: 69.15165697194853
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.30733019502352
- type: f1
value: 66.69024007380474
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.24277067921989
- type: f1
value: 68.80515408492947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.49831876260929
- type: f1
value: 64.83778567111116
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.28782784129119
- type: f1
value: 69.3294186700733
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.315400134499
- type: f1
value: 71.22674385243207
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.37794216543377
- type: f1
value: 68.96962492838232
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.33557498318764
- type: f1
value: 72.28949738478356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.84398117014123
- type: f1
value: 64.71026362091463
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.76462676529925
- type: f1
value: 69.8229667407667
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.02420981842636
- type: f1
value: 71.76576384895898
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.7572293207801
- type: f1
value: 72.76840765295256
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.02286482851379
- type: f1
value: 66.17237947327872
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.60928043039678
- type: f1
value: 77.27094731234773
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.68325487558843
- type: f1
value: 77.97530399082261
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.13315400134498
- type: f1
value: 75.97558584796424
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.47410894418292
- type: f1
value: 80.52244841473792
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.9670477471419
- type: f1
value: 77.37318805793146
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.09683927370544
- type: f1
value: 77.69773737430847
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.20847343644922
- type: f1
value: 75.17071738727348
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.07464694014796
- type: f1
value: 77.16136207698571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.53396099529255
- type: f1
value: 73.58296404484122
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.75319435104237
- type: f1
value: 75.24674707850833
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.0948217888366
- type: f1
value: 76.47559490205028
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.07599193006052
- type: f1
value: 70.76028043093511
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.10490921318089
- type: f1
value: 77.01215275283272
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.25756556825824
- type: f1
value: 70.20605314648762
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 77.3899269057439
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.35440484196369
- type: f1
value: 79.58964690002772
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.42299932750504
- type: f1
value: 68.07844356925413
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.15669132481507
- type: f1
value: 65.89383352608513
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.11432414256894
- type: f1
value: 57.69910594559806
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.24747814391392
- type: f1
value: 70.42455553830918
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46267652992603
- type: f1
value: 76.8854559308316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.24815063887021
- type: f1
value: 72.77805034658074
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11566913248151
- type: f1
value: 73.86147988001356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.0168123739072
- type: f1
value: 69.38515920054571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.41156691324814
- type: f1
value: 73.43474953408237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.39609952925353
- type: f1
value: 67.29731681109291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.20914593140552
- type: f1
value: 77.07066497935367
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.52387357094821
- type: f1
value: 78.5259569473291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.6913248150639
- type: f1
value: 76.91201656350455
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.1217215870881
- type: f1
value: 77.41179937912504
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.25891055817083
- type: f1
value: 75.8089244542887
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.70679219905851
- type: f1
value: 78.21459594517711
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.83523873570948
- type: f1
value: 74.86847028401978
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.71755211835911
- type: f1
value: 74.0214326485662
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.06523201075991
- type: f1
value: 79.10545620325138
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.91862811028918
- type: f1
value: 66.50386121217983
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.93140551445865
- type: f1
value: 70.755435928495
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.40753194351042
- type: f1
value: 71.61816115782923
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.1815736381977
- type: f1
value: 75.08016717887205
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.86482851378614
- type: f1
value: 72.39521180006291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46940147948891
- type: f1
value: 76.70044085362349
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195024
- type: f1
value: 71.5721825332298
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.7511768661735
- type: f1
value: 75.17918654541515
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69535978480162
- type: f1
value: 78.90019070153316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.45729657027572
- type: f1
value: 76.19578371794672
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.92715354123554
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 35.53536244162518
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.08507884504006
- type: mrr
value: 34.32436977159129
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.935
- type: map_at_10
value: 13.297
- type: map_at_100
value: 16.907
- type: map_at_1000
value: 18.391
- type: map_at_3
value: 9.626999999999999
- type: map_at_5
value: 11.190999999999999
- type: mrr_at_1
value: 46.129999999999995
- type: mrr_at_10
value: 54.346000000000004
- type: mrr_at_100
value: 55.067
- type: mrr_at_1000
value: 55.1
- type: mrr_at_3
value: 51.961
- type: mrr_at_5
value: 53.246
- type: ndcg_at_1
value: 44.118
- type: ndcg_at_10
value: 35.534
- type: ndcg_at_100
value: 32.946999999999996
- type: ndcg_at_1000
value: 41.599000000000004
- type: ndcg_at_3
value: 40.25
- type: ndcg_at_5
value: 37.978
- type: precision_at_1
value: 46.129999999999995
- type: precision_at_10
value: 26.842
- type: precision_at_100
value: 8.427
- type: precision_at_1000
value: 2.128
- type: precision_at_3
value: 37.977
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.935
- type: recall_at_10
value: 17.211000000000002
- type: recall_at_100
value: 34.33
- type: recall_at_1000
value: 65.551
- type: recall_at_3
value: 10.483
- type: recall_at_5
value: 13.078999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.231
- type: map_at_10
value: 50.202000000000005
- type: map_at_100
value: 51.154999999999994
- type: map_at_1000
value: 51.181
- type: map_at_3
value: 45.774
- type: map_at_5
value: 48.522
- type: mrr_at_1
value: 39.687
- type: mrr_at_10
value: 52.88
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.58500000000001
- type: mrr_at_3
value: 49.228
- type: mrr_at_5
value: 51.525
- type: ndcg_at_1
value: 39.687
- type: ndcg_at_10
value: 57.754000000000005
- type: ndcg_at_100
value: 61.597
- type: ndcg_at_1000
value: 62.18900000000001
- type: ndcg_at_3
value: 49.55
- type: ndcg_at_5
value: 54.11899999999999
- type: precision_at_1
value: 39.687
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 22.229
- type: precision_at_5
value: 15.939
- type: recall_at_1
value: 35.231
- type: recall_at_10
value: 78.083
- type: recall_at_100
value: 94.42099999999999
- type: recall_at_1000
value: 98.81
- type: recall_at_3
value: 57.047000000000004
- type: recall_at_5
value: 67.637
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.241
- type: map_at_10
value: 85.462
- type: map_at_100
value: 86.083
- type: map_at_1000
value: 86.09700000000001
- type: map_at_3
value: 82.49499999999999
- type: map_at_5
value: 84.392
- type: mrr_at_1
value: 82.09
- type: mrr_at_10
value: 88.301
- type: mrr_at_100
value: 88.383
- type: mrr_at_1000
value: 88.384
- type: mrr_at_3
value: 87.37
- type: mrr_at_5
value: 88.035
- type: ndcg_at_1
value: 82.12
- type: ndcg_at_10
value: 89.149
- type: ndcg_at_100
value: 90.235
- type: ndcg_at_1000
value: 90.307
- type: ndcg_at_3
value: 86.37599999999999
- type: ndcg_at_5
value: 87.964
- type: precision_at_1
value: 82.12
- type: precision_at_10
value: 13.56
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.88
- type: precision_at_5
value: 24.92
- type: recall_at_1
value: 71.241
- type: recall_at_10
value: 96.128
- type: recall_at_100
value: 99.696
- type: recall_at_1000
value: 99.994
- type: recall_at_3
value: 88.181
- type: recall_at_5
value: 92.694
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.59757799655151
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.27391998854624
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.243
- type: map_at_10
value: 10.965
- type: map_at_100
value: 12.934999999999999
- type: map_at_1000
value: 13.256
- type: map_at_3
value: 7.907
- type: map_at_5
value: 9.435
- type: mrr_at_1
value: 20.9
- type: mrr_at_10
value: 31.849
- type: mrr_at_100
value: 32.964
- type: mrr_at_1000
value: 33.024
- type: mrr_at_3
value: 28.517
- type: mrr_at_5
value: 30.381999999999998
- type: ndcg_at_1
value: 20.9
- type: ndcg_at_10
value: 18.723
- type: ndcg_at_100
value: 26.384999999999998
- type: ndcg_at_1000
value: 32.114
- type: ndcg_at_3
value: 17.753
- type: ndcg_at_5
value: 15.558
- type: precision_at_1
value: 20.9
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 2.078
- type: precision_at_1000
value: 0.345
- type: precision_at_3
value: 16.900000000000002
- type: precision_at_5
value: 13.88
- type: recall_at_1
value: 4.243
- type: recall_at_10
value: 19.885
- type: recall_at_100
value: 42.17
- type: recall_at_1000
value: 70.12
- type: recall_at_3
value: 10.288
- type: recall_at_5
value: 14.072000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.84209174935282
- type: cos_sim_spearman
value: 81.73248048438833
- type: euclidean_pearson
value: 83.02810070308149
- type: euclidean_spearman
value: 81.73248295679514
- type: manhattan_pearson
value: 82.95368060376002
- type: manhattan_spearman
value: 81.60277910998718
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 88.52628804556943
- type: cos_sim_spearman
value: 82.5713913555672
- type: euclidean_pearson
value: 85.8796774746988
- type: euclidean_spearman
value: 82.57137506803424
- type: manhattan_pearson
value: 85.79671002960058
- type: manhattan_spearman
value: 82.49445981618027
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 86.23682503505542
- type: cos_sim_spearman
value: 87.15008956711806
- type: euclidean_pearson
value: 86.79805401524959
- type: euclidean_spearman
value: 87.15008956711806
- type: manhattan_pearson
value: 86.65298502699244
- type: manhattan_spearman
value: 86.97677821948562
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.63370304677802
- type: cos_sim_spearman
value: 84.97105553540318
- type: euclidean_pearson
value: 85.28896108687721
- type: euclidean_spearman
value: 84.97105553540318
- type: manhattan_pearson
value: 85.09663190337331
- type: manhattan_spearman
value: 84.79126831644619
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 90.2614838800733
- type: cos_sim_spearman
value: 91.0509162991835
- type: euclidean_pearson
value: 90.33098317533373
- type: euclidean_spearman
value: 91.05091625871644
- type: manhattan_pearson
value: 90.26250435151107
- type: manhattan_spearman
value: 90.97999594417519
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.80480973335091
- type: cos_sim_spearman
value: 87.313695492969
- type: euclidean_pearson
value: 86.49267251576939
- type: euclidean_spearman
value: 87.313695492969
- type: manhattan_pearson
value: 86.44019901831935
- type: manhattan_spearman
value: 87.24205395460392
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.05662789380672
- type: cos_sim_spearman
value: 90.02759424426651
- type: euclidean_pearson
value: 90.4042483422981
- type: euclidean_spearman
value: 90.02759424426651
- type: manhattan_pearson
value: 90.51446975000226
- type: manhattan_spearman
value: 90.08832889933616
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.5975528273532
- type: cos_sim_spearman
value: 67.62969861411354
- type: euclidean_pearson
value: 69.224275734323
- type: euclidean_spearman
value: 67.62969861411354
- type: manhattan_pearson
value: 69.3761447059927
- type: manhattan_spearman
value: 67.90921005611467
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.11244327231684
- type: cos_sim_spearman
value: 88.37902438979035
- type: euclidean_pearson
value: 87.86054279847336
- type: euclidean_spearman
value: 88.37902438979035
- type: manhattan_pearson
value: 87.77257757320378
- type: manhattan_spearman
value: 88.25208966098123
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.87174608143563
- type: mrr
value: 96.12836872640794
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.258
- type: map_at_100
value: 67.757
- type: map_at_1000
value: 67.78800000000001
- type: map_at_3
value: 64.602
- type: map_at_5
value: 65.64
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.441
- type: mrr_at_100
value: 68.825
- type: mrr_at_1000
value: 68.853
- type: mrr_at_3
value: 66.444
- type: mrr_at_5
value: 67.26100000000001
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.852
- type: ndcg_at_100
value: 73.9
- type: ndcg_at_1000
value: 74.628
- type: ndcg_at_3
value: 67.093
- type: ndcg_at_5
value: 68.58
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.111
- type: precision_at_5
value: 16.733
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.967
- type: recall_at_100
value: 93.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.589
- type: recall_at_5
value: 75.483
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.66633663366336
- type: cos_sim_ap
value: 91.17685358899108
- type: cos_sim_f1
value: 82.16818642350559
- type: cos_sim_precision
value: 83.26488706365504
- type: cos_sim_recall
value: 81.10000000000001
- type: dot_accuracy
value: 99.66633663366336
- type: dot_ap
value: 91.17663411119032
- type: dot_f1
value: 82.16818642350559
- type: dot_precision
value: 83.26488706365504
- type: dot_recall
value: 81.10000000000001
- type: euclidean_accuracy
value: 99.66633663366336
- type: euclidean_ap
value: 91.17685189882275
- type: euclidean_f1
value: 82.16818642350559
- type: euclidean_precision
value: 83.26488706365504
- type: euclidean_recall
value: 81.10000000000001
- type: manhattan_accuracy
value: 99.66633663366336
- type: manhattan_ap
value: 91.2241619496737
- type: manhattan_f1
value: 82.20472440944883
- type: manhattan_precision
value: 86.51933701657458
- type: manhattan_recall
value: 78.3
- type: max_accuracy
value: 99.66633663366336
- type: max_ap
value: 91.2241619496737
- type: max_f1
value: 82.20472440944883
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.85101268897951
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 42.461184054706905
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.44542568873886
- type: mrr
value: 52.33656151854681
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.75982974997539
- type: cos_sim_spearman
value: 30.385405026539914
- type: dot_pearson
value: 30.75982433546523
- type: dot_spearman
value: 30.385405026539914
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22799999999999998
- type: map_at_10
value: 2.064
- type: map_at_100
value: 13.056000000000001
- type: map_at_1000
value: 31.747999999999998
- type: map_at_3
value: 0.67
- type: map_at_5
value: 1.097
- type: mrr_at_1
value: 90.0
- type: mrr_at_10
value: 94.667
- type: mrr_at_100
value: 94.667
- type: mrr_at_1000
value: 94.667
- type: mrr_at_3
value: 94.667
- type: mrr_at_5
value: 94.667
- type: ndcg_at_1
value: 86.0
- type: ndcg_at_10
value: 82.0
- type: ndcg_at_100
value: 64.307
- type: ndcg_at_1000
value: 57.023999999999994
- type: ndcg_at_3
value: 85.816
- type: ndcg_at_5
value: 84.904
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 85.8
- type: precision_at_100
value: 66.46
- type: precision_at_1000
value: 25.202
- type: precision_at_3
value: 90.0
- type: precision_at_5
value: 89.2
- type: recall_at_1
value: 0.22799999999999998
- type: recall_at_10
value: 2.235
- type: recall_at_100
value: 16.185
- type: recall_at_1000
value: 53.620999999999995
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.172
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.75
- type: precision
value: 96.45
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.54913294797689
- type: f1
value: 82.46628131021194
- type: precision
value: 81.1175337186898
- type: recall
value: 85.54913294797689
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.21951219512195
- type: f1
value: 77.33333333333334
- type: precision
value: 75.54878048780488
- type: recall
value: 81.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.26666666666665
- type: precision
value: 98.1
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.5
- type: f1
value: 99.33333333333333
- type: precision
value: 99.25
- type: recall
value: 99.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.2
- type: precision
value: 96.89999999999999
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.18333333333334
- type: precision
value: 96.88333333333333
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.61194029850746
- type: f1
value: 72.81094527363183
- type: precision
value: 70.83333333333333
- type: recall
value: 77.61194029850746
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.91666666666667
- type: precision
value: 91.08333333333334
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.29268292682927
- type: f1
value: 85.27642276422765
- type: precision
value: 84.01277584204414
- type: recall
value: 88.29268292682927
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.0
- type: precision
value: 94.46666666666668
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.681652490887
- type: f1
value: 91.90765492102065
- type: precision
value: 91.05913325232888
- type: recall
value: 93.681652490887
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.17391304347827
- type: f1
value: 89.97101449275361
- type: precision
value: 88.96811594202899
- type: recall
value: 92.17391304347827
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.43478260869566
- type: f1
value: 87.72173913043478
- type: precision
value: 86.42028985507245
- type: recall
value: 90.43478260869566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.03
- type: precision
value: 86.95
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.4
- type: f1
value: 91.45666666666666
- type: precision
value: 90.525
- type: recall
value: 93.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.9059107358263
- type: f1
value: 78.32557872364869
- type: precision
value: 76.78260286824823
- type: recall
value: 81.9059107358263
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.58333333333333
- type: precision
value: 91.73333333333332
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.10000000000001
- type: f1
value: 74.50500000000001
- type: precision
value: 72.58928571428571
- type: recall
value: 79.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.55
- type: precision
value: 95.05
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.0952380952381
- type: f1
value: 77.98458049886621
- type: precision
value: 76.1968253968254
- type: recall
value: 82.0952380952381
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.9
- type: f1
value: 84.99190476190476
- type: precision
value: 83.65
- type: recall
value: 87.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.56666666666666
- type: precision
value: 94.01666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.2
- type: precision
value: 98.0
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.38333333333334
- type: precision
value: 93.78333333333335
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.4
- type: f1
value: 84.10380952380952
- type: precision
value: 82.67
- type: recall
value: 87.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.33333333333334
- type: precision
value: 93.78333333333333
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.4
- type: f1
value: 86.82000000000001
- type: precision
value: 85.64500000000001
- type: recall
value: 89.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.1
- type: f1
value: 93.56666666666668
- type: precision
value: 92.81666666666666
- type: recall
value: 95.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.9
- type: f1
value: 98.6
- type: precision
value: 98.45
- type: recall
value: 98.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.01347708894879
- type: f1
value: 93.51752021563343
- type: precision
value: 92.82794249775381
- type: recall
value: 95.01347708894879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.00854700854701
- type: f1
value: 96.08262108262107
- type: precision
value: 95.65527065527067
- type: recall
value: 97.00854700854701
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5
- type: f1
value: 95.39999999999999
- type: precision
value: 94.88333333333333
- type: recall
value: 96.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5909090909091
- type: f1
value: 95.49242424242425
- type: precision
value: 94.9621212121212
- type: recall
value: 96.5909090909091
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.90566037735849
- type: f1
value: 81.85883997204752
- type: precision
value: 80.54507337526205
- type: recall
value: 84.90566037735849
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5
- type: f1
value: 96.75
- type: precision
value: 96.38333333333333
- type: recall
value: 97.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7704280155642
- type: f1
value: 82.99610894941635
- type: precision
value: 81.32295719844358
- type: recall
value: 86.7704280155642
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.52136752136752
- type: f1
value: 61.89662189662191
- type: precision
value: 59.68660968660969
- type: recall
value: 67.52136752136752
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 86.32
- type: precision
value: 85.015
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.0
- type: f1
value: 94.78333333333333
- type: precision
value: 94.18333333333334
- type: recall
value: 96.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.8785046728972
- type: f1
value: 80.54517133956385
- type: precision
value: 79.154984423676
- type: recall
value: 83.8785046728972
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.01333333333334
- type: precision
value: 91.28333333333333
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.1
- type: f1
value: 96.26666666666667
- type: precision
value: 95.85000000000001
- type: recall
value: 97.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.3
- type: f1
value: 80.67833333333333
- type: precision
value: 79.03928571428571
- type: recall
value: 84.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.3
- type: f1
value: 96.48333333333332
- type: precision
value: 96.08333333333331
- type: recall
value: 97.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.66666666666667
- type: precision
value: 94.16666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.36666666666667
- type: precision
value: 95.96666666666668
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.80666666666667
- type: precision
value: 92.12833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.22333333333334
- type: precision
value: 95.875
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.33333333333333
- type: f1
value: 70.78174603174602
- type: precision
value: 69.28333333333332
- type: recall
value: 74.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.6
- type: f1
value: 32.938348952090365
- type: precision
value: 31.2811038961039
- type: recall
value: 37.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.5
- type: f1
value: 89.13333333333333
- type: precision
value: 88.03333333333333
- type: recall
value: 91.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.14285714285714
- type: f1
value: 77.67857142857143
- type: precision
value: 75.59523809523809
- type: recall
value: 82.14285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.0450054884742
- type: f1
value: 63.070409283362075
- type: precision
value: 60.58992781824835
- type: recall
value: 69.0450054884742
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.1
- type: f1
value: 57.848333333333336
- type: precision
value: 55.69500000000001
- type: recall
value: 63.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.01666666666667
- type: precision
value: 94.5
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.90666666666667
- type: precision
value: 94.425
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.61333333333333
- type: precision
value: 83.27
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.4
- type: f1
value: 71.90746031746032
- type: precision
value: 70.07027777777778
- type: recall
value: 76.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.26666666666667
- type: precision
value: 96.95
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.8
- type: f1
value: 74.39555555555555
- type: precision
value: 72.59416666666667
- type: recall
value: 78.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.78999999999999
- type: precision
value: 93.125
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.75
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.25666666666666
- type: precision
value: 93.64166666666668
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.934306569343065
- type: f1
value: 51.461591936044485
- type: precision
value: 49.37434827945776
- type: recall
value: 56.934306569343065
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.200000000000003
- type: f1
value: 16.91799284049284
- type: precision
value: 15.791855158730158
- type: recall
value: 20.200000000000003
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.2
- type: f1
value: 95.3
- type: precision
value: 94.85
- type: recall
value: 96.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.3
- type: f1
value: 95.11666666666667
- type: precision
value: 94.53333333333333
- type: recall
value: 96.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.88095238095238
- type: f1
value: 87.14285714285714
- type: precision
value: 85.96230158730161
- type: recall
value: 89.88095238095238
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 24.099999999999998
- type: f1
value: 19.630969083349783
- type: precision
value: 18.275094905094907
- type: recall
value: 24.099999999999998
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.4368530020704
- type: f1
value: 79.45183870649709
- type: precision
value: 77.7432712215321
- type: recall
value: 83.4368530020704
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.53333333333333
- type: precision
value: 93.91666666666666
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.8
- type: f1
value: 98.48333333333332
- type: precision
value: 98.33333333333334
- type: recall
value: 98.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.5
- type: f1
value: 14.979285714285714
- type: precision
value: 14.23235060690943
- type: recall
value: 17.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.93939393939394
- type: f1
value: 91.991341991342
- type: precision
value: 91.05339105339105
- type: recall
value: 93.93939393939394
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.31297709923665
- type: f1
value: 86.76844783715012
- type: precision
value: 85.63613231552164
- type: recall
value: 89.31297709923665
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.12663755458514
- type: f1
value: 98.93255701115964
- type: precision
value: 98.83551673944687
- type: recall
value: 99.12663755458514
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.0
- type: f1
value: 89.77999999999999
- type: precision
value: 88.78333333333333
- type: recall
value: 92.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.89265536723164
- type: f1
value: 95.85687382297553
- type: precision
value: 95.33898305084746
- type: recall
value: 96.89265536723164
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.6
- type: f1
value: 11.820611790170615
- type: precision
value: 11.022616224355355
- type: recall
value: 14.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.93333333333334
- type: precision
value: 94.48666666666666
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.72333333333334
- type: precision
value: 83.44166666666666
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.47333333333333
- type: precision
value: 92.875
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.71666666666665
- type: precision
value: 95.28333333333335
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.8
- type: f1
value: 14.511074040901628
- type: precision
value: 13.503791000666002
- type: recall
value: 17.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.10187667560321
- type: f1
value: 92.46648793565683
- type: precision
value: 91.71134941912423
- type: recall
value: 94.10187667560321
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.11666666666666
- type: precision
value: 95.68333333333334
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.72727272727273
- type: f1
value: 66.58949745906267
- type: precision
value: 63.86693017127799
- type: recall
value: 72.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.14084507042254
- type: f1
value: 88.26291079812206
- type: precision
value: 87.32394366197182
- type: recall
value: 90.14084507042254
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.67065868263472
- type: f1
value: 58.2876627696987
- type: precision
value: 55.79255774165953
- type: recall
value: 64.67065868263472
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.41666666666667
- type: precision
value: 93.85
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.172413793103445
- type: f1
value: 49.63992493549144
- type: precision
value: 47.71405113769646
- type: recall
value: 55.172413793103445
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.4417616811983
- type: precision
value: 71.91607981220658
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.61538461538461
- type: f1
value: 80.91452991452994
- type: precision
value: 79.33760683760683
- type: recall
value: 84.61538461538461
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2
- type: f1
value: 97.6
- type: precision
value: 97.3
- type: recall
value: 98.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.5741127348643
- type: f1
value: 72.00417536534445
- type: precision
value: 70.53467872883321
- type: recall
value: 75.5741127348643
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 62.2
- type: f1
value: 55.577460317460314
- type: precision
value: 52.98583333333333
- type: recall
value: 62.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.18241042345277
- type: f1
value: 90.6468124709167
- type: precision
value: 89.95656894679696
- type: recall
value: 92.18241042345277
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.13333333333333
- type: precision
value: 94.66666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.85000000000001
- type: precision
value: 95.39999999999999
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.1259842519685
- type: f1
value: 89.76377952755905
- type: precision
value: 88.71391076115485
- type: recall
value: 92.1259842519685
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.49
- type: precision
value: 91.725
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.5623268698061
- type: f1
value: 73.27364463791058
- type: precision
value: 71.51947852086357
- type: recall
value: 77.5623268698061
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.56666666666666
- type: precision
value: 96.16666666666667
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34615384615384
- type: f1
value: 61.092032967032964
- type: precision
value: 59.27197802197802
- type: recall
value: 66.34615384615384
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.41190476190476
- type: precision
value: 92.7
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.10000000000001
- type: precision
value: 90.13333333333333
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.97333333333334
- type: precision
value: 91.14166666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.21698113207547
- type: f1
value: 90.3796046720575
- type: precision
value: 89.56367924528303
- type: recall
value: 92.21698113207547
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.6
- type: f1
value: 96.91666666666667
- type: precision
value: 96.6
- type: recall
value: 97.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.44525547445255
- type: f1
value: 96.71532846715328
- type: precision
value: 96.35036496350365
- type: recall
value: 97.44525547445255
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.34000000000002
- type: precision
value: 91.49166666666667
- type: recall
value: 94.1
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.2910000000000004
- type: map_at_10
value: 10.373000000000001
- type: map_at_100
value: 15.612
- type: map_at_1000
value: 17.06
- type: map_at_3
value: 6.119
- type: map_at_5
value: 7.917000000000001
- type: mrr_at_1
value: 44.897999999999996
- type: mrr_at_10
value: 56.054
- type: mrr_at_100
value: 56.82000000000001
- type: mrr_at_1000
value: 56.82000000000001
- type: mrr_at_3
value: 52.381
- type: mrr_at_5
value: 53.81
- type: ndcg_at_1
value: 42.857
- type: ndcg_at_10
value: 27.249000000000002
- type: ndcg_at_100
value: 36.529
- type: ndcg_at_1000
value: 48.136
- type: ndcg_at_3
value: 33.938
- type: ndcg_at_5
value: 29.951
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 22.653000000000002
- type: precision_at_100
value: 7.000000000000001
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 32.653
- type: precision_at_5
value: 27.755000000000003
- type: recall_at_1
value: 3.2910000000000004
- type: recall_at_10
value: 16.16
- type: recall_at_100
value: 43.908
- type: recall_at_1000
value: 79.823
- type: recall_at_3
value: 7.156
- type: recall_at_5
value: 10.204
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.05879999999999
- type: ap
value: 14.609748142799111
- type: f1
value: 54.878956295843096
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.61799660441426
- type: f1
value: 64.8698191961434
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.32860036611885
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.34714192048638
- type: cos_sim_ap
value: 80.26732975975634
- type: cos_sim_f1
value: 73.53415148134374
- type: cos_sim_precision
value: 69.34767360299276
- type: cos_sim_recall
value: 78.25857519788919
- type: dot_accuracy
value: 88.34714192048638
- type: dot_ap
value: 80.26733698491206
- type: dot_f1
value: 73.53415148134374
- type: dot_precision
value: 69.34767360299276
- type: dot_recall
value: 78.25857519788919
- type: euclidean_accuracy
value: 88.34714192048638
- type: euclidean_ap
value: 80.26734337771738
- type: euclidean_f1
value: 73.53415148134374
- type: euclidean_precision
value: 69.34767360299276
- type: euclidean_recall
value: 78.25857519788919
- type: manhattan_accuracy
value: 88.30541813196639
- type: manhattan_ap
value: 80.19415808104145
- type: manhattan_f1
value: 73.55143870713441
- type: manhattan_precision
value: 73.25307511122743
- type: manhattan_recall
value: 73.85224274406332
- type: max_accuracy
value: 88.34714192048638
- type: max_ap
value: 80.26734337771738
- type: max_f1
value: 73.55143870713441
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.81061047075717
- type: cos_sim_ap
value: 87.11747055081017
- type: cos_sim_f1
value: 80.04355498817256
- type: cos_sim_precision
value: 78.1165262000733
- type: cos_sim_recall
value: 82.06806282722513
- type: dot_accuracy
value: 89.81061047075717
- type: dot_ap
value: 87.11746902745236
- type: dot_f1
value: 80.04355498817256
- type: dot_precision
value: 78.1165262000733
- type: dot_recall
value: 82.06806282722513
- type: euclidean_accuracy
value: 89.81061047075717
- type: euclidean_ap
value: 87.11746919324248
- type: euclidean_f1
value: 80.04355498817256
- type: euclidean_precision
value: 78.1165262000733
- type: euclidean_recall
value: 82.06806282722513
- type: manhattan_accuracy
value: 89.79508673885202
- type: manhattan_ap
value: 87.11074390832218
- type: manhattan_f1
value: 80.13002540726349
- type: manhattan_precision
value: 77.83826945412311
- type: manhattan_recall
value: 82.56082537727133
- type: max_accuracy
value: 89.81061047075717
- type: max_ap
value: 87.11747055081017
- type: max_f1
value: 80.13002540726349
---
# kcccat/multilingual-e5-large-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`intfloat/multilingual-e5-large-instruct`](https://huggingface.co/intfloat/multilingual-e5-large-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo kcccat/multilingual-e5-large-instruct-Q6_K-GGUF --hf-file multilingual-e5-large-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo kcccat/multilingual-e5-large-instruct-Q6_K-GGUF --hf-file multilingual-e5-large-instruct-q6_k.gguf -c 2048
```
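Once the server is up, you can query it over HTTP. The snippet below is a minimal sketch, assuming the server is running locally on llama-server's default host and port (127.0.0.1:8080) and exposing its standard `/completion` endpoint; adjust the host, port, and payload to your setup.
```bash
# Query the running llama-server instance (assumes default host/port 127.0.0.1:8080)
curl -X POST http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```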
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo kcccat/multilingual-e5-large-instruct-Q6_K-GGUF --hf-file multilingual-e5-large-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo kcccat/multilingual-e5-large-instruct-Q6_K-GGUF --hf-file multilingual-e5-large-instruct-q6_k.gguf -c 2048
```
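Since the underlying model is an embedding model rather than a chat model, you will usually want embedding vectors instead of generated text. The sketch below is an assumption-laden example, not part of the upstream instructions: recent llama.cpp builds ship an embedding example (built as `./llama-embedding`; older builds name it `./embedding`), and the upstream E5-instruct documentation describes an instruction-prefixed query format, so consult the [original model card](https://huggingface.co/intfloat/multilingual-e5-large-instruct) for the exact prompt conventions.
```bash
# Hypothetical usage of llama.cpp's embedding example with the downloaded GGUF file;
# the instruction-prefixed query format follows the upstream E5-instruct documentation.
./llama-embedding -m multilingual-e5-large-instruct-q6_k.gguf \
  -p $'Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: how much protein should a female eat'
```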
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
Muennighoff/SGPT-1.3B-weightedmean-msmarco-specb-bitfit | Muennighoff | feature-extraction | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2023-03-27T22:21:38 | 90 | 5 | ---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-1.3B-weightedmean-msmarco-specb-bitfit
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 65.20895522388061
- type: ap
value: 29.59212705444778
- type: f1
value: 59.97099864321921
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 73.20565
- type: ap
value: 67.36680643550963
- type: f1
value: 72.90420520325125
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 34.955999999999996
- type: f1
value: 34.719324437696955
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 26.101999999999997
- type: map_at_10
value: 40.958
- type: map_at_100
value: 42.033
- type: map_at_1000
value: 42.042
- type: map_at_3
value: 36.332
- type: map_at_5
value: 38.608
- type: mrr_at_1
value: 26.387
- type: mrr_at_10
value: 41.051
- type: mrr_at_100
value: 42.118
- type: mrr_at_1000
value: 42.126999999999995
- type: mrr_at_3
value: 36.415
- type: mrr_at_5
value: 38.72
- type: ndcg_at_1
value: 26.101999999999997
- type: ndcg_at_10
value: 49.68
- type: ndcg_at_100
value: 54.257999999999996
- type: ndcg_at_1000
value: 54.486000000000004
- type: ndcg_at_3
value: 39.864
- type: ndcg_at_5
value: 43.980000000000004
- type: precision_at_1
value: 26.101999999999997
- type: precision_at_10
value: 7.781000000000001
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.714000000000002
- type: precision_at_5
value: 12.034
- type: recall_at_1
value: 26.101999999999997
- type: recall_at_10
value: 77.809
- type: recall_at_100
value: 97.866
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 50.141999999999996
- type: recall_at_5
value: 60.171
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 43.384194916953774
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 33.70962633433912
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 58.133058996870076
- type: mrr
value: 72.10922041946972
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 86.62153841660047
- type: cos_sim_spearman
value: 83.01514456843276
- type: euclidean_pearson
value: 86.00431518427241
- type: euclidean_spearman
value: 83.85552516285783
- type: manhattan_pearson
value: 85.83025803351181
- type: manhattan_spearman
value: 83.86636878343106
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 82.05844155844156
- type: f1
value: 82.0185837884764
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 35.05918333141837
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 30.71055028830579
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 26.519
- type: map_at_10
value: 35.634
- type: map_at_100
value: 36.961
- type: map_at_1000
value: 37.088
- type: map_at_3
value: 32.254
- type: map_at_5
value: 34.22
- type: mrr_at_1
value: 32.332
- type: mrr_at_10
value: 41.168
- type: mrr_at_100
value: 41.977
- type: mrr_at_1000
value: 42.028999999999996
- type: mrr_at_3
value: 38.196999999999996
- type: mrr_at_5
value: 40.036
- type: ndcg_at_1
value: 32.332
- type: ndcg_at_10
value: 41.471000000000004
- type: ndcg_at_100
value: 46.955999999999996
- type: ndcg_at_1000
value: 49.262
- type: ndcg_at_3
value: 35.937999999999995
- type: ndcg_at_5
value: 38.702999999999996
- type: precision_at_1
value: 32.332
- type: precision_at_10
value: 7.7829999999999995
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 16.834
- type: precision_at_5
value: 12.418
- type: recall_at_1
value: 26.519
- type: recall_at_10
value: 53.190000000000005
- type: recall_at_100
value: 76.56500000000001
- type: recall_at_1000
value: 91.47800000000001
- type: recall_at_3
value: 38.034
- type: recall_at_5
value: 45.245999999999995
- type: map_at_1
value: 25.356
- type: map_at_10
value: 34.596
- type: map_at_100
value: 35.714
- type: map_at_1000
value: 35.839999999999996
- type: map_at_3
value: 32.073
- type: map_at_5
value: 33.475
- type: mrr_at_1
value: 31.274
- type: mrr_at_10
value: 39.592
- type: mrr_at_100
value: 40.284
- type: mrr_at_1000
value: 40.339999999999996
- type: mrr_at_3
value: 37.378
- type: mrr_at_5
value: 38.658
- type: ndcg_at_1
value: 31.274
- type: ndcg_at_10
value: 39.766
- type: ndcg_at_100
value: 44.028
- type: ndcg_at_1000
value: 46.445
- type: ndcg_at_3
value: 35.934
- type: ndcg_at_5
value: 37.751000000000005
- type: precision_at_1
value: 31.274
- type: precision_at_10
value: 7.452
- type: precision_at_100
value: 1.217
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 17.431
- type: precision_at_5
value: 12.306000000000001
- type: recall_at_1
value: 25.356
- type: recall_at_10
value: 49.344
- type: recall_at_100
value: 67.497
- type: recall_at_1000
value: 83.372
- type: recall_at_3
value: 38.227
- type: recall_at_5
value: 43.187999999999995
- type: map_at_1
value: 32.759
- type: map_at_10
value: 43.937
- type: map_at_100
value: 45.004
- type: map_at_1000
value: 45.07
- type: map_at_3
value: 40.805
- type: map_at_5
value: 42.497
- type: mrr_at_1
value: 37.367
- type: mrr_at_10
value: 47.237
- type: mrr_at_100
value: 47.973
- type: mrr_at_1000
value: 48.010999999999996
- type: mrr_at_3
value: 44.65
- type: mrr_at_5
value: 46.050999999999995
- type: ndcg_at_1
value: 37.367
- type: ndcg_at_10
value: 49.659
- type: ndcg_at_100
value: 54.069
- type: ndcg_at_1000
value: 55.552
- type: ndcg_at_3
value: 44.169000000000004
- type: ndcg_at_5
value: 46.726
- type: precision_at_1
value: 37.367
- type: precision_at_10
value: 8.163
- type: precision_at_100
value: 1.133
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 19.707
- type: precision_at_5
value: 13.718
- type: recall_at_1
value: 32.759
- type: recall_at_10
value: 63.341
- type: recall_at_100
value: 82.502
- type: recall_at_1000
value: 93.259
- type: recall_at_3
value: 48.796
- type: recall_at_5
value: 54.921
- type: map_at_1
value: 18.962
- type: map_at_10
value: 25.863000000000003
- type: map_at_100
value: 26.817999999999998
- type: map_at_1000
value: 26.918
- type: map_at_3
value: 23.043
- type: map_at_5
value: 24.599
- type: mrr_at_1
value: 20.452
- type: mrr_at_10
value: 27.301
- type: mrr_at_100
value: 28.233000000000004
- type: mrr_at_1000
value: 28.310000000000002
- type: mrr_at_3
value: 24.539
- type: mrr_at_5
value: 26.108999999999998
- type: ndcg_at_1
value: 20.452
- type: ndcg_at_10
value: 30.354999999999997
- type: ndcg_at_100
value: 35.336
- type: ndcg_at_1000
value: 37.927
- type: ndcg_at_3
value: 24.705
- type: ndcg_at_5
value: 27.42
- type: precision_at_1
value: 20.452
- type: precision_at_10
value: 4.949
- type: precision_at_100
value: 0.7799999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 10.358
- type: precision_at_5
value: 7.774
- type: recall_at_1
value: 18.962
- type: recall_at_10
value: 43.056
- type: recall_at_100
value: 66.27300000000001
- type: recall_at_1000
value: 85.96000000000001
- type: recall_at_3
value: 27.776
- type: recall_at_5
value: 34.287
- type: map_at_1
value: 11.24
- type: map_at_10
value: 18.503
- type: map_at_100
value: 19.553
- type: map_at_1000
value: 19.689999999999998
- type: map_at_3
value: 16.150000000000002
- type: map_at_5
value: 17.254
- type: mrr_at_1
value: 13.806
- type: mrr_at_10
value: 21.939
- type: mrr_at_100
value: 22.827
- type: mrr_at_1000
value: 22.911
- type: mrr_at_3
value: 19.32
- type: mrr_at_5
value: 20.558
- type: ndcg_at_1
value: 13.806
- type: ndcg_at_10
value: 23.383000000000003
- type: ndcg_at_100
value: 28.834
- type: ndcg_at_1000
value: 32.175
- type: ndcg_at_3
value: 18.651999999999997
- type: ndcg_at_5
value: 20.505000000000003
- type: precision_at_1
value: 13.806
- type: precision_at_10
value: 4.714
- type: precision_at_100
value: 0.864
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 9.328
- type: precision_at_5
value: 6.841
- type: recall_at_1
value: 11.24
- type: recall_at_10
value: 34.854
- type: recall_at_100
value: 59.50299999999999
- type: recall_at_1000
value: 83.25
- type: recall_at_3
value: 22.02
- type: recall_at_5
value: 26.715
- type: map_at_1
value: 23.012
- type: map_at_10
value: 33.048
- type: map_at_100
value: 34.371
- type: map_at_1000
value: 34.489
- type: map_at_3
value: 29.942999999999998
- type: map_at_5
value: 31.602000000000004
- type: mrr_at_1
value: 28.104000000000003
- type: mrr_at_10
value: 37.99
- type: mrr_at_100
value: 38.836
- type: mrr_at_1000
value: 38.891
- type: mrr_at_3
value: 35.226
- type: mrr_at_5
value: 36.693999999999996
- type: ndcg_at_1
value: 28.104000000000003
- type: ndcg_at_10
value: 39.037
- type: ndcg_at_100
value: 44.643
- type: ndcg_at_1000
value: 46.939
- type: ndcg_at_3
value: 33.784
- type: ndcg_at_5
value: 36.126000000000005
- type: precision_at_1
value: 28.104000000000003
- type: precision_at_10
value: 7.2669999999999995
- type: precision_at_100
value: 1.193
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 16.298000000000002
- type: precision_at_5
value: 11.684
- type: recall_at_1
value: 23.012
- type: recall_at_10
value: 52.054
- type: recall_at_100
value: 75.622
- type: recall_at_1000
value: 90.675
- type: recall_at_3
value: 37.282
- type: recall_at_5
value: 43.307
- type: map_at_1
value: 21.624
- type: map_at_10
value: 30.209999999999997
- type: map_at_100
value: 31.52
- type: map_at_1000
value: 31.625999999999998
- type: map_at_3
value: 26.951000000000004
- type: map_at_5
value: 28.938999999999997
- type: mrr_at_1
value: 26.941
- type: mrr_at_10
value: 35.13
- type: mrr_at_100
value: 36.15
- type: mrr_at_1000
value: 36.204
- type: mrr_at_3
value: 32.42
- type: mrr_at_5
value: 34.155
- type: ndcg_at_1
value: 26.941
- type: ndcg_at_10
value: 35.726
- type: ndcg_at_100
value: 41.725
- type: ndcg_at_1000
value: 44.105
- type: ndcg_at_3
value: 30.184
- type: ndcg_at_5
value: 33.176
- type: precision_at_1
value: 26.941
- type: precision_at_10
value: 6.654999999999999
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 14.346
- type: precision_at_5
value: 10.868
- type: recall_at_1
value: 21.624
- type: recall_at_10
value: 47.359
- type: recall_at_100
value: 73.436
- type: recall_at_1000
value: 89.988
- type: recall_at_3
value: 32.34
- type: recall_at_5
value: 39.856
- type: map_at_1
value: 20.67566666666667
- type: map_at_10
value: 28.479333333333333
- type: map_at_100
value: 29.612249999999996
- type: map_at_1000
value: 29.731166666666663
- type: map_at_3
value: 25.884
- type: map_at_5
value: 27.298916666666667
- type: mrr_at_1
value: 24.402583333333332
- type: mrr_at_10
value: 32.07041666666667
- type: mrr_at_100
value: 32.95841666666667
- type: mrr_at_1000
value: 33.025416666666665
- type: mrr_at_3
value: 29.677749999999996
- type: mrr_at_5
value: 31.02391666666667
- type: ndcg_at_1
value: 24.402583333333332
- type: ndcg_at_10
value: 33.326166666666666
- type: ndcg_at_100
value: 38.51566666666667
- type: ndcg_at_1000
value: 41.13791666666667
- type: ndcg_at_3
value: 28.687749999999994
- type: ndcg_at_5
value: 30.84766666666667
- type: precision_at_1
value: 24.402583333333332
- type: precision_at_10
value: 5.943749999999999
- type: precision_at_100
value: 1.0098333333333334
- type: precision_at_1000
value: 0.14183333333333334
- type: precision_at_3
value: 13.211500000000001
- type: precision_at_5
value: 9.548416666666668
- type: recall_at_1
value: 20.67566666666667
- type: recall_at_10
value: 44.245583333333336
- type: recall_at_100
value: 67.31116666666667
- type: recall_at_1000
value: 85.87841666666665
- type: recall_at_3
value: 31.49258333333333
- type: recall_at_5
value: 36.93241666666667
- type: map_at_1
value: 18.34
- type: map_at_10
value: 23.988
- type: map_at_100
value: 24.895
- type: map_at_1000
value: 24.992
- type: map_at_3
value: 21.831
- type: map_at_5
value: 23.0
- type: mrr_at_1
value: 20.399
- type: mrr_at_10
value: 26.186
- type: mrr_at_100
value: 27.017999999999997
- type: mrr_at_1000
value: 27.090999999999998
- type: mrr_at_3
value: 24.08
- type: mrr_at_5
value: 25.230000000000004
- type: ndcg_at_1
value: 20.399
- type: ndcg_at_10
value: 27.799000000000003
- type: ndcg_at_100
value: 32.579
- type: ndcg_at_1000
value: 35.209
- type: ndcg_at_3
value: 23.684
- type: ndcg_at_5
value: 25.521
- type: precision_at_1
value: 20.399
- type: precision_at_10
value: 4.585999999999999
- type: precision_at_100
value: 0.755
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 10.276
- type: precision_at_5
value: 7.362
- type: recall_at_1
value: 18.34
- type: recall_at_10
value: 37.456
- type: recall_at_100
value: 59.86
- type: recall_at_1000
value: 79.703
- type: recall_at_3
value: 26.163999999999998
- type: recall_at_5
value: 30.652
- type: map_at_1
value: 12.327
- type: map_at_10
value: 17.572
- type: map_at_100
value: 18.534
- type: map_at_1000
value: 18.653
- type: map_at_3
value: 15.703
- type: map_at_5
value: 16.752
- type: mrr_at_1
value: 15.038000000000002
- type: mrr_at_10
value: 20.726
- type: mrr_at_100
value: 21.61
- type: mrr_at_1000
value: 21.695
- type: mrr_at_3
value: 18.829
- type: mrr_at_5
value: 19.885
- type: ndcg_at_1
value: 15.038000000000002
- type: ndcg_at_10
value: 21.241
- type: ndcg_at_100
value: 26.179000000000002
- type: ndcg_at_1000
value: 29.316
- type: ndcg_at_3
value: 17.762
- type: ndcg_at_5
value: 19.413
- type: precision_at_1
value: 15.038000000000002
- type: precision_at_10
value: 3.8920000000000003
- type: precision_at_100
value: 0.75
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 8.351
- type: precision_at_5
value: 6.187
- type: recall_at_1
value: 12.327
- type: recall_at_10
value: 29.342000000000002
- type: recall_at_100
value: 51.854
- type: recall_at_1000
value: 74.648
- type: recall_at_3
value: 19.596
- type: recall_at_5
value: 23.899
- type: map_at_1
value: 20.594
- type: map_at_10
value: 27.878999999999998
- type: map_at_100
value: 28.926000000000002
- type: map_at_1000
value: 29.041
- type: map_at_3
value: 25.668999999999997
- type: map_at_5
value: 26.773999999999997
- type: mrr_at_1
value: 23.694000000000003
- type: mrr_at_10
value: 31.335
- type: mrr_at_100
value: 32.218
- type: mrr_at_1000
value: 32.298
- type: mrr_at_3
value: 29.26
- type: mrr_at_5
value: 30.328
- type: ndcg_at_1
value: 23.694000000000003
- type: ndcg_at_10
value: 32.456
- type: ndcg_at_100
value: 37.667
- type: ndcg_at_1000
value: 40.571
- type: ndcg_at_3
value: 28.283
- type: ndcg_at_5
value: 29.986
- type: precision_at_1
value: 23.694000000000003
- type: precision_at_10
value: 5.448
- type: precision_at_100
value: 0.9119999999999999
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 12.717999999999998
- type: precision_at_5
value: 8.843
- type: recall_at_1
value: 20.594
- type: recall_at_10
value: 43.004999999999995
- type: recall_at_100
value: 66.228
- type: recall_at_1000
value: 87.17099999999999
- type: recall_at_3
value: 31.554
- type: recall_at_5
value: 35.838
- type: map_at_1
value: 20.855999999999998
- type: map_at_10
value: 28.372000000000003
- type: map_at_100
value: 29.87
- type: map_at_1000
value: 30.075000000000003
- type: map_at_3
value: 26.054
- type: map_at_5
value: 27.128999999999998
- type: mrr_at_1
value: 25.494
- type: mrr_at_10
value: 32.735
- type: mrr_at_100
value: 33.794000000000004
- type: mrr_at_1000
value: 33.85
- type: mrr_at_3
value: 30.731
- type: mrr_at_5
value: 31.897
- type: ndcg_at_1
value: 25.494
- type: ndcg_at_10
value: 33.385
- type: ndcg_at_100
value: 39.436
- type: ndcg_at_1000
value: 42.313
- type: ndcg_at_3
value: 29.612
- type: ndcg_at_5
value: 31.186999999999998
- type: precision_at_1
value: 25.494
- type: precision_at_10
value: 6.422999999999999
- type: precision_at_100
value: 1.383
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 13.834
- type: precision_at_5
value: 10.0
- type: recall_at_1
value: 20.855999999999998
- type: recall_at_10
value: 42.678
- type: recall_at_100
value: 70.224
- type: recall_at_1000
value: 89.369
- type: recall_at_3
value: 31.957
- type: recall_at_5
value: 36.026
- type: map_at_1
value: 16.519000000000002
- type: map_at_10
value: 22.15
- type: map_at_100
value: 23.180999999999997
- type: map_at_1000
value: 23.291999999999998
- type: map_at_3
value: 20.132
- type: map_at_5
value: 21.346
- type: mrr_at_1
value: 17.93
- type: mrr_at_10
value: 23.506
- type: mrr_at_100
value: 24.581
- type: mrr_at_1000
value: 24.675
- type: mrr_at_3
value: 21.503
- type: mrr_at_5
value: 22.686
- type: ndcg_at_1
value: 17.93
- type: ndcg_at_10
value: 25.636
- type: ndcg_at_100
value: 30.736
- type: ndcg_at_1000
value: 33.841
- type: ndcg_at_3
value: 21.546000000000003
- type: ndcg_at_5
value: 23.658
- type: precision_at_1
value: 17.93
- type: precision_at_10
value: 3.993
- type: precision_at_100
value: 0.6890000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 9.057
- type: precision_at_5
value: 6.58
- type: recall_at_1
value: 16.519000000000002
- type: recall_at_10
value: 35.268
- type: recall_at_100
value: 58.17
- type: recall_at_1000
value: 81.66799999999999
- type: recall_at_3
value: 24.165
- type: recall_at_5
value: 29.254
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 10.363
- type: map_at_10
value: 18.301000000000002
- type: map_at_100
value: 20.019000000000002
- type: map_at_1000
value: 20.207
- type: map_at_3
value: 14.877
- type: map_at_5
value: 16.544
- type: mrr_at_1
value: 22.866
- type: mrr_at_10
value: 34.935
- type: mrr_at_100
value: 35.802
- type: mrr_at_1000
value: 35.839999999999996
- type: mrr_at_3
value: 30.965999999999998
- type: mrr_at_5
value: 33.204
- type: ndcg_at_1
value: 22.866
- type: ndcg_at_10
value: 26.595000000000002
- type: ndcg_at_100
value: 33.513999999999996
- type: ndcg_at_1000
value: 36.872
- type: ndcg_at_3
value: 20.666999999999998
- type: ndcg_at_5
value: 22.728
- type: precision_at_1
value: 22.866
- type: precision_at_10
value: 8.632
- type: precision_at_100
value: 1.6119999999999999
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 15.504999999999999
- type: precision_at_5
value: 12.404
- type: recall_at_1
value: 10.363
- type: recall_at_10
value: 33.494
- type: recall_at_100
value: 57.593
- type: recall_at_1000
value: 76.342
- type: recall_at_3
value: 19.157
- type: recall_at_5
value: 24.637999999999998
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 7.436
- type: map_at_10
value: 14.760000000000002
- type: map_at_100
value: 19.206
- type: map_at_1000
value: 20.267
- type: map_at_3
value: 10.894
- type: map_at_5
value: 12.828999999999999
- type: mrr_at_1
value: 54.25
- type: mrr_at_10
value: 63.769
- type: mrr_at_100
value: 64.193
- type: mrr_at_1000
value: 64.211
- type: mrr_at_3
value: 61.458
- type: mrr_at_5
value: 63.096
- type: ndcg_at_1
value: 42.875
- type: ndcg_at_10
value: 31.507
- type: ndcg_at_100
value: 34.559
- type: ndcg_at_1000
value: 41.246
- type: ndcg_at_3
value: 35.058
- type: ndcg_at_5
value: 33.396
- type: precision_at_1
value: 54.25
- type: precision_at_10
value: 24.45
- type: precision_at_100
value: 7.383000000000001
- type: precision_at_1000
value: 1.582
- type: precision_at_3
value: 38.083
- type: precision_at_5
value: 32.6
- type: recall_at_1
value: 7.436
- type: recall_at_10
value: 19.862
- type: recall_at_100
value: 38.981
- type: recall_at_1000
value: 61.038000000000004
- type: recall_at_3
value: 11.949
- type: recall_at_5
value: 15.562000000000001
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 46.39
- type: f1
value: 42.26424885856703
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 50.916
- type: map_at_10
value: 62.258
- type: map_at_100
value: 62.741
- type: map_at_1000
value: 62.763000000000005
- type: map_at_3
value: 60.01800000000001
- type: map_at_5
value: 61.419999999999995
- type: mrr_at_1
value: 54.964999999999996
- type: mrr_at_10
value: 66.554
- type: mrr_at_100
value: 66.96600000000001
- type: mrr_at_1000
value: 66.97800000000001
- type: mrr_at_3
value: 64.414
- type: mrr_at_5
value: 65.77
- type: ndcg_at_1
value: 54.964999999999996
- type: ndcg_at_10
value: 68.12
- type: ndcg_at_100
value: 70.282
- type: ndcg_at_1000
value: 70.788
- type: ndcg_at_3
value: 63.861999999999995
- type: ndcg_at_5
value: 66.216
- type: precision_at_1
value: 54.964999999999996
- type: precision_at_10
value: 8.998000000000001
- type: precision_at_100
value: 1.016
- type: precision_at_1000
value: 0.107
- type: precision_at_3
value: 25.618000000000002
- type: precision_at_5
value: 16.676
- type: recall_at_1
value: 50.916
- type: recall_at_10
value: 82.04
- type: recall_at_100
value: 91.689
- type: recall_at_1000
value: 95.34899999999999
- type: recall_at_3
value: 70.512
- type: recall_at_5
value: 76.29899999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 13.568
- type: map_at_10
value: 23.264000000000003
- type: map_at_100
value: 24.823999999999998
- type: map_at_1000
value: 25.013999999999996
- type: map_at_3
value: 19.724
- type: map_at_5
value: 21.772
- type: mrr_at_1
value: 27.315
- type: mrr_at_10
value: 35.935
- type: mrr_at_100
value: 36.929
- type: mrr_at_1000
value: 36.985
- type: mrr_at_3
value: 33.591
- type: mrr_at_5
value: 34.848
- type: ndcg_at_1
value: 27.315
- type: ndcg_at_10
value: 29.988
- type: ndcg_at_100
value: 36.41
- type: ndcg_at_1000
value: 40.184999999999995
- type: ndcg_at_3
value: 26.342
- type: ndcg_at_5
value: 27.68
- type: precision_at_1
value: 27.315
- type: precision_at_10
value: 8.565000000000001
- type: precision_at_100
value: 1.508
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 17.849999999999998
- type: precision_at_5
value: 13.672999999999998
- type: recall_at_1
value: 13.568
- type: recall_at_10
value: 37.133
- type: recall_at_100
value: 61.475
- type: recall_at_1000
value: 84.372
- type: recall_at_3
value: 24.112000000000002
- type: recall_at_5
value: 29.507
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 30.878
- type: map_at_10
value: 40.868
- type: map_at_100
value: 41.693999999999996
- type: map_at_1000
value: 41.775
- type: map_at_3
value: 38.56
- type: map_at_5
value: 39.947
- type: mrr_at_1
value: 61.756
- type: mrr_at_10
value: 68.265
- type: mrr_at_100
value: 68.671
- type: mrr_at_1000
value: 68.694
- type: mrr_at_3
value: 66.78399999999999
- type: mrr_at_5
value: 67.704
- type: ndcg_at_1
value: 61.756
- type: ndcg_at_10
value: 49.931
- type: ndcg_at_100
value: 53.179
- type: ndcg_at_1000
value: 54.94799999999999
- type: ndcg_at_3
value: 46.103
- type: ndcg_at_5
value: 48.147
- type: precision_at_1
value: 61.756
- type: precision_at_10
value: 10.163
- type: precision_at_100
value: 1.2710000000000001
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 28.179
- type: precision_at_5
value: 18.528
- type: recall_at_1
value: 30.878
- type: recall_at_10
value: 50.817
- type: recall_at_100
value: 63.544999999999995
- type: recall_at_1000
value: 75.361
- type: recall_at_3
value: 42.269
- type: recall_at_5
value: 46.32
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 64.04799999999999
- type: ap
value: 59.185251455339284
- type: f1
value: 63.947123181349255
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 18.9
- type: map_at_10
value: 29.748
- type: map_at_100
value: 30.976
- type: map_at_1000
value: 31.041
- type: map_at_3
value: 26.112999999999996
- type: map_at_5
value: 28.197
- type: mrr_at_1
value: 19.413
- type: mrr_at_10
value: 30.322
- type: mrr_at_100
value: 31.497000000000003
- type: mrr_at_1000
value: 31.555
- type: mrr_at_3
value: 26.729000000000003
- type: mrr_at_5
value: 28.788999999999998
- type: ndcg_at_1
value: 19.413
- type: ndcg_at_10
value: 36.048
- type: ndcg_at_100
value: 42.152
- type: ndcg_at_1000
value: 43.772
- type: ndcg_at_3
value: 28.642
- type: ndcg_at_5
value: 32.358
- type: precision_at_1
value: 19.413
- type: precision_at_10
value: 5.785
- type: precision_at_100
value: 0.8869999999999999
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 12.192
- type: precision_at_5
value: 9.189
- type: recall_at_1
value: 18.9
- type: recall_at_10
value: 55.457
- type: recall_at_100
value: 84.09100000000001
- type: recall_at_1000
value: 96.482
- type: recall_at_3
value: 35.359
- type: recall_at_5
value: 44.275
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 92.07706338349293
- type: f1
value: 91.56680443236652
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 71.18559051527589
- type: f1
value: 52.42887061726789
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 68.64828513786148
- type: f1
value: 66.54281381596097
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.04236718224612
- type: f1
value: 75.89170458655639
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 32.0840369055247
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 29.448729560244537
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.340856463122375
- type: mrr
value: 32.398547669840916
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 5.526
- type: map_at_10
value: 11.745
- type: map_at_100
value: 14.831
- type: map_at_1000
value: 16.235
- type: map_at_3
value: 8.716
- type: map_at_5
value: 10.101
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 51.06699999999999
- type: mrr_at_100
value: 51.881
- type: mrr_at_1000
value: 51.912000000000006
- type: mrr_at_3
value: 49.02
- type: mrr_at_5
value: 50.288999999999994
- type: ndcg_at_1
value: 41.949999999999996
- type: ndcg_at_10
value: 32.083
- type: ndcg_at_100
value: 30.049999999999997
- type: ndcg_at_1000
value: 38.661
- type: ndcg_at_3
value: 37.940000000000005
- type: ndcg_at_5
value: 35.455999999999996
- type: precision_at_1
value: 43.344
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.829999999999999
- type: precision_at_1000
value: 2.053
- type: precision_at_3
value: 35.501
- type: precision_at_5
value: 30.464000000000002
- type: recall_at_1
value: 5.526
- type: recall_at_10
value: 15.445999999999998
- type: recall_at_100
value: 31.179000000000002
- type: recall_at_1000
value: 61.578
- type: recall_at_3
value: 9.71
- type: recall_at_5
value: 12.026
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 23.467
- type: map_at_10
value: 36.041000000000004
- type: map_at_100
value: 37.268
- type: map_at_1000
value: 37.322
- type: map_at_3
value: 32.09
- type: map_at_5
value: 34.414
- type: mrr_at_1
value: 26.738
- type: mrr_at_10
value: 38.665
- type: mrr_at_100
value: 39.64
- type: mrr_at_1000
value: 39.681
- type: mrr_at_3
value: 35.207
- type: mrr_at_5
value: 37.31
- type: ndcg_at_1
value: 26.709
- type: ndcg_at_10
value: 42.942
- type: ndcg_at_100
value: 48.296
- type: ndcg_at_1000
value: 49.651
- type: ndcg_at_3
value: 35.413
- type: ndcg_at_5
value: 39.367999999999995
- type: precision_at_1
value: 26.709
- type: precision_at_10
value: 7.306
- type: precision_at_100
value: 1.0290000000000001
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 16.348
- type: precision_at_5
value: 12.068
- type: recall_at_1
value: 23.467
- type: recall_at_10
value: 61.492999999999995
- type: recall_at_100
value: 85.01100000000001
- type: recall_at_1000
value: 95.261
- type: recall_at_3
value: 41.952
- type: recall_at_5
value: 51.105999999999995
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 67.51700000000001
- type: map_at_10
value: 81.054
- type: map_at_100
value: 81.727
- type: map_at_1000
value: 81.75200000000001
- type: map_at_3
value: 78.018
- type: map_at_5
value: 79.879
- type: mrr_at_1
value: 77.52
- type: mrr_at_10
value: 84.429
- type: mrr_at_100
value: 84.58200000000001
- type: mrr_at_1000
value: 84.584
- type: mrr_at_3
value: 83.268
- type: mrr_at_5
value: 84.013
- type: ndcg_at_1
value: 77.53
- type: ndcg_at_10
value: 85.277
- type: ndcg_at_100
value: 86.80499999999999
- type: ndcg_at_1000
value: 87.01
- type: ndcg_at_3
value: 81.975
- type: ndcg_at_5
value: 83.723
- type: precision_at_1
value: 77.53
- type: precision_at_10
value: 12.961
- type: precision_at_100
value: 1.502
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.713
- type: precision_at_5
value: 23.574
- type: recall_at_1
value: 67.51700000000001
- type: recall_at_10
value: 93.486
- type: recall_at_100
value: 98.9
- type: recall_at_1000
value: 99.92999999999999
- type: recall_at_3
value: 84.17999999999999
- type: recall_at_5
value: 88.97500000000001
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 48.225994608749915
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 53.17635557157765
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 3.988
- type: map_at_10
value: 9.4
- type: map_at_100
value: 10.968
- type: map_at_1000
value: 11.257
- type: map_at_3
value: 7.123
- type: map_at_5
value: 8.221
- type: mrr_at_1
value: 19.7
- type: mrr_at_10
value: 29.098000000000003
- type: mrr_at_100
value: 30.247
- type: mrr_at_1000
value: 30.318
- type: mrr_at_3
value: 26.55
- type: mrr_at_5
value: 27.915
- type: ndcg_at_1
value: 19.7
- type: ndcg_at_10
value: 16.176
- type: ndcg_at_100
value: 22.931
- type: ndcg_at_1000
value: 28.301
- type: ndcg_at_3
value: 16.142
- type: ndcg_at_5
value: 13.633999999999999
- type: precision_at_1
value: 19.7
- type: precision_at_10
value: 8.18
- type: precision_at_100
value: 1.8010000000000002
- type: precision_at_1000
value: 0.309
- type: precision_at_3
value: 15.1
- type: precision_at_5
value: 11.74
- type: recall_at_1
value: 3.988
- type: recall_at_10
value: 16.625
- type: recall_at_100
value: 36.61
- type: recall_at_1000
value: 62.805
- type: recall_at_3
value: 9.168
- type: recall_at_5
value: 11.902
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 77.29330379162072
- type: cos_sim_spearman
value: 67.22953551111448
- type: euclidean_pearson
value: 71.44682700059415
- type: euclidean_spearman
value: 66.33178012153247
- type: manhattan_pearson
value: 71.46941734657887
- type: manhattan_spearman
value: 66.43234359835814
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 75.40943196466576
- type: cos_sim_spearman
value: 66.59241013465915
- type: euclidean_pearson
value: 71.32500540796616
- type: euclidean_spearman
value: 67.86667467202591
- type: manhattan_pearson
value: 71.48209832089134
- type: manhattan_spearman
value: 67.94511626964879
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 77.08302398877518
- type: cos_sim_spearman
value: 77.33151317062642
- type: euclidean_pearson
value: 76.77020279715008
- type: euclidean_spearman
value: 77.13893776083225
- type: manhattan_pearson
value: 76.76732290707477
- type: manhattan_spearman
value: 77.14500877396631
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 77.46886184932168
- type: cos_sim_spearman
value: 71.82815265534886
- type: euclidean_pearson
value: 75.19783284299076
- type: euclidean_spearman
value: 71.36479611710412
- type: manhattan_pearson
value: 75.30375233959337
- type: manhattan_spearman
value: 71.46280266488021
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 80.093017609484
- type: cos_sim_spearman
value: 80.65931167868882
- type: euclidean_pearson
value: 80.36786337117047
- type: euclidean_spearman
value: 81.30521389642827
- type: manhattan_pearson
value: 80.37922433220973
- type: manhattan_spearman
value: 81.30496664496285
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 77.98998347238742
- type: cos_sim_spearman
value: 78.91151365939403
- type: euclidean_pearson
value: 76.40510899217841
- type: euclidean_spearman
value: 76.8551459824213
- type: manhattan_pearson
value: 76.3986079603294
- type: manhattan_spearman
value: 76.8848053254288
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 85.63510653472044
- type: cos_sim_spearman
value: 86.98674844768605
- type: euclidean_pearson
value: 85.205080538809
- type: euclidean_spearman
value: 85.53630494151886
- type: manhattan_pearson
value: 85.48612469885626
- type: manhattan_spearman
value: 85.81741413931921
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 66.7257987615171
- type: cos_sim_spearman
value: 67.30387805090024
- type: euclidean_pearson
value: 69.46877227885867
- type: euclidean_spearman
value: 69.33161798704344
- type: manhattan_pearson
value: 69.82773311626424
- type: manhattan_spearman
value: 69.57199940498796
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 79.37322139418472
- type: cos_sim_spearman
value: 77.5887175717799
- type: euclidean_pearson
value: 78.23006410562164
- type: euclidean_spearman
value: 77.18470385673044
- type: manhattan_pearson
value: 78.40868369362455
- type: manhattan_spearman
value: 77.36675823897656
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 77.21233007730808
- type: mrr
value: 93.0502386139641
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 54.567
- type: map_at_10
value: 63.653000000000006
- type: map_at_100
value: 64.282
- type: map_at_1000
value: 64.31099999999999
- type: map_at_3
value: 60.478
- type: map_at_5
value: 62.322
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 64.759
- type: mrr_at_100
value: 65.274
- type: mrr_at_1000
value: 65.301
- type: mrr_at_3
value: 62.333000000000006
- type: mrr_at_5
value: 63.817
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 68.28699999999999
- type: ndcg_at_100
value: 70.98400000000001
- type: ndcg_at_1000
value: 71.695
- type: ndcg_at_3
value: 62.656
- type: ndcg_at_5
value: 65.523
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.232999999999999
- type: precision_at_100
value: 1.0630000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 24.221999999999998
- type: precision_at_5
value: 16.333000000000002
- type: recall_at_1
value: 54.567
- type: recall_at_10
value: 81.45599999999999
- type: recall_at_100
value: 93.5
- type: recall_at_1000
value: 99.0
- type: recall_at_3
value: 66.228
- type: recall_at_5
value: 73.489
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.74455445544554
- type: cos_sim_ap
value: 92.57836032673468
- type: cos_sim_f1
value: 87.0471464019851
- type: cos_sim_precision
value: 86.4039408866995
- type: cos_sim_recall
value: 87.7
- type: dot_accuracy
value: 99.56039603960396
- type: dot_ap
value: 82.47233353407186
- type: dot_f1
value: 76.78207739307537
- type: dot_precision
value: 78.21576763485477
- type: dot_recall
value: 75.4
- type: euclidean_accuracy
value: 99.73069306930694
- type: euclidean_ap
value: 91.70507666665775
- type: euclidean_f1
value: 86.26262626262626
- type: euclidean_precision
value: 87.14285714285714
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.73861386138614
- type: manhattan_ap
value: 91.96809459281754
- type: manhattan_f1
value: 86.6
- type: manhattan_precision
value: 86.6
- type: manhattan_recall
value: 86.6
- type: max_accuracy
value: 99.74455445544554
- type: max_ap
value: 92.57836032673468
- type: max_f1
value: 87.0471464019851
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 60.85593925770172
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 32.356772998237496
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 49.320607035290735
- type: mrr
value: 50.09196481622952
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
value: 31.17573968015504
- type: cos_sim_spearman
value: 30.43371643155132
- type: dot_pearson
value: 30.164319483092743
- type: dot_spearman
value: 29.207082242868754
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.22100000000000003
- type: map_at_10
value: 1.7229999999999999
- type: map_at_100
value: 9.195
- type: map_at_1000
value: 21.999
- type: map_at_3
value: 0.6479999999999999
- type: map_at_5
value: 0.964
- type: mrr_at_1
value: 86.0
- type: mrr_at_10
value: 90.667
- type: mrr_at_100
value: 90.858
- type: mrr_at_1000
value: 90.858
- type: mrr_at_3
value: 90.667
- type: mrr_at_5
value: 90.667
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 72.98
- type: ndcg_at_100
value: 52.868
- type: ndcg_at_1000
value: 46.541
- type: ndcg_at_3
value: 80.39699999999999
- type: ndcg_at_5
value: 76.303
- type: precision_at_1
value: 86.0
- type: precision_at_10
value: 75.8
- type: precision_at_100
value: 53.5
- type: precision_at_1000
value: 20.946
- type: precision_at_3
value: 85.333
- type: precision_at_5
value: 79.2
- type: recall_at_1
value: 0.22100000000000003
- type: recall_at_10
value: 1.9109999999999998
- type: recall_at_100
value: 12.437
- type: recall_at_1000
value: 43.606
- type: recall_at_3
value: 0.681
- type: recall_at_5
value: 1.023
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 2.5
- type: map_at_10
value: 9.568999999999999
- type: map_at_100
value: 15.653
- type: map_at_1000
value: 17.188
- type: map_at_3
value: 5.335999999999999
- type: map_at_5
value: 6.522
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 49.184
- type: mrr_at_100
value: 50.512
- type: mrr_at_1000
value: 50.512
- type: mrr_at_3
value: 46.259
- type: mrr_at_5
value: 48.299
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 24.45
- type: ndcg_at_100
value: 35.870999999999995
- type: ndcg_at_1000
value: 47.272999999999996
- type: ndcg_at_3
value: 28.528
- type: ndcg_at_5
value: 25.768
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 21.429000000000002
- type: precision_at_100
value: 7.265000000000001
- type: precision_at_1000
value: 1.504
- type: precision_at_3
value: 29.252
- type: precision_at_5
value: 24.898
- type: recall_at_1
value: 2.5
- type: recall_at_10
value: 15.844
- type: recall_at_100
value: 45.469
- type: recall_at_1000
value: 81.148
- type: recall_at_3
value: 6.496
- type: recall_at_5
value: 8.790000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 68.7272
- type: ap
value: 13.156450706152686
- type: f1
value: 52.814703437064395
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 55.6677985285795
- type: f1
value: 55.9373937514999
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 40.05809562275603
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 82.76807534124099
- type: cos_sim_ap
value: 62.37052608803734
- type: cos_sim_f1
value: 59.077414934916646
- type: cos_sim_precision
value: 52.07326892109501
- type: cos_sim_recall
value: 68.25857519788919
- type: dot_accuracy
value: 80.56267509089825
- type: dot_ap
value: 54.75349561321037
- type: dot_f1
value: 54.75483794372552
- type: dot_precision
value: 49.77336499028707
- type: dot_recall
value: 60.844327176781
- type: euclidean_accuracy
value: 82.476008821601
- type: euclidean_ap
value: 61.17417554210511
- type: euclidean_f1
value: 57.80318696022382
- type: euclidean_precision
value: 53.622207176709544
- type: euclidean_recall
value: 62.69129287598945
- type: manhattan_accuracy
value: 82.48792990403528
- type: manhattan_ap
value: 61.044816292966544
- type: manhattan_f1
value: 58.03033951360462
- type: manhattan_precision
value: 53.36581045172719
- type: manhattan_recall
value: 63.58839050131926
- type: max_accuracy
value: 82.76807534124099
- type: max_ap
value: 62.37052608803734
- type: max_f1
value: 59.077414934916646
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.97881010594946
- type: cos_sim_ap
value: 83.78748636891035
- type: cos_sim_f1
value: 75.94113995691386
- type: cos_sim_precision
value: 72.22029307590805
- type: cos_sim_recall
value: 80.06621496766245
- type: dot_accuracy
value: 85.69294058291614
- type: dot_ap
value: 78.15363722278026
- type: dot_f1
value: 72.08894926888564
- type: dot_precision
value: 67.28959487419075
- type: dot_recall
value: 77.62550046196489
- type: euclidean_accuracy
value: 87.73625179493149
- type: euclidean_ap
value: 83.19012184470559
- type: euclidean_f1
value: 75.5148064623461
- type: euclidean_precision
value: 72.63352535381551
- type: euclidean_recall
value: 78.6341238065907
- type: manhattan_accuracy
value: 87.74013272790779
- type: manhattan_ap
value: 83.23305405113403
- type: manhattan_f1
value: 75.63960775639607
- type: manhattan_precision
value: 72.563304569246
- type: manhattan_recall
value: 78.9882968894364
- type: max_accuracy
value: 87.97881010594946
- type: max_ap
value: 83.78748636891035
- type: max_f1
value: 75.94113995691386
---
# SGPT-1.3B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
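A minimal sentence-transformers sketch is shown below for quick experimentation; the model id is inferred from the card title, and the `specb` variant described in the linked repository wraps queries and documents in special bracket tokens for asymmetric search, which this symmetric-usage sketch omits.
```python
from sentence_transformers import SentenceTransformer, util

# Model id assumed from the card title; see the SGPT repository for the canonical path
# and for the special bracket handling used in asymmetric (query vs. document) search.
model = SentenceTransformer("Muennighoff/SGPT-1.3B-weightedmean-msmarco-specb-bitfit")

query = "How do I bake bread without an oven?"
docs = [
    "You can bake bread in a covered pot on the stovetop over low heat.",
    "The Eiffel Tower is located in Paris.",
]

# Encode and rank documents by cosine similarity to the query.
query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_embs)[0]

for doc, score in sorted(zip(docs, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{score:.4f}  {doc}")
```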
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 62398 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
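Assuming the standard sentence-transformers `fit()` loop, the hyperparameters above map onto a setup roughly like the sketch below; the base checkpoint and training pairs are placeholders, not the actual MS MARCO training script.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder (query, positive passage) pairs; the real run used MS MARCO-style data.
train_examples = [
    InputExample(texts=["what is python", "Python is a programming language."]),
    InputExample(texts=["capital of france", "Paris is the capital of France."]),
]

model = SentenceTransformer("path/to/base-model")  # hypothetical starting checkpoint
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity by default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=1000,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-4},
    weight_decay=0.01,
    max_grad_norm=1,
)
```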
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
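The `pooling_mode_weightedmean_tokens` option corresponds to position-weighted mean pooling, in which later tokens receive linearly larger weights, as proposed in the SGPT paper. A minimal sketch of that computation, assuming a padded batch with an attention mask:
```python
import torch

def weighted_mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Position-weighted mean: the i-th real token (1-indexed) gets weight i."""
    # attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding.
    weights = attention_mask.cumsum(dim=1) * attention_mask      # 1, 2, ..., n on real tokens, 0 on padding
    weights = weights.unsqueeze(-1).to(token_embeddings.dtype)   # (batch, seq_len, 1)
    return (token_embeddings * weights).sum(dim=1) / weights.sum(dim=1)
```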
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
retrieva-jp/amber-base | retrieva-jp | feature-extraction | [
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"mteb",
"ja",
"en",
"arxiv:2412.13663",
"arxiv:2211.09260",
"base_model:sbintuitions/modernbert-ja-130m",
"base_model:finetune:sbintuitions/modernbert-ja-130m",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-03-07T01:10:01 | 2025-03-09T14:26:59 | 90 | 0 | ---
base_model: sbintuitions/modernbert-ja-130m
language:
- ja
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- mteb
model-index:
- name: retrieva-jp/amber-base
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.1642
- type: f1
value: 61.9811
- type: f1_weighted
value: 71.2157
- type: ap
value: 30.6541
- type: ap_weighted
value: 30.6541
- type: main_score
value: 68.1642
- task:
type: Clustering
dataset:
name: MTEB ArXivHierarchicalClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 55.655100000000004
- type: v_measure_std
value: 3.2918999999999996
- type: main_score
value: 55.655100000000004
- task:
type: Clustering
dataset:
name: MTEB ArXivHierarchicalClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 53.6493
- type: v_measure_std
value: 3.2359
- type: main_score
value: 53.6493
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: ndcg_at_1
value: 25.249
- type: ndcg_at_3
value: 38.056
- type: ndcg_at_5
value: 43.124
- type: ndcg_at_10
value: 48.068
- type: ndcg_at_20
value: 51.461
- type: ndcg_at_100
value: 53.15800000000001
- type: ndcg_at_1000
value: 53.38
- type: map_at_1
value: 25.249
- type: map_at_3
value: 34.803
- type: map_at_5
value: 37.598
- type: map_at_10
value: 39.611000000000004
- type: map_at_20
value: 40.569
- type: map_at_100
value: 40.821000000000005
- type: map_at_1000
value: 40.83
- type: recall_at_1
value: 25.249
- type: recall_at_3
value: 47.510999999999996
- type: recall_at_5
value: 59.885999999999996
- type: recall_at_10
value: 75.32
- type: recall_at_20
value: 88.549
- type: recall_at_100
value: 97.44
- type: recall_at_1000
value: 99.14699999999999
- type: precision_at_1
value: 25.249
- type: precision_at_3
value: 15.837000000000002
- type: precision_at_5
value: 11.977
- type: precision_at_10
value: 7.532
- type: precision_at_20
value: 4.427
- type: precision_at_100
value: 0.9740000000000001
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 25.817899999999998
- type: mrr_at_3
value: 34.9692
- type: mrr_at_5
value: 37.7928
- type: mrr_at_10
value: 39.8238
- type: mrr_at_20
value: 40.7844
- type: mrr_at_100
value: 41.0403
- type: mrr_at_1000
value: 41.0495
- type: nauc_ndcg_at_1_max
value: -2.6569
- type: nauc_ndcg_at_1_std
value: -2.4726000000000004
- type: nauc_ndcg_at_1_diff1
value: 10.259699999999999
- type: nauc_ndcg_at_3_max
value: -0.8151
- type: nauc_ndcg_at_3_std
value: -3.3642
- type: nauc_ndcg_at_3_diff1
value: 7.884099999999999
- type: nauc_ndcg_at_5_max
value: -0.3906
- type: nauc_ndcg_at_5_std
value: -2.4619
- type: nauc_ndcg_at_5_diff1
value: 7.558
- type: nauc_ndcg_at_10_max
value: 1.0935000000000001
- type: nauc_ndcg_at_10_std
value: -1.8624999999999998
- type: nauc_ndcg_at_10_diff1
value: 8.0503
- type: nauc_ndcg_at_20_max
value: 1.3164
- type: nauc_ndcg_at_20_std
value: -1.3407
- type: nauc_ndcg_at_20_diff1
value: 7.8992
- type: nauc_ndcg_at_100_max
value: 0.8316
- type: nauc_ndcg_at_100_std
value: -0.8725
- type: nauc_ndcg_at_100_diff1
value: 8.5633
- type: nauc_ndcg_at_1000_max
value: 0.44999999999999996
- type: nauc_ndcg_at_1000_std
value: -1.4357
- type: nauc_ndcg_at_1000_diff1
value: 8.4438
- type: nauc_map_at_1_max
value: -2.6569
- type: nauc_map_at_1_std
value: -2.4726000000000004
- type: nauc_map_at_1_diff1
value: 10.259699999999999
- type: nauc_map_at_3_max
value: -1.3567
- type: nauc_map_at_3_std
value: -3.222
- type: nauc_map_at_3_diff1
value: 8.3557
- type: nauc_map_at_5_max
value: -1.162
- type: nauc_map_at_5_std
value: -2.7384
- type: nauc_map_at_5_diff1
value: 8.118400000000001
- type: nauc_map_at_10_max
value: -0.615
- type: nauc_map_at_10_std
value: -2.5394
- type: nauc_map_at_10_diff1
value: 8.283100000000001
- type: nauc_map_at_20_max
value: -0.5492
- type: nauc_map_at_20_std
value: -2.4076
- type: nauc_map_at_20_diff1
value: 8.280999999999999
- type: nauc_map_at_100_max
value: -0.6049
- type: nauc_map_at_100_std
value: -2.3560000000000003
- type: nauc_map_at_100_diff1
value: 8.3933
- type: nauc_map_at_1000_max
value: -0.6154
- type: nauc_map_at_1000_std
value: -2.373
- type: nauc_map_at_1000_diff1
value: 8.3902
- type: nauc_recall_at_1_max
value: -2.6569
- type: nauc_recall_at_1_std
value: -2.4726000000000004
- type: nauc_recall_at_1_diff1
value: 10.259699999999999
- type: nauc_recall_at_3_max
value: 0.7234
- type: nauc_recall_at_3_std
value: -3.7315
- type: nauc_recall_at_3_diff1
value: 6.6138
- type: nauc_recall_at_5_max
value: 2.0847
- type: nauc_recall_at_5_std
value: -1.4385000000000001
- type: nauc_recall_at_5_diff1
value: 5.9428
- type: nauc_recall_at_10_max
value: 9.2417
- type: nauc_recall_at_10_std
value: 1.6372000000000002
- type: nauc_recall_at_10_diff1
value: 7.6442
- type: nauc_recall_at_20_max
value: 17.9819
- type: nauc_recall_at_20_std
value: 9.3827
- type: nauc_recall_at_20_diff1
value: 5.2288
- type: nauc_recall_at_100_max
value: 46.3576
- type: nauc_recall_at_100_std
value: 69.5314
- type: nauc_recall_at_100_diff1
value: 25.2365
- type: nauc_recall_at_1000_max
value: 47.3173
- type: nauc_recall_at_1000_std
value: 80.3564
- type: nauc_recall_at_1000_diff1
value: 30.506
- type: nauc_precision_at_1_max
value: -2.6569
- type: nauc_precision_at_1_std
value: -2.4726000000000004
- type: nauc_precision_at_1_diff1
value: 10.259699999999999
- type: nauc_precision_at_3_max
value: 0.7234
- type: nauc_precision_at_3_std
value: -3.7315
- type: nauc_precision_at_3_diff1
value: 6.6138
- type: nauc_precision_at_5_max
value: 2.0847
- type: nauc_precision_at_5_std
value: -1.4385000000000001
- type: nauc_precision_at_5_diff1
value: 5.9428
- type: nauc_precision_at_10_max
value: 9.2417
- type: nauc_precision_at_10_std
value: 1.6372000000000002
- type: nauc_precision_at_10_diff1
value: 7.6442
- type: nauc_precision_at_20_max
value: 17.9819
- type: nauc_precision_at_20_std
value: 9.3827
- type: nauc_precision_at_20_diff1
value: 5.2288
- type: nauc_precision_at_100_max
value: 46.3576
- type: nauc_precision_at_100_std
value: 69.5314
- type: nauc_precision_at_100_diff1
value: 25.2365
- type: nauc_precision_at_1000_max
value: 47.3173
- type: nauc_precision_at_1000_std
value: 80.3564
- type: nauc_precision_at_1000_diff1
value: 30.506
- type: nauc_mrr_at_1_max
value: -2.5852
- type: nauc_mrr_at_1_std
value: -2.7133000000000003
- type: nauc_mrr_at_1_diff1
value: 8.3902
- type: nauc_mrr_at_3_max
value: -2.3878
- type: nauc_mrr_at_3_std
value: -3.1916
- type: nauc_mrr_at_3_diff1
value: 6.3759999999999994
- type: nauc_mrr_at_5_max
value: -2.0079
- type: nauc_mrr_at_5_std
value: -2.9791000000000003
- type: nauc_mrr_at_5_diff1
value: 6.3531
- type: nauc_mrr_at_10_max
value: -1.41
- type: nauc_mrr_at_10_std
value: -2.7921
- type: nauc_mrr_at_10_diff1
value: 6.514200000000001
- type: nauc_mrr_at_20_max
value: -1.35
- type: nauc_mrr_at_20_std
value: -2.6331
- type: nauc_mrr_at_20_diff1
value: 6.4700999999999995
- type: nauc_mrr_at_100_max
value: -1.393
- type: nauc_mrr_at_100_std
value: -2.5819
- type: nauc_mrr_at_100_diff1
value: 6.5875
- type: nauc_mrr_at_1000_max
value: -1.4037000000000002
- type: nauc_mrr_at_1000_std
value: -2.5989
- type: nauc_mrr_at_1000_diff1
value: 6.583799999999999
- type: main_score
value: 48.068
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 56.5225
- type: mrr
value: 70.5146
- type: nAUC_map_max
value: 18.224
- type: nAUC_map_std
value: 12.5352
- type: nAUC_map_diff1
value: 14.0464
- type: nAUC_mrr_max
value: 28.619699999999998
- type: nAUC_mrr_std
value: 21.69
- type: nAUC_mrr_diff1
value: 15.8021
- type: main_score
value: 56.5225
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: pearson
value: 86.6855
- type: spearman
value: 83.17360000000001
- type: cosine_pearson
value: 86.6855
- type: cosine_spearman
value: 83.17360000000001
- type: manhattan_pearson
value: 85.5442
- type: manhattan_spearman
value: 83.9501
- type: euclidean_pearson
value: 85.0403
- type: euclidean_spearman
value: 83.17360000000001
- type: main_score
value: 83.17360000000001
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 76.3312
- type: f1
value: 75.4609
- type: f1_weighted
value: 75.4609
- type: main_score
value: 76.3312
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P.v2 (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: f5dbc242e11dd8e24def4c4268607a49e02946dc
metrics:
- type: v_measure
value: 33.6692
- type: v_measure_std
value: 0.769
- type: main_score
value: 33.6692
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: ndcg_at_1
value: 30.345
- type: ndcg_at_3
value: 37.726
- type: ndcg_at_5
value: 39.999
- type: ndcg_at_10
value: 42.732
- type: ndcg_at_20
value: 44.696000000000005
- type: ndcg_at_100
value: 47.461
- type: ndcg_at_1000
value: 49.341
- type: map_at_1
value: 26.484999999999996
- type: map_at_3
value: 34.474
- type: map_at_5
value: 35.94
- type: map_at_10
value: 37.24
- type: map_at_20
value: 37.852999999999994
- type: map_at_100
value: 38.286
- type: map_at_1000
value: 38.369
- type: recall_at_1
value: 26.484999999999996
- type: recall_at_3
value: 42.857
- type: recall_at_5
value: 48.501
- type: recall_at_10
value: 56.48
- type: recall_at_20
value: 63.81099999999999
- type: recall_at_100
value: 77.518
- type: recall_at_1000
value: 90.89
- type: precision_at_1
value: 30.345
- type: precision_at_3
value: 17.241
- type: precision_at_5
value: 11.962
- type: precision_at_10
value: 7.204000000000001
- type: precision_at_20
value: 4.1290000000000004
- type: precision_at_100
value: 1.0330000000000001
- type: precision_at_1000
value: 0.127
- type: mrr_at_1
value: 30.3448
- type: mrr_at_3
value: 37.5131
- type: mrr_at_5
value: 38.8516
- type: mrr_at_10
value: 39.915299999999995
- type: mrr_at_20
value: 40.428599999999996
- type: mrr_at_100
value: 40.7757
- type: mrr_at_1000
value: 40.8275
- type: nauc_ndcg_at_1_max
value: 30.5442
- type: nauc_ndcg_at_1_std
value: -10.3888
- type: nauc_ndcg_at_1_diff1
value: 52.476
- type: nauc_ndcg_at_3_max
value: 28.6927
- type: nauc_ndcg_at_3_std
value: -8.8728
- type: nauc_ndcg_at_3_diff1
value: 45.094699999999996
- type: nauc_ndcg_at_5_max
value: 29.259600000000002
- type: nauc_ndcg_at_5_std
value: -7.945399999999999
- type: nauc_ndcg_at_5_diff1
value: 44.600699999999996
- type: nauc_ndcg_at_10_max
value: 29.9977
- type: nauc_ndcg_at_10_std
value: -6.1746
- type: nauc_ndcg_at_10_diff1
value: 44.2832
- type: nauc_ndcg_at_20_max
value: 30.034100000000002
- type: nauc_ndcg_at_20_std
value: -4.8941
- type: nauc_ndcg_at_20_diff1
value: 43.3814
- type: nauc_ndcg_at_100_max
value: 30.812800000000003
- type: nauc_ndcg_at_100_std
value: -3.5000999999999998
- type: nauc_ndcg_at_100_diff1
value: 43.345
- type: nauc_ndcg_at_1000_max
value: 30.9884
- type: nauc_ndcg_at_1000_std
value: -3.9316999999999998
- type: nauc_ndcg_at_1000_diff1
value: 43.6512
- type: nauc_map_at_1_max
value: 27.442800000000002
- type: nauc_map_at_1_std
value: -9.8884
- type: nauc_map_at_1_diff1
value: 52.666999999999994
- type: nauc_map_at_3_max
value: 27.897100000000002
- type: nauc_map_at_3_std
value: -9.777
- type: nauc_map_at_3_diff1
value: 47.013
- type: nauc_map_at_5_max
value: 28.3476
- type: nauc_map_at_5_std
value: -9.3335
- type: nauc_map_at_5_diff1
value: 46.7246
- type: nauc_map_at_10_max
value: 28.921000000000003
- type: nauc_map_at_10_std
value: -8.4018
- type: nauc_map_at_10_diff1
value: 46.5358
- type: nauc_map_at_20_max
value: 29.033900000000003
- type: nauc_map_at_20_std
value: -7.985100000000001
- type: nauc_map_at_20_diff1
value: 46.2362
- type: nauc_map_at_100_max
value: 29.2382
- type: nauc_map_at_100_std
value: -7.7172
- type: nauc_map_at_100_diff1
value: 46.2663
- type: nauc_map_at_1000_max
value: 29.263699999999996
- type: nauc_map_at_1000_std
value: -7.7108
- type: nauc_map_at_1000_diff1
value: 46.2735
- type: nauc_recall_at_1_max
value: 27.442800000000002
- type: nauc_recall_at_1_std
value: -9.8884
- type: nauc_recall_at_1_diff1
value: 52.666999999999994
- type: nauc_recall_at_3_max
value: 25.7102
- type: nauc_recall_at_3_std
value: -8.2064
- type: nauc_recall_at_3_diff1
value: 39.145
- type: nauc_recall_at_5_max
value: 27.244699999999998
- type: nauc_recall_at_5_std
value: -5.943
- type: nauc_recall_at_5_diff1
value: 38.024
- type: nauc_recall_at_10_max
value: 29.226000000000003
- type: nauc_recall_at_10_std
value: -0.2402
- type: nauc_recall_at_10_diff1
value: 36.58
- type: nauc_recall_at_20_max
value: 29.567500000000003
- type: nauc_recall_at_20_std
value: 6.2502
- type: nauc_recall_at_20_diff1
value: 32.092999999999996
- type: nauc_recall_at_100_max
value: 33.8086
- type: nauc_recall_at_100_std
value: 20.092
- type: nauc_recall_at_100_diff1
value: 27.5754
- type: nauc_recall_at_1000_max
value: 38.0782
- type: nauc_recall_at_1000_std
value: 34.3309
- type: nauc_recall_at_1000_diff1
value: 17.712
- type: nauc_precision_at_1_max
value: 30.5442
- type: nauc_precision_at_1_std
value: -10.3888
- type: nauc_precision_at_1_diff1
value: 52.476
- type: nauc_precision_at_3_max
value: 29.0858
- type: nauc_precision_at_3_std
value: -5.8233
- type: nauc_precision_at_3_diff1
value: 33.480900000000005
- type: nauc_precision_at_5_max
value: 30.425200000000004
- type: nauc_precision_at_5_std
value: -2.0077000000000003
- type: nauc_precision_at_5_diff1
value: 29.5631
- type: nauc_precision_at_10_max
value: 30.8693
- type: nauc_precision_at_10_std
value: 4.5986
- type: nauc_precision_at_10_diff1
value: 23.346600000000002
- type: nauc_precision_at_20_max
value: 29.6844
- type: nauc_precision_at_20_std
value: 9.4699
- type: nauc_precision_at_20_diff1
value: 15.9193
- type: nauc_precision_at_100_max
value: 29.7036
- type: nauc_precision_at_100_std
value: 19.0186
- type: nauc_precision_at_100_diff1
value: 5.9221
- type: nauc_precision_at_1000_max
value: 24.6994
- type: nauc_precision_at_1000_std
value: 18.0033
- type: nauc_precision_at_1000_diff1
value: -3.2275
- type: nauc_mrr_at_1_max
value: 30.5442
- type: nauc_mrr_at_1_std
value: -10.3888
- type: nauc_mrr_at_1_diff1
value: 52.476
- type: nauc_mrr_at_3_max
value: 29.7504
- type: nauc_mrr_at_3_std
value: -9.5234
- type: nauc_mrr_at_3_diff1
value: 46.5068
- type: nauc_mrr_at_5_max
value: 30.341099999999997
- type: nauc_mrr_at_5_std
value: -8.4966
- type: nauc_mrr_at_5_diff1
value: 46.051199999999994
- type: nauc_mrr_at_10_max
value: 30.6066
- type: nauc_mrr_at_10_std
value: -7.8854
- type: nauc_mrr_at_10_diff1
value: 46.035199999999996
- type: nauc_mrr_at_20_max
value: 30.570199999999996
- type: nauc_mrr_at_20_std
value: -7.614700000000001
- type: nauc_mrr_at_20_diff1
value: 45.8861
- type: nauc_mrr_at_100_max
value: 30.589100000000002
- type: nauc_mrr_at_100_std
value: -7.5529
- type: nauc_mrr_at_100_diff1
value: 45.907
- type: nauc_mrr_at_1000_max
value: 30.587799999999998
- type: nauc_mrr_at_1000_std
value: -7.5716
- type: nauc_mrr_at_1000_diff1
value: 45.9244
- type: main_score
value: 42.732
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: ndcg_at_1
value: 18.843
- type: ndcg_at_3
value: 22.131
- type: ndcg_at_5
value: 23.772
- type: ndcg_at_10
value: 25.661
- type: ndcg_at_20
value: 27.939999999999998
- type: ndcg_at_100
value: 31.645
- type: ndcg_at_1000
value: 34.687
- type: map_at_1
value: 16.194
- type: map_at_3
value: 20.068
- type: map_at_5
value: 21.075
- type: map_at_10
value: 21.913
- type: map_at_20
value: 22.569
- type: map_at_100
value: 23.107
- type: map_at_1000
value: 23.23
- type: recall_at_1
value: 16.194
- type: recall_at_3
value: 24.704
- type: recall_at_5
value: 28.859
- type: recall_at_10
value: 34.402
- type: recall_at_20
value: 42.714
- type: recall_at_100
value: 61.19799999999999
- type: recall_at_1000
value: 82.953
- type: precision_at_1
value: 18.843
- type: precision_at_3
value: 9.919
- type: precision_at_5
value: 7.071
- type: precision_at_10
value: 4.328
- type: precision_at_20
value: 2.752
- type: precision_at_100
value: 0.823
- type: precision_at_1000
value: 0.121
- type: mrr_at_1
value: 18.8433
- type: mrr_at_3
value: 22.776699999999998
- type: mrr_at_5
value: 23.9055
- type: mrr_at_10
value: 24.7244
- type: mrr_at_20
value: 25.3919
- type: mrr_at_100
value: 25.8783
- type: mrr_at_1000
value: 25.957900000000002
- type: nauc_ndcg_at_1_max
value: 35.1013
- type: nauc_ndcg_at_1_std
value: 4.116899999999999
- type: nauc_ndcg_at_1_diff1
value: 54.3984
- type: nauc_ndcg_at_3_max
value: 35.1035
- type: nauc_ndcg_at_3_std
value: 5.3618
- type: nauc_ndcg_at_3_diff1
value: 47.4455
- type: nauc_ndcg_at_5_max
value: 34.3845
- type: nauc_ndcg_at_5_std
value: 5.4364
- type: nauc_ndcg_at_5_diff1
value: 44.8757
- type: nauc_ndcg_at_10_max
value: 33.4252
- type: nauc_ndcg_at_10_std
value: 7.100099999999999
- type: nauc_ndcg_at_10_diff1
value: 43.0854
- type: nauc_ndcg_at_20_max
value: 33.2135
- type: nauc_ndcg_at_20_std
value: 7.750500000000001
- type: nauc_ndcg_at_20_diff1
value: 42.5065
- type: nauc_ndcg_at_100_max
value: 34.0845
- type: nauc_ndcg_at_100_std
value: 9.0937
- type: nauc_ndcg_at_100_diff1
value: 40.9634
- type: nauc_ndcg_at_1000_max
value: 34.3716
- type: nauc_ndcg_at_1000_std
value: 9.8049
- type: nauc_ndcg_at_1000_diff1
value: 41.606
- type: nauc_map_at_1_max
value: 35.054
- type: nauc_map_at_1_std
value: 3.4526000000000003
- type: nauc_map_at_1_diff1
value: 55.69840000000001
- type: nauc_map_at_3_max
value: 34.861
- type: nauc_map_at_3_std
value: 4.6036
- type: nauc_map_at_3_diff1
value: 49.338
- type: nauc_map_at_5_max
value: 34.3213
- type: nauc_map_at_5_std
value: 4.7856000000000005
- type: nauc_map_at_5_diff1
value: 47.856
- type: nauc_map_at_10_max
value: 33.9813
- type: nauc_map_at_10_std
value: 5.649
- type: nauc_map_at_10_diff1
value: 47.0563
- type: nauc_map_at_20_max
value: 33.8854
- type: nauc_map_at_20_std
value: 5.9026000000000005
- type: nauc_map_at_20_diff1
value: 46.876200000000004
- type: nauc_map_at_100_max
value: 33.996500000000005
- type: nauc_map_at_100_std
value: 6.094200000000001
- type: nauc_map_at_100_diff1
value: 46.6388
- type: nauc_map_at_1000_max
value: 34.0082
- type: nauc_map_at_1000_std
value: 6.1436
- type: nauc_map_at_1000_diff1
value: 46.643
- type: nauc_recall_at_1_max
value: 35.054
- type: nauc_recall_at_1_std
value: 3.4526000000000003
- type: nauc_recall_at_1_diff1
value: 55.69840000000001
- type: nauc_recall_at_3_max
value: 34.2271
- type: nauc_recall_at_3_std
value: 5.573
- type: nauc_recall_at_3_diff1
value: 42.0593
- type: nauc_recall_at_5_max
value: 32.7785
- type: nauc_recall_at_5_std
value: 6.188599999999999
- type: nauc_recall_at_5_diff1
value: 36.9345
- type: nauc_recall_at_10_max
value: 29.7004
- type: nauc_recall_at_10_std
value: 10.3771
- type: nauc_recall_at_10_diff1
value: 31.6352
- type: nauc_recall_at_20_max
value: 28.474100000000004
- type: nauc_recall_at_20_std
value: 12.3244
- type: nauc_recall_at_20_diff1
value: 29.6458
- type: nauc_recall_at_100_max
value: 31.2612
- type: nauc_recall_at_100_std
value: 19.1574
- type: nauc_recall_at_100_diff1
value: 19.7616
- type: nauc_recall_at_1000_max
value: 33.2982
- type: nauc_recall_at_1000_std
value: 36.4068
- type: nauc_recall_at_1000_diff1
value: 15.3188
- type: nauc_precision_at_1_max
value: 35.1013
- type: nauc_precision_at_1_std
value: 4.116899999999999
- type: nauc_precision_at_1_diff1
value: 54.3984
- type: nauc_precision_at_3_max
value: 34.4651
- type: nauc_precision_at_3_std
value: 7.8735
- type: nauc_precision_at_3_diff1
value: 39.7844
- type: nauc_precision_at_5_max
value: 32.2792
- type: nauc_precision_at_5_std
value: 8.465
- type: nauc_precision_at_5_diff1
value: 34.130700000000004
- type: nauc_precision_at_10_max
value: 28.197699999999998
- type: nauc_precision_at_10_std
value: 12.1518
- type: nauc_precision_at_10_diff1
value: 28.672900000000002
- type: nauc_precision_at_20_max
value: 27.2073
- type: nauc_precision_at_20_std
value: 14.113100000000001
- type: nauc_precision_at_20_diff1
value: 23.623
- type: nauc_precision_at_100_max
value: 22.906399999999998
- type: nauc_precision_at_100_std
value: 16.7201
- type: nauc_precision_at_100_diff1
value: 7.0853
- type: nauc_precision_at_1000_max
value: 10.5167
- type: nauc_precision_at_1000_std
value: 11.5017
- type: nauc_precision_at_1000_diff1
value: -6.6079
- type: nauc_mrr_at_1_max
value: 35.1013
- type: nauc_mrr_at_1_std
value: 4.116899999999999
- type: nauc_mrr_at_1_diff1
value: 54.3984
- type: nauc_mrr_at_3_max
value: 35.489399999999996
- type: nauc_mrr_at_3_std
value: 5.097700000000001
- type: nauc_mrr_at_3_diff1
value: 48.8783
- type: nauc_mrr_at_5_max
value: 35.2093
- type: nauc_mrr_at_5_std
value: 5.2317
- type: nauc_mrr_at_5_diff1
value: 47.3602
- type: nauc_mrr_at_10_max
value: 34.731
- type: nauc_mrr_at_10_std
value: 5.7762
- type: nauc_mrr_at_10_diff1
value: 46.495999999999995
- type: nauc_mrr_at_20_max
value: 34.6509
- type: nauc_mrr_at_20_std
value: 5.8511
- type: nauc_mrr_at_20_diff1
value: 46.386500000000005
- type: nauc_mrr_at_100_max
value: 34.7761
- type: nauc_mrr_at_100_std
value: 6.0355
- type: nauc_mrr_at_100_diff1
value: 46.2476
- type: nauc_mrr_at_1000_max
value: 34.792699999999996
- type: nauc_mrr_at_1000_std
value: 6.0607
- type: nauc_mrr_at_1000_diff1
value: 46.281800000000004
- type: main_score
value: 25.661
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVERHardNegatives (default)
type: mteb/ClimateFEVER_test_top_250_only_w_correct-v2
config: default
split: test
revision: 3a309e201f3c2c4b13bd4a367a8f37eee2ec1d21
metrics:
- type: ndcg_at_1
value: 16.8
- type: ndcg_at_3
value: 15.503
- type: ndcg_at_5
value: 17.5
- type: ndcg_at_10
value: 20.642
- type: ndcg_at_20
value: 23.07
- type: ndcg_at_100
value: 27.639000000000003
- type: ndcg_at_1000
value: 32.041
- type: map_at_1
value: 7.885000000000001
- type: map_at_3
value: 11.128
- type: map_at_5
value: 12.565999999999999
- type: map_at_10
value: 13.876
- type: map_at_20
value: 14.66
- type: map_at_100
value: 15.432000000000002
- type: map_at_1000
value: 15.655
- type: recall_at_1
value: 7.885000000000001
- type: recall_at_3
value: 14.957
- type: recall_at_5
value: 19.675
- type: recall_at_10
value: 26.868
- type: recall_at_20
value: 33.94
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 76.822
- type: precision_at_1
value: 16.8
- type: precision_at_3
value: 11.533
- type: precision_at_5
value: 9.56
- type: precision_at_10
value: 6.83
- type: precision_at_20
value: 4.41
- type: precision_at_100
value: 1.432
- type: precision_at_1000
value: 0.22499999999999998
- type: mrr_at_1
value: 16.8
- type: mrr_at_3
value: 23.2333
- type: mrr_at_5
value: 25.2183
- type: mrr_at_10
value: 26.775
- type: mrr_at_20
value: 27.4121
- type: mrr_at_100
value: 27.882299999999997
- type: mrr_at_1000
value: 27.9472
- type: nauc_ndcg_at_1_max
value: 28.3609
- type: nauc_ndcg_at_1_std
value: 10.5951
- type: nauc_ndcg_at_1_diff1
value: 16.566
- type: nauc_ndcg_at_3_max
value: 33.3794
- type: nauc_ndcg_at_3_std
value: 14.645900000000001
- type: nauc_ndcg_at_3_diff1
value: 15.4617
- type: nauc_ndcg_at_5_max
value: 33.5092
- type: nauc_ndcg_at_5_std
value: 16.209699999999998
- type: nauc_ndcg_at_5_diff1
value: 16.7386
- type: nauc_ndcg_at_10_max
value: 37.101299999999995
- type: nauc_ndcg_at_10_std
value: 20.939
- type: nauc_ndcg_at_10_diff1
value: 15.1232
- type: nauc_ndcg_at_20_max
value: 38.3563
- type: nauc_ndcg_at_20_std
value: 22.3038
- type: nauc_ndcg_at_20_diff1
value: 14.613100000000001
- type: nauc_ndcg_at_100_max
value: 39.5793
- type: nauc_ndcg_at_100_std
value: 23.3348
- type: nauc_ndcg_at_100_diff1
value: 13.6571
- type: nauc_ndcg_at_1000_max
value: 39.2582
- type: nauc_ndcg_at_1000_std
value: 22.5989
- type: nauc_ndcg_at_1000_diff1
value: 12.6784
- type: nauc_map_at_1_max
value: 36.9819
- type: nauc_map_at_1_std
value: 11.5065
- type: nauc_map_at_1_diff1
value: 22.4791
- type: nauc_map_at_3_max
value: 35.324299999999994
- type: nauc_map_at_3_std
value: 13.572000000000001
- type: nauc_map_at_3_diff1
value: 19.3415
- type: nauc_map_at_5_max
value: 35.0138
- type: nauc_map_at_5_std
value: 14.857600000000001
- type: nauc_map_at_5_diff1
value: 19.5352
- type: nauc_map_at_10_max
value: 36.8267
- type: nauc_map_at_10_std
value: 17.6287
- type: nauc_map_at_10_diff1
value: 18.2802
- type: nauc_map_at_20_max
value: 37.5214
- type: nauc_map_at_20_std
value: 18.319399999999998
- type: nauc_map_at_20_diff1
value: 18.0343
- type: nauc_map_at_100_max
value: 37.933499999999995
- type: nauc_map_at_100_std
value: 18.6864
- type: nauc_map_at_100_diff1
value: 17.7119
- type: nauc_map_at_1000_max
value: 37.9509
- type: nauc_map_at_1000_std
value: 18.6975
- type: nauc_map_at_1000_diff1
value: 17.5997
- type: nauc_recall_at_1_max
value: 36.9819
- type: nauc_recall_at_1_std
value: 11.5065
- type: nauc_recall_at_1_diff1
value: 22.4791
- type: nauc_recall_at_3_max
value: 33.0875
- type: nauc_recall_at_3_std
value: 16.3976
- type: nauc_recall_at_3_diff1
value: 15.6164
- type: nauc_recall_at_5_max
value: 30.604799999999997
- type: nauc_recall_at_5_std
value: 17.1699
- type: nauc_recall_at_5_diff1
value: 15.639800000000001
- type: nauc_recall_at_10_max
value: 35.342400000000005
- type: nauc_recall_at_10_std
value: 24.665599999999998
- type: nauc_recall_at_10_diff1
value: 11.9499
- type: nauc_recall_at_20_max
value: 35.956700000000005
- type: nauc_recall_at_20_std
value: 26.556800000000003
- type: nauc_recall_at_20_diff1
value: 10.0239
- type: nauc_recall_at_100_max
value: 36.1012
- type: nauc_recall_at_100_std
value: 27.8055
- type: nauc_recall_at_100_diff1
value: 6.3591
- type: nauc_recall_at_1000_max
value: 34.7202
- type: nauc_recall_at_1000_std
value: 26.378
- type: nauc_recall_at_1000_diff1
value: -0.7171000000000001
- type: nauc_precision_at_1_max
value: 28.3609
- type: nauc_precision_at_1_std
value: 10.5951
- type: nauc_precision_at_1_diff1
value: 16.566
- type: nauc_precision_at_3_max
value: 30.490000000000002
- type: nauc_precision_at_3_std
value: 16.270899999999997
- type: nauc_precision_at_3_diff1
value: 9.7026
- type: nauc_precision_at_5_max
value: 29.3491
- type: nauc_precision_at_5_std
value: 19.084699999999998
- type: nauc_precision_at_5_diff1
value: 10.7809
- type: nauc_precision_at_10_max
value: 34.753699999999995
- type: nauc_precision_at_10_std
value: 28.155
- type: nauc_precision_at_10_diff1
value: 5.6554
- type: nauc_precision_at_20_max
value: 33.3812
- type: nauc_precision_at_20_std
value: 27.122400000000003
- type: nauc_precision_at_20_diff1
value: 3.6636
- type: nauc_precision_at_100_max
value: 28.7799
- type: nauc_precision_at_100_std
value: 23.9905
- type: nauc_precision_at_100_diff1
value: -0.5301
- type: nauc_precision_at_1000_max
value: 13.068399999999999
- type: nauc_precision_at_1000_std
value: 12.9133
- type: nauc_precision_at_1000_diff1
value: -8.8717
- type: nauc_mrr_at_1_max
value: 28.3609
- type: nauc_mrr_at_1_std
value: 10.5951
- type: nauc_mrr_at_1_diff1
value: 16.566
- type: nauc_mrr_at_3_max
value: 30.9311
- type: nauc_mrr_at_3_std
value: 13.9549
- type: nauc_mrr_at_3_diff1
value: 12.851399999999998
- type: nauc_mrr_at_5_max
value: 30.893700000000003
- type: nauc_mrr_at_5_std
value: 14.464599999999999
- type: nauc_mrr_at_5_diff1
value: 13.2001
- type: nauc_mrr_at_10_max
value: 32.277499999999996
- type: nauc_mrr_at_10_std
value: 15.9378
- type: nauc_mrr_at_10_diff1
value: 12.9887
- type: nauc_mrr_at_20_max
value: 32.3817
- type: nauc_mrr_at_20_std
value: 16.0469
- type: nauc_mrr_at_20_diff1
value: 13.039200000000001
- type: nauc_mrr_at_100_max
value: 32.386900000000004
- type: nauc_mrr_at_100_std
value: 15.966800000000001
- type: nauc_mrr_at_100_diff1
value: 12.982
- type: nauc_mrr_at_1000_max
value: 32.347300000000004
- type: nauc_mrr_at_1000_std
value: 15.9096
- type: nauc_mrr_at_1000_diff1
value: 12.9742
- type: main_score
value: 20.642
- task:
type: Retrieval
dataset:
name: MTEB FEVERHardNegatives (default)
type: mteb/FEVER_test_top_250_only_w_correct-v2
config: default
split: test
revision: 080c9ed6267b65029207906e815d44a9240bafca
metrics:
- type: ndcg_at_1
value: 46.9
- type: ndcg_at_3
value: 57.825
- type: ndcg_at_5
value: 61.245000000000005
- type: ndcg_at_10
value: 63.836000000000006
- type: ndcg_at_20
value: 65.408
- type: ndcg_at_100
value: 66.796
- type: ndcg_at_1000
value: 67.216
- type: map_at_1
value: 43.999
- type: map_at_3
value: 53.813
- type: map_at_5
value: 55.741
- type: map_at_10
value: 56.852999999999994
- type: map_at_20
value: 57.30800000000001
- type: map_at_100
value: 57.54
- type: map_at_1000
value: 57.56099999999999
- type: recall_at_1
value: 43.999
- type: recall_at_3
value: 66.184
- type: recall_at_5
value: 74.557
- type: recall_at_10
value: 82.394
- type: recall_at_20
value: 88.51
- type: recall_at_100
value: 95.253
- type: recall_at_1000
value: 98.031
- type: precision_at_1
value: 46.9
- type: precision_at_3
value: 23.599999999999998
- type: precision_at_5
value: 15.98
- type: precision_at_10
value: 8.85
- type: precision_at_20
value: 4.760000000000001
- type: precision_at_100
value: 1.045
- type: precision_at_1000
value: 0.11
- type: mrr_at_1
value: 46.9
- type: mrr_at_3
value: 57.0167
- type: mrr_at_5
value: 59.046699999999994
- type: mrr_at_10
value: 60.1422
- type: mrr_at_20
value: 60.535799999999995
- type: mrr_at_100
value: 60.716
- type: mrr_at_1000
value: 60.7232
- type: nauc_ndcg_at_1_max
value: 12.741900000000001
- type: nauc_ndcg_at_1_std
value: -20.011000000000003
- type: nauc_ndcg_at_1_diff1
value: 51.02100000000001
- type: nauc_ndcg_at_3_max
value: 17.416400000000003
- type: nauc_ndcg_at_3_std
value: -20.9336
- type: nauc_ndcg_at_3_diff1
value: 46.3134
- type: nauc_ndcg_at_5_max
value: 18.2369
- type: nauc_ndcg_at_5_std
value: -21.5645
- type: nauc_ndcg_at_5_diff1
value: 46.261799999999994
- type: nauc_ndcg_at_10_max
value: 18.8528
- type: nauc_ndcg_at_10_std
value: -20.6893
- type: nauc_ndcg_at_10_diff1
value: 46.5862
- type: nauc_ndcg_at_20_max
value: 18.0211
- type: nauc_ndcg_at_20_std
value: -19.652
- type: nauc_ndcg_at_20_diff1
value: 46.5482
- type: nauc_ndcg_at_100_max
value: 17.766000000000002
- type: nauc_ndcg_at_100_std
value: -18.7245
- type: nauc_ndcg_at_100_diff1
value: 47.0345
- type: nauc_ndcg_at_1000_max
value: 17.596500000000002
- type: nauc_ndcg_at_1000_std
value: -19.0628
- type: nauc_ndcg_at_1000_diff1
value: 47.12
- type: nauc_map_at_1_max
value: 13.017599999999998
- type: nauc_map_at_1_std
value: -18.8296
- type: nauc_map_at_1_diff1
value: 49.8762
- type: nauc_map_at_3_max
value: 16.2438
- type: nauc_map_at_3_std
value: -20.1711
- type: nauc_map_at_3_diff1
value: 47.2236
- type: nauc_map_at_5_max
value: 16.541
- type: nauc_map_at_5_std
value: -20.4952
- type: nauc_map_at_5_diff1
value: 47.1971
- type: nauc_map_at_10_max
value: 16.7266
- type: nauc_map_at_10_std
value: -20.1189
- type: nauc_map_at_10_diff1
value: 47.2762
- type: nauc_map_at_20_max
value: 16.5198
- type: nauc_map_at_20_std
value: -19.8167
- type: nauc_map_at_20_diff1
value: 47.266799999999996
- type: nauc_map_at_100_max
value: 16.467200000000002
- type: nauc_map_at_100_std
value: -19.7016
- type: nauc_map_at_100_diff1
value: 47.3389
- type: nauc_map_at_1000_max
value: 16.466900000000003
- type: nauc_map_at_1000_std
value: -19.704
- type: nauc_map_at_1000_diff1
value: 47.341
- type: nauc_recall_at_1_max
value: 13.017599999999998
- type: nauc_recall_at_1_std
value: -18.8296
- type: nauc_recall_at_1_diff1
value: 49.8762
- type: nauc_recall_at_3_max
value: 20.579700000000003
- type: nauc_recall_at_3_std
value: -21.263399999999997
- type: nauc_recall_at_3_diff1
value: 40.7412
- type: nauc_recall_at_5_max
value: 23.308799999999998
- type: nauc_recall_at_5_std
value: -23.0915
- type: nauc_recall_at_5_diff1
value: 38.2001
- type: nauc_recall_at_10_max
value: 27.296
- type: nauc_recall_at_10_std
value: -19.2697
- type: nauc_recall_at_10_diff1
value: 35.9711
- type: nauc_recall_at_20_max
value: 23.9957
- type: nauc_recall_at_20_std
value: -10.1564
- type: nauc_recall_at_20_diff1
value: 30.5332
- type: nauc_recall_at_100_max
value: 27.0148
- type: nauc_recall_at_100_std
value: 25.655299999999997
- type: nauc_recall_at_100_diff1
value: 23.1136
- type: nauc_recall_at_1000_max
value: 28.9392
- type: nauc_recall_at_1000_std
value: 47.491
- type: nauc_recall_at_1000_diff1
value: 15.6225
- type: nauc_precision_at_1_max
value: 12.741900000000001
- type: nauc_precision_at_1_std
value: -20.011000000000003
- type: nauc_precision_at_1_diff1
value: 51.02100000000001
- type: nauc_precision_at_3_max
value: 20.477999999999998
- type: nauc_precision_at_3_std
value: -24.4646
- type: nauc_precision_at_3_diff1
value: 41.1551
- type: nauc_precision_at_5_max
value: 24.364
- type: nauc_precision_at_5_std
value: -27.1997
- type: nauc_precision_at_5_diff1
value: 38.9501
- type: nauc_precision_at_10_max
value: 30.684299999999997
- type: nauc_precision_at_10_std
value: -23.1531
- type: nauc_precision_at_10_diff1
value: 34.6829
- type: nauc_precision_at_20_max
value: 24.1828
- type: nauc_precision_at_20_std
value: -10.783800000000001
- type: nauc_precision_at_20_diff1
value: 22.662399999999998
- type: nauc_precision_at_100_max
value: 12.189
- type: nauc_precision_at_100_std
value: 10.600999999999999
- type: nauc_precision_at_100_diff1
value: -0.2197
- type: nauc_precision_at_1000_max
value: 1.1533
- type: nauc_precision_at_1000_std
value: 6.2423
- type: nauc_precision_at_1000_diff1
value: -10.4662
- type: nauc_mrr_at_1_max
value: 12.741900000000001
- type: nauc_mrr_at_1_std
value: -20.011000000000003
- type: nauc_mrr_at_1_diff1
value: 51.02100000000001
- type: nauc_mrr_at_3_max
value: 16.4501
- type: nauc_mrr_at_3_std
value: -21.337500000000002
- type: nauc_mrr_at_3_diff1
value: 48.4594
- type: nauc_mrr_at_5_max
value: 16.8928
- type: nauc_mrr_at_5_std
value: -21.7254
- type: nauc_mrr_at_5_diff1
value: 48.619299999999996
- type: nauc_mrr_at_10_max
value: 17.0057
- type: nauc_mrr_at_10_std
value: -21.465899999999998
- type: nauc_mrr_at_10_diff1
value: 48.848200000000006
- type: nauc_mrr_at_20_max
value: 16.745099999999997
- type: nauc_mrr_at_20_std
value: -21.2914
- type: nauc_mrr_at_20_diff1
value: 48.861900000000006
- type: nauc_mrr_at_100_max
value: 16.653399999999998
- type: nauc_mrr_at_100_std
value: -21.1954
- type: nauc_mrr_at_100_diff1
value: 48.9097
- type: nauc_mrr_at_1000_max
value: 16.650000000000002
- type: nauc_mrr_at_1000_std
value: -21.2048
- type: nauc_mrr_at_1000_diff1
value: 48.911500000000004
- type: main_score
value: 63.836000000000006
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: ndcg_at_1
value: 25.154
- type: ndcg_at_3
value: 22.85
- type: ndcg_at_5
value: 23.788999999999998
- type: ndcg_at_10
value: 25.657000000000004
- type: ndcg_at_20
value: 28.058
- type: ndcg_at_100
value: 32.019999999999996
- type: ndcg_at_1000
value: 36.124
- type: map_at_1
value: 12.594
- type: map_at_3
value: 17.345
- type: map_at_5
value: 18.740000000000002
- type: map_at_10
value: 19.871
- type: map_at_20
value: 20.71
- type: map_at_100
value: 21.404
- type: map_at_1000
value: 21.616
- type: recall_at_1
value: 12.594
- type: recall_at_3
value: 20.682000000000002
- type: recall_at_5
value: 24.735
- type: recall_at_10
value: 30.217
- type: recall_at_20
value: 37.714999999999996
- type: recall_at_100
value: 54.364000000000004
- type: recall_at_1000
value: 79.487
- type: precision_at_1
value: 25.154
- type: precision_at_3
value: 15.174999999999999
- type: precision_at_5
value: 11.235000000000001
- type: precision_at_10
value: 7.13
- type: precision_at_20
value: 4.522
- type: precision_at_100
value: 1.341
- type: precision_at_1000
value: 0.20500000000000002
- type: mrr_at_1
value: 25.154300000000003
- type: mrr_at_3
value: 30.324099999999998
- type: mrr_at_5
value: 31.581799999999998
- type: mrr_at_10
value: 32.5208
- type: mrr_at_20
value: 33.055
- type: mrr_at_100
value: 33.4738
- type: mrr_at_1000
value: 33.5533
- type: nauc_ndcg_at_1_max
value: 20.836199999999998
- type: nauc_ndcg_at_1_std
value: -2.4346
- type: nauc_ndcg_at_1_diff1
value: 41.3264
- type: nauc_ndcg_at_3_max
value: 21.4673
- type: nauc_ndcg_at_3_std
value: -0.35760000000000003
- type: nauc_ndcg_at_3_diff1
value: 36.5457
- type: nauc_ndcg_at_5_max
value: 21.0022
- type: nauc_ndcg_at_5_std
value: 0.30079999999999996
- type: nauc_ndcg_at_5_diff1
value: 35.1377
- type: nauc_ndcg_at_10_max
value: 21.4511
- type: nauc_ndcg_at_10_std
value: 1.9931
- type: nauc_ndcg_at_10_diff1
value: 35.367599999999996
- type: nauc_ndcg_at_20_max
value: 21.9794
- type: nauc_ndcg_at_20_std
value: 3.2666
- type: nauc_ndcg_at_20_diff1
value: 33.9954
- type: nauc_ndcg_at_100_max
value: 22.666900000000002
- type: nauc_ndcg_at_100_std
value: 6.1648000000000005
- type: nauc_ndcg_at_100_diff1
value: 32.5715
- type: nauc_ndcg_at_1000_max
value: 23.9645
- type: nauc_ndcg_at_1000_std
value: 7.031
- type: nauc_ndcg_at_1000_diff1
value: 32.6535
- type: nauc_map_at_1_max
value: 13.436699999999998
- type: nauc_map_at_1_std
value: -6.1377
- type: nauc_map_at_1_diff1
value: 46.1518
- type: nauc_map_at_3_max
value: 17.6491
- type: nauc_map_at_3_std
value: -3.3383000000000003
- type: nauc_map_at_3_diff1
value: 39.909800000000004
- type: nauc_map_at_5_max
value: 18.4969
- type: nauc_map_at_5_std
value: -1.8129
- type: nauc_map_at_5_diff1
value: 38.4072
- type: nauc_map_at_10_max
value: 19.4823
- type: nauc_map_at_10_std
value: -0.2211
- type: nauc_map_at_10_diff1
value: 38.1346
- type: nauc_map_at_20_max
value: 19.9898
- type: nauc_map_at_20_std
value: 0.6002000000000001
- type: nauc_map_at_20_diff1
value: 37.755100000000006
- type: nauc_map_at_100_max
value: 20.2321
- type: nauc_map_at_100_std
value: 1.2189999999999999
- type: nauc_map_at_100_diff1
value: 37.379
- type: nauc_map_at_1000_max
value: 20.3676
- type: nauc_map_at_1000_std
value: 1.3561999999999999
- type: nauc_map_at_1000_diff1
value: 37.3216
- type: nauc_recall_at_1_max
value: 13.436699999999998
- type: nauc_recall_at_1_std
value: -6.1377
- type: nauc_recall_at_1_diff1
value: 46.1518
- type: nauc_recall_at_3_max
value: 17.4283
- type: nauc_recall_at_3_std
value: -2.0456
- type: nauc_recall_at_3_diff1
value: 34.5422
- type: nauc_recall_at_5_max
value: 18.2169
- type: nauc_recall_at_5_std
value: 0.7002
- type: nauc_recall_at_5_diff1
value: 29.7798
- type: nauc_recall_at_10_max
value: 19.6832
- type: nauc_recall_at_10_std
value: 4.6769
- type: nauc_recall_at_10_diff1
value: 27.8829
- type: nauc_recall_at_20_max
value: 20.095
- type: nauc_recall_at_20_std
value: 6.884899999999999
- type: nauc_recall_at_20_diff1
value: 22.7741
- type: nauc_recall_at_100_max
value: 20.5351
- type: nauc_recall_at_100_std
value: 19.2636
- type: nauc_recall_at_100_diff1
value: 16.2238
- type: nauc_recall_at_1000_max
value: 27.9838
- type: nauc_recall_at_1000_std
value: 33.3099
- type: nauc_recall_at_1000_diff1
value: 12.701699999999999
- type: nauc_precision_at_1_max
value: 20.836199999999998
- type: nauc_precision_at_1_std
value: -2.4346
- type: nauc_precision_at_1_diff1
value: 41.3264
- type: nauc_precision_at_3_max
value: 26.558500000000002
- type: nauc_precision_at_3_std
value: 3.6578
- type: nauc_precision_at_3_diff1
value: 27.0323
- type: nauc_precision_at_5_max
value: 28.794199999999996
- type: nauc_precision_at_5_std
value: 8.6533
- type: nauc_precision_at_5_diff1
value: 21.9488
- type: nauc_precision_at_10_max
value: 29.7713
- type: nauc_precision_at_10_std
value: 13.645399999999999
- type: nauc_precision_at_10_diff1
value: 20.1386
- type: nauc_precision_at_20_max
value: 28.0465
- type: nauc_precision_at_20_std
value: 16.3569
- type: nauc_precision_at_20_diff1
value: 14.969299999999999
- type: nauc_precision_at_100_max
value: 26.7123
- type: nauc_precision_at_100_std
value: 19.1407
- type: nauc_precision_at_100_diff1
value: 5.7822
- type: nauc_precision_at_1000_max
value: 23.6681
- type: nauc_precision_at_1000_std
value: 16.3438
- type: nauc_precision_at_1000_diff1
value: -3.3699
- type: nauc_mrr_at_1_max
value: 20.836199999999998
- type: nauc_mrr_at_1_std
value: -2.4346
- type: nauc_mrr_at_1_diff1
value: 41.3264
- type: nauc_mrr_at_3_max
value: 22.4267
- type: nauc_mrr_at_3_std
value: -0.1948
- type: nauc_mrr_at_3_diff1
value: 36.9255
- type: nauc_mrr_at_5_max
value: 22.6662
- type: nauc_mrr_at_5_std
value: 0.4444
- type: nauc_mrr_at_5_diff1
value: 35.957
- type: nauc_mrr_at_10_max
value: 22.5111
- type: nauc_mrr_at_10_std
value: 0.7020000000000001
- type: nauc_mrr_at_10_diff1
value: 35.6976
- type: nauc_mrr_at_20_max
value: 22.4416
- type: nauc_mrr_at_20_std
value: 0.8706999999999999
- type: nauc_mrr_at_20_diff1
value: 35.2034
- type: nauc_mrr_at_100_max
value: 22.4571
- type: nauc_mrr_at_100_std
value: 1.0563
- type: nauc_mrr_at_100_diff1
value: 35.177
- type: nauc_mrr_at_1000_max
value: 22.4743
- type: nauc_mrr_at_1000_std
value: 1.0505
- type: nauc_mrr_at_1000_diff1
value: 35.2186
- type: main_score
value: 25.657000000000004
- task:
type: Retrieval
dataset:
name: MTEB HotpotQAHardNegatives (default)
type: mteb/HotpotQA_test_top_250_only_w_correct-v2
config: default
split: test
revision: 617612fa63afcb60e3b134bed8b7216a99707c37
metrics:
- type: ndcg_at_1
value: 58.9
- type: ndcg_at_3
value: 45.092999999999996
- type: ndcg_at_5
value: 47.806
- type: ndcg_at_10
value: 50.666
- type: ndcg_at_20
value: 52.644000000000005
- type: ndcg_at_100
value: 56.071000000000005
- type: ndcg_at_1000
value: 58.262
- type: map_at_1
value: 29.45
- type: map_at_3
value: 37.675
- type: map_at_5
value: 39.562999999999995
- type: map_at_10
value: 41.056
- type: map_at_20
value: 41.765
- type: map_at_100
value: 42.425000000000004
- type: map_at_1000
value: 42.54
- type: recall_at_1
value: 29.45
- type: recall_at_3
value: 41.75
- type: recall_at_5
value: 47.099999999999994
- type: recall_at_10
value: 54.300000000000004
- type: recall_at_20
value: 60.699999999999996
- type: recall_at_100
value: 75.9
- type: recall_at_1000
value: 90.3
- type: precision_at_1
value: 58.9
- type: precision_at_3
value: 27.833000000000002
- type: precision_at_5
value: 18.84
- type: precision_at_10
value: 10.86
- type: precision_at_20
value: 6.069999999999999
- type: precision_at_100
value: 1.518
- type: precision_at_1000
value: 0.181
- type: mrr_at_1
value: 58.9
- type: mrr_at_3
value: 64.81670000000001
- type: mrr_at_5
value: 65.9717
- type: mrr_at_10
value: 66.84750000000001
- type: mrr_at_20
value: 67.1864
- type: mrr_at_100
value: 67.3796
- type: mrr_at_1000
value: 67.3962
- type: nauc_ndcg_at_1_max
value: 40.6699
- type: nauc_ndcg_at_1_std
value: -6.4051
- type: nauc_ndcg_at_1_diff1
value: 61.4074
- type: nauc_ndcg_at_3_max
value: 36.086200000000005
- type: nauc_ndcg_at_3_std
value: -3.8372
- type: nauc_ndcg_at_3_diff1
value: 44.0991
- type: nauc_ndcg_at_5_max
value: 35.1661
- type: nauc_ndcg_at_5_std
value: -3.4778000000000002
- type: nauc_ndcg_at_5_diff1
value: 41.2298
- type: nauc_ndcg_at_10_max
value: 34.5689
- type: nauc_ndcg_at_10_std
value: -0.7254
- type: nauc_ndcg_at_10_diff1
value: 38.9824
- type: nauc_ndcg_at_20_max
value: 35.4153
- type: nauc_ndcg_at_20_std
value: 0.9502999999999999
- type: nauc_ndcg_at_20_diff1
value: 38.5558
- type: nauc_ndcg_at_100_max
value: 36.187799999999996
- type: nauc_ndcg_at_100_std
value: 3.3059
- type: nauc_ndcg_at_100_diff1
value: 37.775
- type: nauc_ndcg_at_1000_max
value: 36.9076
- type: nauc_ndcg_at_1000_std
value: 3.2030000000000003
- type: nauc_ndcg_at_1000_diff1
value: 39.6691
- type: nauc_map_at_1_max
value: 40.6699
- type: nauc_map_at_1_std
value: -6.4051
- type: nauc_map_at_1_diff1
value: 61.4074
- type: nauc_map_at_3_max
value: 34.8654
- type: nauc_map_at_3_std
value: -1.9401000000000002
- type: nauc_map_at_3_diff1
value: 40.4559
- type: nauc_map_at_5_max
value: 34.0362
- type: nauc_map_at_5_std
value: -1.677
- type: nauc_map_at_5_diff1
value: 38.384
- type: nauc_map_at_10_max
value: 33.8136
- type: nauc_map_at_10_std
value: -0.2753
- type: nauc_map_at_10_diff1
value: 37.1326
- type: nauc_map_at_20_max
value: 34.1981
- type: nauc_map_at_20_std
value: 0.2882
- type: nauc_map_at_20_diff1
value: 36.996
- type: nauc_map_at_100_max
value: 34.2694
- type: nauc_map_at_100_std
value: 0.596
- type: nauc_map_at_100_diff1
value: 36.858200000000004
- type: nauc_map_at_1000_max
value: 34.3301
- type: nauc_map_at_1000_std
value: 0.6459
- type: nauc_map_at_1000_diff1
value: 36.9437
- type: nauc_recall_at_1_max
value: 40.6699
- type: nauc_recall_at_1_std
value: -6.4051
- type: nauc_recall_at_1_diff1
value: 61.4074
- type: nauc_recall_at_3_max
value: 33.4227
- type: nauc_recall_at_3_std
value: -2.6978
- type: nauc_recall_at_3_diff1
value: 35.5329
- type: nauc_recall_at_5_max
value: 29.759900000000002
- type: nauc_recall_at_5_std
value: -1.7928
- type: nauc_recall_at_5_diff1
value: 27.8553
- type: nauc_recall_at_10_max
value: 27.2765
- type: nauc_recall_at_10_std
value: 5.0284
- type: nauc_recall_at_10_diff1
value: 21.5188
- type: nauc_recall_at_20_max
value: 27.456500000000002
- type: nauc_recall_at_20_std
value: 10.4452
- type: nauc_recall_at_20_diff1
value: 17.377100000000002
- type: nauc_recall_at_100_max
value: 27.960400000000003
- type: nauc_recall_at_100_std
value: 26.0653
- type: nauc_recall_at_100_diff1
value: 5.9226
- type: nauc_recall_at_1000_max
value: 33.996700000000004
- type: nauc_recall_at_1000_std
value: 44.291199999999996
- type: nauc_recall_at_1000_diff1
value: 7.6986
- type: nauc_precision_at_1_max
value: 40.6699
- type: nauc_precision_at_1_std
value: -6.4051
- type: nauc_precision_at_1_diff1
value: 61.4074
- type: nauc_precision_at_3_max
value: 33.4227
- type: nauc_precision_at_3_std
value: -2.6978
- type: nauc_precision_at_3_diff1
value: 35.5329
- type: nauc_precision_at_5_max
value: 29.759900000000002
- type: nauc_precision_at_5_std
value: -1.7928
- type: nauc_precision_at_5_diff1
value: 27.8553
- type: nauc_precision_at_10_max
value: 27.2765
- type: nauc_precision_at_10_std
value: 5.0284
- type: nauc_precision_at_10_diff1
value: 21.5188
- type: nauc_precision_at_20_max
value: 27.456500000000002
- type: nauc_precision_at_20_std
value: 10.4452
- type: nauc_precision_at_20_diff1
value: 17.377100000000002
- type: nauc_precision_at_100_max
value: 27.960400000000003
- type: nauc_precision_at_100_std
value: 26.0653
- type: nauc_precision_at_100_diff1
value: 5.9226
- type: nauc_precision_at_1000_max
value: 33.996700000000004
- type: nauc_precision_at_1000_std
value: 44.291199999999996
- type: nauc_precision_at_1000_diff1
value: 7.6986
- type: nauc_mrr_at_1_max
value: 40.6699
- type: nauc_mrr_at_1_std
value: -6.4051
- type: nauc_mrr_at_1_diff1
value: 61.4074
- type: nauc_mrr_at_3_max
value: 40.4193
- type: nauc_mrr_at_3_std
value: -8.072899999999999
- type: nauc_mrr_at_3_diff1
value: 58.589400000000005
- type: nauc_mrr_at_5_max
value: 40.6559
- type: nauc_mrr_at_5_std
value: -8.1937
- type: nauc_mrr_at_5_diff1
value: 58.30650000000001
- type: nauc_mrr_at_10_max
value: 40.515699999999995
- type: nauc_mrr_at_10_std
value: -7.4325
- type: nauc_mrr_at_10_diff1
value: 58.1284
- type: nauc_mrr_at_20_max
value: 40.63
- type: nauc_mrr_at_20_std
value: -7.1578
- type: nauc_mrr_at_20_diff1
value: 58.215799999999994
- type: nauc_mrr_at_100_max
value: 40.693
- type: nauc_mrr_at_100_std
value: -7.0889
- type: nauc_mrr_at_100_diff1
value: 58.22389999999999
- type: nauc_mrr_at_1000_max
value: 40.700900000000004
- type: nauc_mrr_at_1000_std
value: -7.098400000000001
- type: nauc_mrr_at_1000_diff1
value: 58.2458
- type: main_score
value: 50.666
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 68.1712
- type: f1
value: 67.982
- type: f1_weighted
value: 67.982
- type: ap
value: 62.572799999999994
- type: ap_weighted
value: 62.572799999999994
- type: main_score
value: 68.1712
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.4423
- type: f1
value: 90.08840000000001
- type: f1_weighted
value: 90.44919999999999
- type: main_score
value: 90.4423
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 65.4371
- type: f1
value: 62.8737
- type: f1_weighted
value: 64.2218
- type: main_score
value: 65.4371
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 70.4371
- type: f1
value: 69.75200000000001
- type: f1_weighted
value: 69.7839
- type: main_score
value: 70.4371
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P.v2 (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 35.1864
- type: v_measure_std
value: 0.7835
- type: main_score
value: 35.1864
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S.v2 (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.8693
- type: v_measure_std
value: 0.662
- type: main_score
value: 31.8693
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: map
value: 29.4367
- type: mrr
value: 30.318299999999997
- type: nAUC_map_max
value: -21.5343
- type: nAUC_map_std
value: -6.4848
- type: nAUC_map_diff1
value: 12.8559
- type: nAUC_mrr_max
value: -15.981200000000001
- type: nAUC_mrr_std
value: -4.2437000000000005
- type: nAUC_mrr_diff1
value: 12.4087
- type: main_score
value: 29.4367
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_3
value: 15.673
- type: ndcg_at_5
value: 13.389000000000001
- type: ndcg_at_10
value: 16.179
- type: ndcg_at_20
value: 18.88
- type: ndcg_at_100
value: 23.812
- type: ndcg_at_1000
value: 29.833
- type: map_at_1
value: 3.963
- type: map_at_3
value: 6.93
- type: map_at_5
value: 8.062
- type: map_at_10
value: 9.328
- type: map_at_20
value: 10.283000000000001
- type: map_at_100
value: 11.197
- type: map_at_1000
value: 11.522
- type: recall_at_1
value: 3.963
- type: recall_at_3
value: 8.813
- type: recall_at_5
value: 11.658
- type: recall_at_10
value: 16.803
- type: recall_at_20
value: 23.169999999999998
- type: recall_at_100
value: 39.163
- type: recall_at_1000
value: 68.572
- type: precision_at_1
value: 19.5
- type: precision_at_3
value: 14.499999999999998
- type: precision_at_5
value: 11.5
- type: precision_at_10
value: 8.3
- type: precision_at_20
value: 5.71
- type: precision_at_100
value: 1.9300000000000002
- type: precision_at_1000
value: 0.338
- type: mrr_at_1
value: 19.5
- type: mrr_at_3
value: 26.016699999999997
- type: mrr_at_5
value: 27.526699999999998
- type: mrr_at_10
value: 28.9305
- type: mrr_at_20
value: 29.628100000000003
- type: mrr_at_100
value: 30.131400000000003
- type: mrr_at_1000
value: 30.201800000000002
- type: nauc_ndcg_at_1_max
value: 25.1197
- type: nauc_ndcg_at_1_std
value: 4.7176
- type: nauc_ndcg_at_1_diff1
value: 24.2336
- type: nauc_ndcg_at_3_max
value: 30.050900000000002
- type: nauc_ndcg_at_3_std
value: 11.4719
- type: nauc_ndcg_at_3_diff1
value: 20.4572
- type: nauc_ndcg_at_5_max
value: 32.224399999999996
- type: nauc_ndcg_at_5_std
value: 15.0585
- type: nauc_ndcg_at_5_diff1
value: 19.991600000000002
- type: nauc_ndcg_at_10_max
value: 33.7156
- type: nauc_ndcg_at_10_std
value: 19.2797
- type: nauc_ndcg_at_10_diff1
value: 20.3735
- type: nauc_ndcg_at_20_max
value: 34.7518
- type: nauc_ndcg_at_20_std
value: 23.227600000000002
- type: nauc_ndcg_at_20_diff1
value: 19.2851
- type: nauc_ndcg_at_100_max
value: 36.6006
- type: nauc_ndcg_at_100_std
value: 28.511599999999998
- type: nauc_ndcg_at_100_diff1
value: 18.0315
- type: nauc_ndcg_at_1000_max
value: 36.3651
- type: nauc_ndcg_at_1000_std
value: 29.7201
- type: nauc_ndcg_at_1000_diff1
value: 16.5988
- type: nauc_map_at_1_max
value: 24.954
- type: nauc_map_at_1_std
value: 4.7878
- type: nauc_map_at_1_diff1
value: 24.7611
- type: nauc_map_at_3_max
value: 30.0634
- type: nauc_map_at_3_std
value: 9.9217
- type: nauc_map_at_3_diff1
value: 21.9063
- type: nauc_map_at_5_max
value: 32.1685
- type: nauc_map_at_5_std
value: 12.8527
- type: nauc_map_at_5_diff1
value: 21.033099999999997
- type: nauc_map_at_10_max
value: 33.840199999999996
- type: nauc_map_at_10_std
value: 16.304299999999998
- type: nauc_map_at_10_diff1
value: 21.9142
- type: nauc_map_at_20_max
value: 34.2084
- type: nauc_map_at_20_std
value: 18.709799999999998
- type: nauc_map_at_20_diff1
value: 21.2113
- type: nauc_map_at_100_max
value: 35.1304
- type: nauc_map_at_100_std
value: 20.8559
- type: nauc_map_at_100_diff1
value: 20.8642
- type: nauc_map_at_1000_max
value: 35.1972
- type: nauc_map_at_1000_std
value: 21.2306
- type: nauc_map_at_1000_diff1
value: 20.7425
- type: nauc_recall_at_1_max
value: 24.954
- type: nauc_recall_at_1_std
value: 4.7878
- type: nauc_recall_at_1_diff1
value: 24.7611
- type: nauc_recall_at_3_max
value: 31.1016
- type: nauc_recall_at_3_std
value: 14.1642
- type: nauc_recall_at_3_diff1
value: 18.676000000000002
- type: nauc_recall_at_5_max
value: 33.8509
- type: nauc_recall_at_5_std
value: 19.503899999999998
- type: nauc_recall_at_5_diff1
value: 17.1764
- type: nauc_recall_at_10_max
value: 34.085300000000004
- type: nauc_recall_at_10_std
value: 25.536199999999997
- type: nauc_recall_at_10_diff1
value: 16.8913
- type: nauc_recall_at_20_max
value: 34.1879
- type: nauc_recall_at_20_std
value: 31.5486
- type: nauc_recall_at_20_diff1
value: 13.852300000000001
- type: nauc_recall_at_100_max
value: 34.313700000000004
- type: nauc_recall_at_100_std
value: 40.6137
- type: nauc_recall_at_100_diff1
value: 9.043800000000001
- type: nauc_recall_at_1000_max
value: 27.090500000000002
- type: nauc_recall_at_1000_std
value: 42.398799999999994
- type: nauc_recall_at_1000_diff1
value: -0.9452999999999999
- type: nauc_precision_at_1_max
value: 25.1197
- type: nauc_precision_at_1_std
value: 4.7176
- type: nauc_precision_at_1_diff1
value: 24.2336
- type: nauc_precision_at_3_max
value: 31.4429
- type: nauc_precision_at_3_std
value: 14.1941
- type: nauc_precision_at_3_diff1
value: 18.4824
- type: nauc_precision_at_5_max
value: 34.2219
- type: nauc_precision_at_5_std
value: 19.703699999999998
- type: nauc_precision_at_5_diff1
value: 17.0964
- type: nauc_precision_at_10_max
value: 34.380300000000005
- type: nauc_precision_at_10_std
value: 25.6554
- type: nauc_precision_at_10_diff1
value: 16.8487
- type: nauc_precision_at_20_max
value: 34.462199999999996
- type: nauc_precision_at_20_std
value: 31.465500000000002
- type: nauc_precision_at_20_diff1
value: 13.9038
- type: nauc_precision_at_100_max
value: 34.7074
- type: nauc_precision_at_100_std
value: 40.3278
- type: nauc_precision_at_100_diff1
value: 9.2637
- type: nauc_precision_at_1000_max
value: 27.213900000000002
- type: nauc_precision_at_1000_std
value: 40.8382
- type: nauc_precision_at_1000_diff1
value: -0.5306
- type: nauc_mrr_at_1_max
value: 25.1197
- type: nauc_mrr_at_1_std
value: 4.7176
- type: nauc_mrr_at_1_diff1
value: 24.2336
- type: nauc_mrr_at_3_max
value: 27.9362
- type: nauc_mrr_at_3_std
value: 9.9578
- type: nauc_mrr_at_3_diff1
value: 20.809
- type: nauc_mrr_at_5_max
value: 29.0381
- type: nauc_mrr_at_5_std
value: 11.7807
- type: nauc_mrr_at_5_diff1
value: 20.8787
- type: nauc_mrr_at_10_max
value: 28.860799999999998
- type: nauc_mrr_at_10_std
value: 12.269
- type: nauc_mrr_at_10_diff1
value: 20.7762
- type: nauc_mrr_at_20_max
value: 29.2051
- type: nauc_mrr_at_20_std
value: 12.7588
- type: nauc_mrr_at_20_diff1
value: 20.9176
- type: nauc_mrr_at_100_max
value: 29.2288
- type: nauc_mrr_at_100_std
value: 12.7523
- type: nauc_mrr_at_100_diff1
value: 20.9235
- type: nauc_mrr_at_1000_max
value: 29.2119
- type: nauc_mrr_at_1000_std
value: 12.697600000000001
- type: nauc_mrr_at_1000_diff1
value: 20.9131
- type: main_score
value: 16.179
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: pearson
value: 84.5347
- type: spearman
value: 79.80850000000001
- type: cosine_pearson
value: 84.5347
- type: cosine_spearman
value: 79.80850000000001
- type: manhattan_pearson
value: 81.0701
- type: manhattan_spearman
value: 79.6721
- type: euclidean_pearson
value: 81.20349999999999
- type: euclidean_spearman
value: 79.80850000000001
- type: main_score
value: 79.80850000000001
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: pearson
value: 86.88
- type: spearman
value: 78.1076
- type: cosine_pearson
value: 86.88
- type: cosine_spearman
value: 78.1052
- type: manhattan_pearson
value: 83.3712
- type: manhattan_spearman
value: 78.0898
- type: euclidean_pearson
value: 83.3731
- type: euclidean_spearman
value: 78.1052
- type: main_score
value: 78.1052
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: pearson
value: 83.5938
- type: spearman
value: 84.2951
- type: cosine_pearson
value: 83.5938
- type: cosine_spearman
value: 84.2951
- type: manhattan_pearson
value: 83.2541
- type: manhattan_spearman
value: 83.8292
- type: euclidean_pearson
value: 83.69640000000001
- type: euclidean_spearman
value: 84.2951
- type: main_score
value: 84.2951
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: pearson
value: 82.6003
- type: spearman
value: 81.3569
- type: cosine_pearson
value: 82.6003
- type: cosine_spearman
value: 81.357
- type: manhattan_pearson
value: 81.5087
- type: manhattan_spearman
value: 81.17229999999999
- type: euclidean_pearson
value: 81.7147
- type: euclidean_spearman
value: 81.3569
- type: main_score
value: 81.357
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: pearson
value: 86.4161
- type: spearman
value: 87.0039
- type: cosine_pearson
value: 86.4161
- type: cosine_spearman
value: 87.0039
- type: manhattan_pearson
value: 86.2482
- type: manhattan_spearman
value: 86.934
- type: euclidean_pearson
value: 86.3344
- type: euclidean_spearman
value: 87.0039
- type: main_score
value: 87.0039
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 88.6011
- type: spearman
value: 88.1023
- type: cosine_pearson
value: 88.6011
- type: cosine_spearman
value: 88.1023
- type: manhattan_pearson
value: 88.18639999999999
- type: manhattan_spearman
value: 88.55380000000001
- type: euclidean_pearson
value: 88.011
- type: euclidean_spearman
value: 88.1023
- type: main_score
value: 88.1023
- task:
type: STS
dataset:
name: MTEB STS22.v2 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: d31f33a128469b20e357535c39b82fb3c3f6f2bd
metrics:
- type: pearson
value: 65.7746
- type: spearman
value: 64.7997
- type: cosine_pearson
value: 65.7746
- type: cosine_spearman
value: 64.7997
- type: manhattan_pearson
value: 67.5417
- type: manhattan_spearman
value: 65.27629999999999
- type: euclidean_pearson
value: 67.2574
- type: euclidean_spearman
value: 64.7997
- type: main_score
value: 64.7997
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: pearson
value: 84.4276
- type: spearman
value: 84.9631
- type: cosine_pearson
value: 84.4276
- type: cosine_spearman
value: 84.9631
- type: manhattan_pearson
value: 84.4743
- type: manhattan_spearman
value: 84.7686
- type: euclidean_pearson
value: 84.6058
- type: euclidean_spearman
value: 84.9631
- type: main_score
value: 84.9631
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: similarity_accuracy
value: 99.7931
- type: similarity_accuracy_threshold
value: 69.6798
- type: similarity_f1
value: 89.4293
- type: similarity_f1_threshold
value: 68.3132
- type: similarity_precision
value: 88.76849999999999
- type: similarity_recall
value: 90.10000000000001
- type: similarity_ap
value: 94.3099
- type: cosine_accuracy
value: 99.7931
- type: cosine_accuracy_threshold
value: 69.6798
- type: cosine_f1
value: 89.4293
- type: cosine_f1_threshold
value: 68.3132
- type: cosine_precision
value: 88.76849999999999
- type: cosine_recall
value: 90.10000000000001
- type: cosine_ap
value: 94.3099
- type: manhattan_accuracy
value: 99.7792
- type: manhattan_accuracy_threshold
value: 1354.3922
- type: manhattan_f1
value: 88.71289999999999
- type: manhattan_f1_threshold
value: 1389.3319999999999
- type: manhattan_precision
value: 87.84309999999999
- type: manhattan_recall
value: 89.60000000000001
- type: manhattan_ap
value: 93.8459
- type: euclidean_accuracy
value: 99.7931
- type: euclidean_accuracy_threshold
value: 77.872
- type: euclidean_f1
value: 89.4293
- type: euclidean_f1_threshold
value: 79.6075
- type: euclidean_precision
value: 88.76849999999999
- type: euclidean_recall
value: 90.10000000000001
- type: euclidean_ap
value: 94.3099
- type: dot_accuracy
value: 99.7931
- type: dot_accuracy_threshold
value: 69.6798
- type: dot_f1
value: 89.4293
- type: dot_f1_threshold
value: 68.3132
- type: dot_precision
value: 88.76849999999999
- type: dot_recall
value: 90.10000000000001
- type: dot_ap
value: 94.3099
- type: max_accuracy
value: 99.7931
- type: max_f1
value: 89.4293
- type: max_precision
value: 88.76849999999999
- type: max_recall
value: 90.10000000000001
- type: max_ap
value: 94.3099
- type: main_score
value: 94.3099
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering.v2 (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 53.9397
- type: v_measure_std
value: 0.7764
- type: main_score
value: 53.9397
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P.v2 (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 40.6498
- type: v_measure_std
value: 0.439
- type: main_score
value: 40.6498
- task:
type: Summarization
dataset:
name: MTEB SummEvalSummarization.v2 (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: pearson
value: 28.6283
- type: spearman
value: 26.0828
- type: cosine_spearman
value: 26.0828
- type: cosine_pearson
value: 28.6283
- type: dot_spearman
value: 26.0828
- type: dot_pearson
value: 28.6283
- type: main_score
value: 26.0828
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: ndcg_at_1
value: 66
- type: ndcg_at_3
value: 64.592
- type: ndcg_at_5
value: 63.405
- type: ndcg_at_10
value: 60.077999999999996
- type: ndcg_at_20
value: 57.202
- type: ndcg_at_100
value: 44.643
- type: ndcg_at_1000
value: 42.104
- type: map_at_1
value: 0.193
- type: map_at_3
value: 0.514
- type: map_at_5
value: 0.783
- type: map_at_10
value: 1.3719999999999999
- type: map_at_20
value: 2.371
- type: map_at_100
value: 7.353
- type: map_at_1000
value: 17.855999999999998
- type: recall_at_1
value: 0.193
- type: recall_at_3
value: 0.563
- type: recall_at_5
value: 0.907
- type: recall_at_10
value: 1.683
- type: recall_at_20
value: 3.118
- type: recall_at_100
value: 11.051
- type: recall_at_1000
value: 39.973
- type: precision_at_1
value: 74
- type: precision_at_3
value: 71.333
- type: precision_at_5
value: 68.8
- type: precision_at_10
value: 63.800000000000004
- type: precision_at_20
value: 60.5
- type: precision_at_100
value: 45.519999999999996
- type: precision_at_1000
value: 18.451999999999998
- type: mrr_at_1
value: 74
- type: mrr_at_3
value: 83.3333
- type: mrr_at_5
value: 83.7333
- type: mrr_at_10
value: 84.3524
- type: mrr_at_20
value: 84.3524
- type: mrr_at_100
value: 84.3524
- type: mrr_at_1000
value: 84.3524
- type: nauc_ndcg_at_1_max
value: 11.527800000000001
- type: nauc_ndcg_at_1_std
value: 17.1352
- type: nauc_ndcg_at_1_diff1
value: 24.955199999999998
- type: nauc_ndcg_at_3_max
value: 11.7829
- type: nauc_ndcg_at_3_std
value: 23.1421
- type: nauc_ndcg_at_3_diff1
value: 20.884
- type: nauc_ndcg_at_5_max
value: 8.8058
- type: nauc_ndcg_at_5_std
value: 27.9156
- type: nauc_ndcg_at_5_diff1
value: 7.002
- type: nauc_ndcg_at_10_max
value: 16.561
- type: nauc_ndcg_at_10_std
value: 40.528999999999996
- type: nauc_ndcg_at_10_diff1
value: -6.1467
- type: nauc_ndcg_at_20_max
value: 25.0792
- type: nauc_ndcg_at_20_std
value: 54.0689
- type: nauc_ndcg_at_20_diff1
value: -9.6224
- type: nauc_ndcg_at_100_max
value: 43.2818
- type: nauc_ndcg_at_100_std
value: 75.4432
- type: nauc_ndcg_at_100_diff1
value: -11.4618
- type: nauc_ndcg_at_1000_max
value: 50.360099999999996
- type: nauc_ndcg_at_1000_std
value: 76.03999999999999
- type: nauc_ndcg_at_1000_diff1
value: -12.5796
- type: nauc_map_at_1_max
value: 4.3809000000000005
- type: nauc_map_at_1_std
value: -17.5338
- type: nauc_map_at_1_diff1
value: 24.837
- type: nauc_map_at_3_max
value: 4.7842
- type: nauc_map_at_3_std
value: -8.9273
- type: nauc_map_at_3_diff1
value: 19.7729
- type: nauc_map_at_5_max
value: 3.6865
- type: nauc_map_at_5_std
value: -1.1584
- type: nauc_map_at_5_diff1
value: 7.3548
- type: nauc_map_at_10_max
value: 7.556400000000001
- type: nauc_map_at_10_std
value: 11.2599
- type: nauc_map_at_10_diff1
value: -3.4863999999999997
- type: nauc_map_at_20_max
value: 12.6951
- type: nauc_map_at_20_std
value: 27.3531
- type: nauc_map_at_20_diff1
value: -11.968
- type: nauc_map_at_100_max
value: 41.625099999999996
- type: nauc_map_at_100_std
value: 66.5204
- type: nauc_map_at_100_diff1
value: -12.020999999999999
- type: nauc_map_at_1000_max
value: 56.6014
- type: nauc_map_at_1000_std
value: 80.6523
- type: nauc_map_at_1000_diff1
value: -11.9876
- type: nauc_recall_at_1_max
value: 4.3809000000000005
- type: nauc_recall_at_1_std
value: -17.5338
- type: nauc_recall_at_1_diff1
value: 24.837
- type: nauc_recall_at_3_max
value: -0.8904000000000001
- type: nauc_recall_at_3_std
value: -11.2455
- type: nauc_recall_at_3_diff1
value: 17.6352
- type: nauc_recall_at_5_max
value: -4.6216
- type: nauc_recall_at_5_std
value: -3.5367999999999995
- type: nauc_recall_at_5_diff1
value: 3.3192
- type: nauc_recall_at_10_max
value: 1.8993
- type: nauc_recall_at_10_std
value: 6.844600000000001
- type: nauc_recall_at_10_diff1
value: -6.0693
- type: nauc_recall_at_20_max
value: 5.733
- type: nauc_recall_at_20_std
value: 20.6114
- type: nauc_recall_at_20_diff1
value: -11.631
- type: nauc_recall_at_100_max
value: 32.7146
- type: nauc_recall_at_100_std
value: 55.6053
- type: nauc_recall_at_100_diff1
value: -10.7219
- type: nauc_recall_at_1000_max
value: 50.7544
- type: nauc_recall_at_1000_std
value: 68.4639
- type: nauc_recall_at_1000_diff1
value: -10.431600000000001
- type: nauc_precision_at_1_max
value: 13.8681
- type: nauc_precision_at_1_std
value: -3.4711
- type: nauc_precision_at_1_diff1
value: 36.945
- type: nauc_precision_at_3_max
value: 11.6309
- type: nauc_precision_at_3_std
value: 5.0299000000000005
- type: nauc_precision_at_3_diff1
value: 28.5186
- type: nauc_precision_at_5_max
value: 10.1297
- type: nauc_precision_at_5_std
value: 19.049599999999998
- type: nauc_precision_at_5_diff1
value: 7.918500000000001
- type: nauc_precision_at_10_max
value: 21.3492
- type: nauc_precision_at_10_std
value: 39.6679
- type: nauc_precision_at_10_diff1
value: -10.7691
- type: nauc_precision_at_20_max
value: 32.4627
- type: nauc_precision_at_20_std
value: 57.2564
- type: nauc_precision_at_20_diff1
value: -12.0336
- type: nauc_precision_at_100_max
value: 47.7277
- type: nauc_precision_at_100_std
value: 77.0329
- type: nauc_precision_at_100_diff1
value: -9.2173
- type: nauc_precision_at_1000_max
value: 47.6622
- type: nauc_precision_at_1000_std
value: 62.8329
- type: nauc_precision_at_1000_diff1
value: -5.9713
- type: nauc_mrr_at_1_max
value: 13.8681
- type: nauc_mrr_at_1_std
value: -3.4711
- type: nauc_mrr_at_1_diff1
value: 36.945
- type: nauc_mrr_at_3_max
value: 9.6673
- type: nauc_mrr_at_3_std
value: -4.3877
- type: nauc_mrr_at_3_diff1
value: 39.2075
- type: nauc_mrr_at_5_max
value: 7.9742999999999995
- type: nauc_mrr_at_5_std
value: -4.8388
- type: nauc_mrr_at_5_diff1
value: 38.314
- type: nauc_mrr_at_10_max
value: 11.6962
- type: nauc_mrr_at_10_std
value: -2.7085000000000004
- type: nauc_mrr_at_10_diff1
value: 37.695
- type: nauc_mrr_at_20_max
value: 11.6962
- type: nauc_mrr_at_20_std
value: -2.7085000000000004
- type: nauc_mrr_at_20_diff1
value: 37.695
- type: nauc_mrr_at_100_max
value: 11.6962
- type: nauc_mrr_at_100_std
value: -2.7085000000000004
- type: nauc_mrr_at_100_diff1
value: 37.695
- type: nauc_mrr_at_1000_max
value: 11.6962
- type: nauc_mrr_at_1000_std
value: -2.7085000000000004
- type: nauc_mrr_at_1000_diff1
value: 37.695
- type: main_score
value: 60.077999999999996
- task:
type: Retrieval
dataset:
name: MTEB Touche2020Retrieval.v3 (default)
type: mteb/webis-touche2020-v3
config: default
split: test
revision: 431886eaecc48f067a3975b70d0949ea2862463c
metrics:
- type: ndcg_at_1
value: 58.163
- type: ndcg_at_3
value: 58.884
- type: ndcg_at_5
value: 53.062
- type: ndcg_at_10
value: 47.571999999999996
- type: ndcg_at_20
value: 43.984
- type: ndcg_at_100
value: 51.559999999999995
- type: ndcg_at_1000
value: 64.25800000000001
- type: map_at_1
value: 2.759
- type: map_at_3
value: 7.310999999999999
- type: map_at_5
value: 10.077
- type: map_at_10
value: 15.722
- type: map_at_20
value: 21.917
- type: map_at_100
value: 29.582000000000004
- type: map_at_1000
value: 32.608
- type: recall_at_1
value: 2.759
- type: recall_at_3
value: 7.870000000000001
- type: recall_at_5
value: 11.26
- type: recall_at_10
value: 19.211
- type: recall_at_20
value: 30.134
- type: recall_at_100
value: 54.96
- type: recall_at_1000
value: 85.78099999999999
- type: precision_at_1
value: 67.34700000000001
- type: precision_at_3
value: 68.027
- type: precision_at_5
value: 59.184000000000005
- type: precision_at_10
value: 50.815999999999995
- type: precision_at_20
value: 41.939
- type: precision_at_100
value: 17.041
- type: precision_at_1000
value: 2.963
- type: mrr_at_1
value: 67.3469
- type: mrr_at_3
value: 80.6122
- type: mrr_at_5
value: 80.6122
- type: mrr_at_10
value: 80.9524
- type: mrr_at_20
value: 80.9524
- type: mrr_at_100
value: 80.9524
- type: mrr_at_1000
value: 80.9524
- type: nauc_ndcg_at_1_max
value: -18.7982
- type: nauc_ndcg_at_1_std
value: 13.605500000000001
- type: nauc_ndcg_at_1_diff1
value: 21.2588
- type: nauc_ndcg_at_3_max
value: -9.0937
- type: nauc_ndcg_at_3_std
value: 23.259900000000002
- type: nauc_ndcg_at_3_diff1
value: 24.2989
- type: nauc_ndcg_at_5_max
value: -13.242300000000002
- type: nauc_ndcg_at_5_std
value: 9.7464
- type: nauc_ndcg_at_5_diff1
value: 18.601799999999997
- type: nauc_ndcg_at_10_max
value: -12.045599999999999
- type: nauc_ndcg_at_10_std
value: 7.5604000000000005
- type: nauc_ndcg_at_10_diff1
value: 20.1203
- type: nauc_ndcg_at_20_max
value: -13.2776
- type: nauc_ndcg_at_20_std
value: 8.2692
- type: nauc_ndcg_at_20_diff1
value: 21.38
- type: nauc_ndcg_at_100_max
value: -21.1315
- type: nauc_ndcg_at_100_std
value: 8.4079
- type: nauc_ndcg_at_100_diff1
value: 29.3124
- type: nauc_ndcg_at_1000_max
value: -3.7026999999999997
- type: nauc_ndcg_at_1000_std
value: 34.970600000000005
- type: nauc_ndcg_at_1000_diff1
value: 22.3636
- type: nauc_map_at_1_max
value: -36.432500000000005
- type: nauc_map_at_1_std
value: -23.9669
- type: nauc_map_at_1_diff1
value: 37.2073
- type: nauc_map_at_3_max
value: -32.8613
- type: nauc_map_at_3_std
value: -18.0951
- type: nauc_map_at_3_diff1
value: 36.3228
- type: nauc_map_at_5_max
value: -31.355
- type: nauc_map_at_5_std
value: -21.148500000000002
- type: nauc_map_at_5_diff1
value: 27.999200000000002
- type: nauc_map_at_10_max
value: -25.3787
- type: nauc_map_at_10_std
value: -18.564700000000002
- type: nauc_map_at_10_diff1
value: 24.076800000000002
- type: nauc_map_at_20_max
value: -20.954
- type: nauc_map_at_20_std
value: -12.6847
- type: nauc_map_at_20_diff1
value: 24.3842
- type: nauc_map_at_100_max
value: -15.7801
- type: nauc_map_at_100_std
value: -2.823
- type: nauc_map_at_100_diff1
value: 24.8472
- type: nauc_map_at_1000_max
value: -11.8023
- type: nauc_map_at_1000_std
value: 3.9041
- type: nauc_map_at_1000_diff1
value: 23.3312
- type: nauc_recall_at_1_max
value: -36.432500000000005
- type: nauc_recall_at_1_std
value: -23.9669
- type: nauc_recall_at_1_diff1
value: 37.2073
- type: nauc_recall_at_3_max
value: -36.3448
- type: nauc_recall_at_3_std
value: -18.4742
- type: nauc_recall_at_3_diff1
value: 38.4857
- type: nauc_recall_at_5_max
value: -35.4207
- type: nauc_recall_at_5_std
value: -23.7906
- type: nauc_recall_at_5_diff1
value: 28.3854
- type: nauc_recall_at_10_max
value: -28.4266
- type: nauc_recall_at_10_std
value: -21.3224
- type: nauc_recall_at_10_diff1
value: 27.0746
- type: nauc_recall_at_20_max
value: -23.1205
- type: nauc_recall_at_20_std
value: -12.3539
- type: nauc_recall_at_20_diff1
value: 27.127499999999998
- type: nauc_recall_at_100_max
value: -22.0703
- type: nauc_recall_at_100_std
value: 10.1339
- type: nauc_recall_at_100_diff1
value: 29.759900000000002
- type: nauc_recall_at_1000_max
value: 13.5147
- type: nauc_recall_at_1000_std
value: 78.4907
- type: nauc_recall_at_1000_diff1
value: 12.151
- type: nauc_precision_at_1_max
value: -20.1082
- type: nauc_precision_at_1_std
value: 13.5123
- type: nauc_precision_at_1_diff1
value: 16.7562
- type: nauc_precision_at_3_max
value: -11.2979
- type: nauc_precision_at_3_std
value: 23.0876
- type: nauc_precision_at_3_diff1
value: 20.738
- type: nauc_precision_at_5_max
value: -18.1198
- type: nauc_precision_at_5_std
value: -2.4168
- type: nauc_precision_at_5_diff1
value: 5.1223
- type: nauc_precision_at_10_max
value: -4.7656
- type: nauc_precision_at_10_std
value: 1.5377
- type: nauc_precision_at_10_diff1
value: 8.2175
- type: nauc_precision_at_20_max
value: 7.571999999999999
- type: nauc_precision_at_20_std
value: 17.309
- type: nauc_precision_at_20_diff1
value: 5.2156
- type: nauc_precision_at_100_max
value: 35.02
- type: nauc_precision_at_100_std
value: 57.2867
- type: nauc_precision_at_100_diff1
value: -12.814200000000001
- type: nauc_precision_at_1000_max
value: 54.8988
- type: nauc_precision_at_1000_std
value: 55.970699999999994
- type: nauc_precision_at_1000_diff1
value: -36.8074
- type: nauc_mrr_at_1_max
value: -20.1082
- type: nauc_mrr_at_1_std
value: 13.5123
- type: nauc_mrr_at_1_diff1
value: 16.7562
- type: nauc_mrr_at_3_max
value: -23.668300000000002
- type: nauc_mrr_at_3_std
value: 16.883699999999997
- type: nauc_mrr_at_3_diff1
value: 20.6687
- type: nauc_mrr_at_5_max
value: -23.668300000000002
- type: nauc_mrr_at_5_std
value: 16.883699999999997
- type: nauc_mrr_at_5_diff1
value: 20.6687
- type: nauc_mrr_at_10_max
value: -21.8234
- type: nauc_mrr_at_10_std
value: 15.1609
- type: nauc_mrr_at_10_diff1
value: 19.6023
- type: nauc_mrr_at_20_max
value: -21.8234
- type: nauc_mrr_at_20_std
value: 15.1609
- type: nauc_mrr_at_20_diff1
value: 19.6023
- type: nauc_mrr_at_100_max
value: -21.8234
- type: nauc_mrr_at_100_std
value: 15.1609
- type: nauc_mrr_at_100_diff1
value: 19.6023
- type: nauc_mrr_at_1000_max
value: -21.8234
- type: nauc_mrr_at_1000_std
value: 15.1609
- type: nauc_mrr_at_1000_diff1
value: 19.6023
- type: main_score
value: 47.571999999999996
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 63.608399999999996
- type: f1
value: 48.6248
- type: f1_weighted
value: 71.6158
- type: ap
value: 10.9541
- type: ap_weighted
value: 10.9541
- type: main_score
value: 63.608399999999996
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.506499999999996
- type: f1
value: 60.711499999999994
- type: f1_weighted
value: 59.695699999999995
- type: main_score
value: 60.506499999999996
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering.v2 (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 33.5462
- type: v_measure_std
value: 1.3361
- type: main_score
value: 33.5462
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: similarity_accuracy
value: 82.51180000000001
- type: similarity_accuracy_threshold
value: 69.4516
- type: similarity_f1
value: 58.483399999999996
- type: similarity_f1_threshold
value: 61.3852
- type: similarity_precision
value: 56.29880000000001
- type: similarity_recall
value: 60.8443
- type: similarity_ap
value: 61.8784
- type: cosine_accuracy
value: 82.51180000000001
- type: cosine_accuracy_threshold
value: 69.4516
- type: cosine_f1
value: 58.483399999999996
- type: cosine_f1_threshold
value: 61.3852
- type: cosine_precision
value: 56.29880000000001
- type: cosine_recall
value: 60.8443
- type: cosine_ap
value: 61.8784
- type: manhattan_accuracy
value: 82.60119999999999
- type: manhattan_accuracy_threshold
value: 1395.2354
- type: manhattan_f1
value: 59.3387
- type: manhattan_f1_threshold
value: 1544.4108
- type: manhattan_precision
value: 56.284
- type: manhattan_recall
value: 62.7441
- type: manhattan_ap
value: 62.407999999999994
- type: euclidean_accuracy
value: 82.51180000000001
- type: euclidean_accuracy_threshold
value: 78.1645
- type: euclidean_f1
value: 58.483399999999996
- type: euclidean_f1_threshold
value: 87.88040000000001
- type: euclidean_precision
value: 56.29880000000001
- type: euclidean_recall
value: 60.8443
- type: euclidean_ap
value: 61.8784
- type: dot_accuracy
value: 82.51180000000001
- type: dot_accuracy_threshold
value: 69.4516
- type: dot_f1
value: 58.483399999999996
- type: dot_f1_threshold
value: 61.3852
- type: dot_precision
value: 56.29880000000001
- type: dot_recall
value: 60.8443
- type: dot_ap
value: 61.8784
- type: max_accuracy
value: 82.60119999999999
- type: max_f1
value: 59.3387
- type: max_precision
value: 56.29880000000001
- type: max_recall
value: 62.7441
- type: max_ap
value: 62.407999999999994
- type: main_score
value: 62.407999999999994
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: similarity_accuracy
value: 87.84880000000001
- type: similarity_accuracy_threshold
value: 62.77890000000001
- type: similarity_f1
value: 75.968
- type: similarity_f1_threshold
value: 57.5925
- type: similarity_precision
value: 71.909
- type: similarity_recall
value: 80.5128
- type: similarity_ap
value: 83.6557
- type: cosine_accuracy
value: 87.84880000000001
- type: cosine_accuracy_threshold
value: 62.77890000000001
- type: cosine_f1
value: 75.968
- type: cosine_f1_threshold
value: 57.5925
- type: cosine_precision
value: 71.909
- type: cosine_recall
value: 80.5128
- type: cosine_ap
value: 83.6557
- type: manhattan_accuracy
value: 87.69940000000001
- type: manhattan_accuracy_threshold
value: 1524.1733
- type: manhattan_f1
value: 76.01830000000001
- type: manhattan_f1_threshold
value: 1597.1845
- type: manhattan_precision
value: 72.981
- type: manhattan_recall
value: 79.3194
- type: manhattan_ap
value: 83.63629999999999
- type: euclidean_accuracy
value: 87.84880000000001
- type: euclidean_accuracy_threshold
value: 86.2799
- type: euclidean_f1
value: 75.968
- type: euclidean_f1_threshold
value: 92.0951
- type: euclidean_precision
value: 71.909
- type: euclidean_recall
value: 80.5128
- type: euclidean_ap
value: 83.6557
- type: dot_accuracy
value: 87.84880000000001
- type: dot_accuracy_threshold
value: 62.77890000000001
- type: dot_f1
value: 75.968
- type: dot_f1_threshold
value: 57.5925
- type: dot_precision
value: 71.909
- type: dot_recall
value: 80.5128
- type: dot_ap
value: 83.6557
- type: max_accuracy
value: 87.84880000000001
- type: max_f1
value: 76.01830000000001
- type: max_precision
value: 72.981
- type: max_recall
value: 80.5128
- type: max_ap
value: 83.6557
- type: main_score
value: 83.6557
---
# RetrievaEmbedding-01: AMBER
**AMBER (Adaptive Multitask Bilingual Embedding Representations)** is a text embedding model trained by Retrieva, Inc.
The model is primarily designed for Japanese, but it also supports English.
We trained it on a variety of Japanese and English datasets.
The model has 132M parameters (base size).
## Model Details
### Model Description
The AMBER model is a text embedding model based on the [sbintuitions/modernbert-ja-130m](https://huggingface.co/sbintuitions/modernbert-ja-130m) architecture, designed for Japanese text.
The model was trained on a variety of Japanese datasets together with English datasets, so it can be used for English text as well.
During training, prompts (instructions) in natural language were included, allowing the model to generate embeddings tailored to specific tasks.
- **Developed by:** Retrieva, Inc.
- **Model type:** Based on the [ModernBERT](https://arxiv.org/abs/2412.13663) Architecture.
- **Language(s) (NLP):** Primarily Japanese (optional support for English).
- **License:** Apache 2.0
- **Finetuned from model:** `sbintuitions/modernbert-ja-130m`
- **Model Type:** Sentence Transformer
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 512 dimensions
- **Similarity Function:** Cosine Similarity
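As a quick sanity check of the specifications above, the following snippet loads the model and reads its embedding dimensionality and maximum sequence length through the standard Sentence Transformers attributes. This is a minimal sketch, assuming the model can be downloaded from the Hugging Face Hub.
```python
from sentence_transformers import SentenceTransformer

# Minimal sketch: confirm the specs listed above.
model = SentenceTransformer("retrieva-jp/amber-base")

print(model.get_sentence_embedding_dimension())  # expected per this card: 512 (output dimensionality)
print(model.max_seq_length)                      # expected per this card: 512 (maximum sequence length)
```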
## Uses
## How to Get Started with the Model
### Install Library
First, install the required Python libraries using pip:
```bash
pip install sentence-transformers sentencepiece
```
### Run Inference
Then you can load the model and run inference.
You can specify a prompt at inference time by passing the `prompt` (or `prompt_name`) argument to `model.encode`.
The prompts used in the Japanese benchmark are defined in `jmteb/tasks`, and the prompts used in the English benchmark are defined in `mteb/models/retrieva_en.py`.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("retrieva-jp/amber-base")
# Run inference
queries = [
"自然言語処理とはなんですか?",
"株式会社レトリバについて教えて",
]
documents = [
"自然言語処理(しぜんげんごしょり、英語: Natural language processing、略称:NLP)は、人間が日常的に使っている自然言語をコンピュータに処理させる一連の技術であり、人工知能と言語学の一分野である。",
"株式会社レトリバは、自然言語処理と機械学習を核としたAI技術で組織の課題解決を支援するテクノロジー企業である。",
]
queries_embeddings = model.encode(queries, prompt_name="Retrieval-query")
documents_embeddings = model.encode(documents, prompt_name="Retrieval-passage")
similarities = model.similarity(queries_embeddings, documents_embeddings)
print(similarities.shape)
```
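`model.encode` also accepts a raw `prompt` string that is simply prepended to each input, as an alternative to the named prompts stored in `config_sentence_transformers.json`. The sketch below is illustrative only: the prompt wording is a placeholder rather than the exact string shipped with the model, and it continues from the example above.
```python
# Continuing from the example above (model, queries, and documents are already defined).
# Illustrative only: the prompt string below is a placeholder; the exact strings
# used for each task are defined in config_sentence_transformers.json.
query_embeddings = model.encode(queries, prompt="Retrieve passages relevant to the query: ")
document_embeddings = model.encode(documents, prompt_name="Retrieval-passage")

similarities = model.similarity(query_embeddings, document_embeddings)
best_doc_indices = similarities.argmax(dim=1)  # index of the most similar document per query
print(best_doc_indices)
```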
## Training Details
### Training Data
We used multiple datasets to train this model.
For Japanese, we selected datasets from [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval), [llm-japanese-dataset](https://github.com/masanorihirano/llm-japanese-dataset), and [hpprc/emb](https://huggingface.co/datasets/hpprc/emb).
For English, we mainly used some of the datasets utilized in [Asai et al. (2023)](https://arxiv.org/abs/2211.09260).
Additionally, we used parts of the English datasets available in [the sentence-transformers repository](https://huggingface.co/sentence-transformers) and [kilt-tasks](https://huggingface.co/datasets/facebook/kilt_tasks).
To support cross-lingual use between Japanese and English, we also used Japanese-English translation datasets.
For Japanese, we additionally used synthetic data generated by an LLM to ensure a sufficient amount of training data.
## Evaluation
We evaluated the model on the following benchmarks:
- Japanese Benchmark: [JMTEB](https://github.com/sbintuitions/JMTEB)
- Japanese Retrieval Tasks: [JQaRA](https://github.com/hotchpotch/JQaRA/), [JaCWIR](https://github.com/hotchpotch/JaCWIR/), [MLDR Japanese Subset](https://huggingface.co/datasets/Shitao/MLDR)
- English Benchmark: [MTEB(eng, v2)](https://github.com/embeddings-benchmark/mteb).
The scores in the tables below were all calculated by us unless otherwise noted.
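As a rough illustration of how the English-side numbers can be reproduced, the sketch below evaluates the model on a single MTEB task with the `mteb` package. This is a minimal sketch assuming the current `mteb` Python API; the task choice is arbitrary, and the model-specific prompt wrappers in `mteb/models/retrieva_en.py` are not applied here, so scores may differ slightly from the tables below.
```python
import mteb
from sentence_transformers import SentenceTransformer

# Minimal sketch (current mteb API assumed): run one English task end to end.
model = SentenceTransformer("retrieva-jp/amber-base")
tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/amber-base")
print(results)
```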
### Japanese Benchmark: JMTEB
Note that the `Mean (TaskType)` in the following leaderboard is the same as the `Avg.` in the original JMTEB leaderboard.
The files used for evaluation are stored in the `jmteb` directory.
| Model | # Parameters | Mean (TaskType) | Mean (Task) | Retrieval | STS | Classification | Reranking | Clustering | PairClassification |
| :--- | --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| base models | < 300M | | | | | | | | |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 111M | 72.60 | 71.56 | 69.53 | 82.87 | 75.49 | 92.91 | 52.40 | 62.38 |
| AMBER-base <br> (this model) | 130M | 72.12 | 72.12 | **73.40** | 77.81 | **76.14** | **93.27** | 48.05 | **64.03** |
| [pkshatech/GLuCoSE-base-ja-v2](https://huggingface.co/pkshatech/GLuCoSE-base-ja-v2) | 133M | **72.89** | **72.47** | 73.03 | **82.96** | 74.02 | 93.01 | 51.96 | 62.37 |
| [pkshatech/RoSEtta-base-ja](https://huggingface.co/pkshatech/RoSEtta-base-ja) | 190M | 72.49 | 72.05 | 73.14 | 81.39 | 72.37 | 92.69 | **53.60** | 61.74 |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 278M | 71.11 | 69.72 | 69.45 | 80.45 | 69.86 | 92.90 | 51.62 | 62.35 |
| large models | 300M < | | | | | | | | |
| [AMBER-large](https://huggingface.co/retrieva-jp/amber-large) | 315M | 72.52 | **73.22** | **75.40** | 79.32 | 77.14 | **93.54** | 48.73 | 60.97 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 337M | **73.20** | 73.06 | 72.86 | **83.14** | **77.15** | 93.00 | 50.78 | 62.29 |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 560M | 72.06 | 71.29 | 71.71 | 80.87 | 72.45 | 93.29 | **51.59** | **62.42** |
### Japanese Retrieval Tasks: JQaRA, JaCWIR, MLDR Japanese Subset
The files used for MLDR are stored in the `mldr` directory.
The prompts used in JQaRA and JaCWIR are `Retrieval-query` and `Retrieval-passage` described in `config_sentence_transformers.json`.
| Model | # Parameters | JQaRA (nDCG@10) | JaCWIR (MAP@10) | MLDR Japanese Subset (nDCG@10) |
| :--- | --- | ---: | ---: | ---: |
| base models | < 300M | | | |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 111M | 58.4 | 83.3 | 32.77 |
| AMBER-base <br> (this model) | 130M | 57.1 | 81.6 | **35.69** |
| [pkshatech/GLuCoSE-base-ja-v2](https://huggingface.co/pkshatech/GLuCoSE-base-ja-v2) | 133M | **60.6** | **85.3** | 33.99 |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 278M | 47.1 | **85.3** | 25.46 |
| large models | 300M < | | | |
| [AMBER-large](https://huggingface.co/retrieva-jp/amber-large) | 315M | 62.5 | 82.4 | 34.57 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 337M | **62.8** | 82.5 | **34.78** |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 560M | 55.4 | **87.3** | 29.95 |
### English Benchmark: MTEB(eng, v2)
The files used for evaluation are stored in the `mteb` directory.
| Model | # Parameters | Mean (TaskType) | Mean (Task) | Retrieval | STS | Classification | Reranking | Clustering | PairClassification | Summarization |
| :--- | --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| base models | < 300M | | | | | | | | | |
| AMBER-base <br> (this model) | 130M | 54.75 | 58.20 | 40.11 | **81.29** | 70.39 | 42.98 | **42.27** | 80.12 | 26.08 |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 278M | **56.21** | **59.75** | **43.22** | 80.50 | **73.84** | **43.87** | 42.19 | **83.74** | **26.10** |
| large models | 300M < | | | | | | | | | |
| [AMBER-large](https://huggingface.co/retrieva-jp/amber-large) | 315M | 56.08 | 59.13 | 41.04 | **81.52** | 72.23 | 43.83 | **42.71** | 81.00 | **30.21** |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 560M | **57.06** | **60.84** | **46.17** | 81.11 | **74.88** | **44.31** | 41.91 | **84.33** | 26.67 |
## More Information
TBA
## Model Card Authors
Satoru Katsumata, Daisuke Kimura, Jiro Nishitoba
## Model Card Contact
pr[at]retrieva.jp | [
"TRANSLATION",
"SUMMARIZATION"
] | [
"BIOSSES"
] |
afrideva/GIST-all-MiniLM-L6-v2-GGUF | afrideva | text-generation | [
"sentence-transformers",
"gguf",
"feature-extraction",
"mteb",
"sentence-similarity",
"ggml",
"quantized",
"text-generation",
"en",
"arxiv:2402.16829",
"arxiv:2212.09741",
"base_model:avsolatorio/GIST-all-MiniLM-L6-v2",
"base_model:quantized:avsolatorio/GIST-all-MiniLM-L6-v2",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | 2024-05-13T01:49:21 | 2024-05-13T01:49:40 | 88 | 0 | ---
base_model: avsolatorio/GIST-all-MiniLM-L6-v2
language:
- en
library_name: sentence-transformers
license: mit
pipeline_tag: text-generation
tags:
- feature-extraction
- mteb
- sentence-similarity
- sentence-transformers
- gguf
- ggml
- quantized
inference: true
model_creator: avsolatorio
quantized_by: afrideva
model-index:
- name: GIST-all-MiniLM-L6-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.8955223880597
- type: ap
value: 35.447605103320775
- type: f1
value: 66.82951715365854
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 87.19474999999998
- type: ap
value: 83.09577890808514
- type: f1
value: 87.13833121762009
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 42.556000000000004
- type: f1
value: 42.236256693772276
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.884999999999998
- type: map_at_10
value: 42.364000000000004
- type: map_at_100
value: 43.382
- type: map_at_1000
value: 43.391000000000005
- type: map_at_3
value: 37.162
- type: map_at_5
value: 40.139
- type: mrr_at_1
value: 26.884999999999998
- type: mrr_at_10
value: 42.193999999999996
- type: mrr_at_100
value: 43.211
- type: mrr_at_1000
value: 43.221
- type: mrr_at_3
value: 36.949
- type: mrr_at_5
value: 40.004
- type: ndcg_at_1
value: 26.884999999999998
- type: ndcg_at_10
value: 51.254999999999995
- type: ndcg_at_100
value: 55.481
- type: ndcg_at_1000
value: 55.68300000000001
- type: ndcg_at_3
value: 40.565
- type: ndcg_at_5
value: 45.882
- type: precision_at_1
value: 26.884999999999998
- type: precision_at_10
value: 7.9799999999999995
- type: precision_at_100
value: 0.98
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.808999999999997
- type: precision_at_5
value: 12.645999999999999
- type: recall_at_1
value: 26.884999999999998
- type: recall_at_10
value: 79.801
- type: recall_at_100
value: 98.009
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 50.427
- type: recall_at_5
value: 63.229
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.31044837358167
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.44751738734691
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.96517580629869
- type: mrr
value: 76.30051004704744
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.97262600499639
- type: cos_sim_spearman
value: 81.25787561220484
- type: euclidean_pearson
value: 64.96260261677082
- type: euclidean_spearman
value: 64.17616109254686
- type: manhattan_pearson
value: 65.05620628102835
- type: manhattan_spearman
value: 64.71171546419122
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.2435064935065
- type: f1
value: 84.2334859253828
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.38358435972693
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.093619653843124
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.016999999999996
- type: map_at_10
value: 47.019
- type: map_at_100
value: 48.634
- type: map_at_1000
value: 48.757
- type: map_at_3
value: 43.372
- type: map_at_5
value: 45.314
- type: mrr_at_1
value: 43.491
- type: mrr_at_10
value: 53.284
- type: mrr_at_100
value: 54.038
- type: mrr_at_1000
value: 54.071000000000005
- type: mrr_at_3
value: 51.001
- type: mrr_at_5
value: 52.282
- type: ndcg_at_1
value: 43.491
- type: ndcg_at_10
value: 53.498999999999995
- type: ndcg_at_100
value: 58.733999999999995
- type: ndcg_at_1000
value: 60.307
- type: ndcg_at_3
value: 48.841
- type: ndcg_at_5
value: 50.76199999999999
- type: precision_at_1
value: 43.491
- type: precision_at_10
value: 10.315000000000001
- type: precision_at_100
value: 1.6209999999999998
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 23.462
- type: precision_at_5
value: 16.652
- type: recall_at_1
value: 35.016999999999996
- type: recall_at_10
value: 64.92
- type: recall_at_100
value: 86.605
- type: recall_at_1000
value: 96.174
- type: recall_at_3
value: 50.99
- type: recall_at_5
value: 56.93
- type: map_at_1
value: 29.866
- type: map_at_10
value: 40.438
- type: map_at_100
value: 41.77
- type: map_at_1000
value: 41.913
- type: map_at_3
value: 37.634
- type: map_at_5
value: 39.226
- type: mrr_at_1
value: 37.834
- type: mrr_at_10
value: 46.765
- type: mrr_at_100
value: 47.410000000000004
- type: mrr_at_1000
value: 47.461
- type: mrr_at_3
value: 44.735
- type: mrr_at_5
value: 46.028000000000006
- type: ndcg_at_1
value: 37.834
- type: ndcg_at_10
value: 46.303
- type: ndcg_at_100
value: 50.879
- type: ndcg_at_1000
value: 53.112
- type: ndcg_at_3
value: 42.601
- type: ndcg_at_5
value: 44.384
- type: precision_at_1
value: 37.834
- type: precision_at_10
value: 8.898
- type: precision_at_100
value: 1.4409999999999998
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 20.977
- type: precision_at_5
value: 14.841
- type: recall_at_1
value: 29.866
- type: recall_at_10
value: 56.06100000000001
- type: recall_at_100
value: 75.809
- type: recall_at_1000
value: 89.875
- type: recall_at_3
value: 44.707
- type: recall_at_5
value: 49.846000000000004
- type: map_at_1
value: 38.985
- type: map_at_10
value: 51.165000000000006
- type: map_at_100
value: 52.17
- type: map_at_1000
value: 52.229000000000006
- type: map_at_3
value: 48.089999999999996
- type: map_at_5
value: 49.762
- type: mrr_at_1
value: 44.577
- type: mrr_at_10
value: 54.493
- type: mrr_at_100
value: 55.137
- type: mrr_at_1000
value: 55.167
- type: mrr_at_3
value: 52.079
- type: mrr_at_5
value: 53.518
- type: ndcg_at_1
value: 44.577
- type: ndcg_at_10
value: 56.825
- type: ndcg_at_100
value: 60.842
- type: ndcg_at_1000
value: 62.015
- type: ndcg_at_3
value: 51.699
- type: ndcg_at_5
value: 54.11
- type: precision_at_1
value: 44.577
- type: precision_at_10
value: 9.11
- type: precision_at_100
value: 1.206
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 23.156
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 38.985
- type: recall_at_10
value: 70.164
- type: recall_at_100
value: 87.708
- type: recall_at_1000
value: 95.979
- type: recall_at_3
value: 56.285
- type: recall_at_5
value: 62.303
- type: map_at_1
value: 28.137
- type: map_at_10
value: 36.729
- type: map_at_100
value: 37.851
- type: map_at_1000
value: 37.932
- type: map_at_3
value: 34.074
- type: map_at_5
value: 35.398
- type: mrr_at_1
value: 30.621
- type: mrr_at_10
value: 39.007
- type: mrr_at_100
value: 39.961
- type: mrr_at_1000
value: 40.02
- type: mrr_at_3
value: 36.591
- type: mrr_at_5
value: 37.806
- type: ndcg_at_1
value: 30.621
- type: ndcg_at_10
value: 41.772
- type: ndcg_at_100
value: 47.181
- type: ndcg_at_1000
value: 49.053999999999995
- type: ndcg_at_3
value: 36.577
- type: ndcg_at_5
value: 38.777
- type: precision_at_1
value: 30.621
- type: precision_at_10
value: 6.372999999999999
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 15.367
- type: precision_at_5
value: 10.531
- type: recall_at_1
value: 28.137
- type: recall_at_10
value: 55.162
- type: recall_at_100
value: 79.931
- type: recall_at_1000
value: 93.67
- type: recall_at_3
value: 41.057
- type: recall_at_5
value: 46.327
- type: map_at_1
value: 16.798
- type: map_at_10
value: 25.267
- type: map_at_100
value: 26.579000000000004
- type: map_at_1000
value: 26.697
- type: map_at_3
value: 22.456
- type: map_at_5
value: 23.912
- type: mrr_at_1
value: 20.771
- type: mrr_at_10
value: 29.843999999999998
- type: mrr_at_100
value: 30.849
- type: mrr_at_1000
value: 30.916
- type: mrr_at_3
value: 27.156000000000002
- type: mrr_at_5
value: 28.518
- type: ndcg_at_1
value: 20.771
- type: ndcg_at_10
value: 30.792
- type: ndcg_at_100
value: 36.945
- type: ndcg_at_1000
value: 39.619
- type: ndcg_at_3
value: 25.52
- type: ndcg_at_5
value: 27.776
- type: precision_at_1
value: 20.771
- type: precision_at_10
value: 5.734
- type: precision_at_100
value: 1.031
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.148
- type: precision_at_5
value: 9.055
- type: recall_at_1
value: 16.798
- type: recall_at_10
value: 43.332
- type: recall_at_100
value: 70.016
- type: recall_at_1000
value: 88.90400000000001
- type: recall_at_3
value: 28.842000000000002
- type: recall_at_5
value: 34.37
- type: map_at_1
value: 31.180000000000003
- type: map_at_10
value: 41.78
- type: map_at_100
value: 43.102000000000004
- type: map_at_1000
value: 43.222
- type: map_at_3
value: 38.505
- type: map_at_5
value: 40.443
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.481
- type: mrr_at_100
value: 48.268
- type: mrr_at_1000
value: 48.313
- type: mrr_at_3
value: 44.946999999999996
- type: mrr_at_5
value: 46.492
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.827
- type: ndcg_at_100
value: 53.407000000000004
- type: ndcg_at_1000
value: 55.321
- type: ndcg_at_3
value: 42.815
- type: ndcg_at_5
value: 45.363
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.652999999999999
- type: precision_at_100
value: 1.354
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 20.372
- type: precision_at_5
value: 14.591000000000001
- type: recall_at_1
value: 31.180000000000003
- type: recall_at_10
value: 59.894000000000005
- type: recall_at_100
value: 83.722
- type: recall_at_1000
value: 95.705
- type: recall_at_3
value: 45.824
- type: recall_at_5
value: 52.349999999999994
- type: map_at_1
value: 24.66
- type: map_at_10
value: 34.141
- type: map_at_100
value: 35.478
- type: map_at_1000
value: 35.594
- type: map_at_3
value: 30.446
- type: map_at_5
value: 32.583
- type: mrr_at_1
value: 29.909000000000002
- type: mrr_at_10
value: 38.949
- type: mrr_at_100
value: 39.803
- type: mrr_at_1000
value: 39.867999999999995
- type: mrr_at_3
value: 35.921
- type: mrr_at_5
value: 37.753
- type: ndcg_at_1
value: 29.909000000000002
- type: ndcg_at_10
value: 40.012
- type: ndcg_at_100
value: 45.707
- type: ndcg_at_1000
value: 48.15
- type: ndcg_at_3
value: 34.015
- type: ndcg_at_5
value: 37.002
- type: precision_at_1
value: 29.909000000000002
- type: precision_at_10
value: 7.693999999999999
- type: precision_at_100
value: 1.2229999999999999
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 16.323999999999998
- type: precision_at_5
value: 12.306000000000001
- type: recall_at_1
value: 24.66
- type: recall_at_10
value: 52.478
- type: recall_at_100
value: 77.051
- type: recall_at_1000
value: 93.872
- type: recall_at_3
value: 36.382999999999996
- type: recall_at_5
value: 43.903999999999996
- type: map_at_1
value: 26.768416666666667
- type: map_at_10
value: 36.2485
- type: map_at_100
value: 37.520833333333336
- type: map_at_1000
value: 37.64033333333334
- type: map_at_3
value: 33.25791666666667
- type: map_at_5
value: 34.877250000000004
- type: mrr_at_1
value: 31.65408333333334
- type: mrr_at_10
value: 40.43866666666667
- type: mrr_at_100
value: 41.301249999999996
- type: mrr_at_1000
value: 41.357499999999995
- type: mrr_at_3
value: 37.938916666666664
- type: mrr_at_5
value: 39.35183333333334
- type: ndcg_at_1
value: 31.65408333333334
- type: ndcg_at_10
value: 41.76983333333334
- type: ndcg_at_100
value: 47.138
- type: ndcg_at_1000
value: 49.33816666666667
- type: ndcg_at_3
value: 36.76683333333333
- type: ndcg_at_5
value: 39.04441666666666
- type: precision_at_1
value: 31.65408333333334
- type: precision_at_10
value: 7.396249999999998
- type: precision_at_100
value: 1.1974166666666666
- type: precision_at_1000
value: 0.15791666666666668
- type: precision_at_3
value: 16.955583333333333
- type: precision_at_5
value: 12.09925
- type: recall_at_1
value: 26.768416666666667
- type: recall_at_10
value: 53.82366666666667
- type: recall_at_100
value: 77.39600000000002
- type: recall_at_1000
value: 92.46300000000001
- type: recall_at_3
value: 39.90166666666667
- type: recall_at_5
value: 45.754000000000005
- type: map_at_1
value: 24.369
- type: map_at_10
value: 32.025
- type: map_at_100
value: 33.08
- type: map_at_1000
value: 33.169
- type: map_at_3
value: 29.589
- type: map_at_5
value: 30.894
- type: mrr_at_1
value: 27.301
- type: mrr_at_10
value: 34.64
- type: mrr_at_100
value: 35.556
- type: mrr_at_1000
value: 35.616
- type: mrr_at_3
value: 32.515
- type: mrr_at_5
value: 33.666000000000004
- type: ndcg_at_1
value: 27.301
- type: ndcg_at_10
value: 36.386
- type: ndcg_at_100
value: 41.598
- type: ndcg_at_1000
value: 43.864999999999995
- type: ndcg_at_3
value: 32.07
- type: ndcg_at_5
value: 34.028999999999996
- type: precision_at_1
value: 27.301
- type: precision_at_10
value: 5.782
- type: precision_at_100
value: 0.923
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 13.804
- type: precision_at_5
value: 9.693
- type: recall_at_1
value: 24.369
- type: recall_at_10
value: 47.026
- type: recall_at_100
value: 70.76400000000001
- type: recall_at_1000
value: 87.705
- type: recall_at_3
value: 35.366
- type: recall_at_5
value: 40.077
- type: map_at_1
value: 17.878
- type: map_at_10
value: 25.582
- type: map_at_100
value: 26.848
- type: map_at_1000
value: 26.985
- type: map_at_3
value: 22.997
- type: map_at_5
value: 24.487000000000002
- type: mrr_at_1
value: 22.023
- type: mrr_at_10
value: 29.615000000000002
- type: mrr_at_100
value: 30.656
- type: mrr_at_1000
value: 30.737
- type: mrr_at_3
value: 27.322999999999997
- type: mrr_at_5
value: 28.665000000000003
- type: ndcg_at_1
value: 22.023
- type: ndcg_at_10
value: 30.476999999999997
- type: ndcg_at_100
value: 36.258
- type: ndcg_at_1000
value: 39.287
- type: ndcg_at_3
value: 25.995
- type: ndcg_at_5
value: 28.174
- type: precision_at_1
value: 22.023
- type: precision_at_10
value: 5.657
- type: precision_at_100
value: 1.01
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 12.491
- type: precision_at_5
value: 9.112
- type: recall_at_1
value: 17.878
- type: recall_at_10
value: 41.155
- type: recall_at_100
value: 66.62599999999999
- type: recall_at_1000
value: 88.08200000000001
- type: recall_at_3
value: 28.505000000000003
- type: recall_at_5
value: 34.284
- type: map_at_1
value: 26.369999999999997
- type: map_at_10
value: 36.115
- type: map_at_100
value: 37.346000000000004
- type: map_at_1000
value: 37.449
- type: map_at_3
value: 32.976
- type: map_at_5
value: 34.782000000000004
- type: mrr_at_1
value: 30.784
- type: mrr_at_10
value: 40.014
- type: mrr_at_100
value: 40.913
- type: mrr_at_1000
value: 40.967999999999996
- type: mrr_at_3
value: 37.205
- type: mrr_at_5
value: 38.995999999999995
- type: ndcg_at_1
value: 30.784
- type: ndcg_at_10
value: 41.797000000000004
- type: ndcg_at_100
value: 47.355000000000004
- type: ndcg_at_1000
value: 49.535000000000004
- type: ndcg_at_3
value: 36.29
- type: ndcg_at_5
value: 39.051
- type: precision_at_1
value: 30.784
- type: precision_at_10
value: 7.164
- type: precision_at_100
value: 1.122
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_3
value: 16.636
- type: precision_at_5
value: 11.996
- type: recall_at_1
value: 26.369999999999997
- type: recall_at_10
value: 55.010000000000005
- type: recall_at_100
value: 79.105
- type: recall_at_1000
value: 94.053
- type: recall_at_3
value: 40.139
- type: recall_at_5
value: 47.089
- type: map_at_1
value: 26.421
- type: map_at_10
value: 35.253
- type: map_at_100
value: 36.97
- type: map_at_1000
value: 37.195
- type: map_at_3
value: 32.068000000000005
- type: map_at_5
value: 33.763
- type: mrr_at_1
value: 31.423000000000002
- type: mrr_at_10
value: 39.995999999999995
- type: mrr_at_100
value: 40.977999999999994
- type: mrr_at_1000
value: 41.024
- type: mrr_at_3
value: 36.989
- type: mrr_at_5
value: 38.629999999999995
- type: ndcg_at_1
value: 31.423000000000002
- type: ndcg_at_10
value: 41.382000000000005
- type: ndcg_at_100
value: 47.532000000000004
- type: ndcg_at_1000
value: 49.829
- type: ndcg_at_3
value: 35.809000000000005
- type: ndcg_at_5
value: 38.308
- type: precision_at_1
value: 31.423000000000002
- type: precision_at_10
value: 7.885000000000001
- type: precision_at_100
value: 1.609
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 16.469
- type: precision_at_5
value: 12.174
- type: recall_at_1
value: 26.421
- type: recall_at_10
value: 53.618
- type: recall_at_100
value: 80.456
- type: recall_at_1000
value: 94.505
- type: recall_at_3
value: 37.894
- type: recall_at_5
value: 44.352999999999994
- type: map_at_1
value: 21.54
- type: map_at_10
value: 29.468
- type: map_at_100
value: 30.422
- type: map_at_1000
value: 30.542
- type: map_at_3
value: 26.888
- type: map_at_5
value: 27.962999999999997
- type: mrr_at_1
value: 23.29
- type: mrr_at_10
value: 31.176
- type: mrr_at_100
value: 32.046
- type: mrr_at_1000
value: 32.129000000000005
- type: mrr_at_3
value: 28.804999999999996
- type: mrr_at_5
value: 29.868
- type: ndcg_at_1
value: 23.29
- type: ndcg_at_10
value: 34.166000000000004
- type: ndcg_at_100
value: 39.217999999999996
- type: ndcg_at_1000
value: 41.964
- type: ndcg_at_3
value: 28.970000000000002
- type: ndcg_at_5
value: 30.797
- type: precision_at_1
value: 23.29
- type: precision_at_10
value: 5.489999999999999
- type: precision_at_100
value: 0.874
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.261
- type: precision_at_5
value: 8.503
- type: recall_at_1
value: 21.54
- type: recall_at_10
value: 47.064
- type: recall_at_100
value: 70.959
- type: recall_at_1000
value: 91.032
- type: recall_at_3
value: 32.828
- type: recall_at_5
value: 37.214999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.102
- type: map_at_10
value: 17.469
- type: map_at_100
value: 19.244
- type: map_at_1000
value: 19.435
- type: map_at_3
value: 14.257
- type: map_at_5
value: 16.028000000000002
- type: mrr_at_1
value: 22.866
- type: mrr_at_10
value: 33.535
- type: mrr_at_100
value: 34.583999999999996
- type: mrr_at_1000
value: 34.622
- type: mrr_at_3
value: 29.946
- type: mrr_at_5
value: 32.157000000000004
- type: ndcg_at_1
value: 22.866
- type: ndcg_at_10
value: 25.16
- type: ndcg_at_100
value: 32.347
- type: ndcg_at_1000
value: 35.821
- type: ndcg_at_3
value: 19.816
- type: ndcg_at_5
value: 22.026
- type: precision_at_1
value: 22.866
- type: precision_at_10
value: 8.072
- type: precision_at_100
value: 1.5709999999999997
- type: precision_at_1000
value: 0.22200000000000003
- type: precision_at_3
value: 14.701
- type: precision_at_5
value: 11.960999999999999
- type: recall_at_1
value: 10.102
- type: recall_at_10
value: 31.086000000000002
- type: recall_at_100
value: 55.896
- type: recall_at_1000
value: 75.375
- type: recall_at_3
value: 18.343999999999998
- type: recall_at_5
value: 24.102
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.961
- type: map_at_10
value: 16.058
- type: map_at_100
value: 21.878
- type: map_at_1000
value: 23.156
- type: map_at_3
value: 12.206999999999999
- type: map_at_5
value: 13.747000000000002
- type: mrr_at_1
value: 60.5
- type: mrr_at_10
value: 68.488
- type: mrr_at_100
value: 69.02199999999999
- type: mrr_at_1000
value: 69.03200000000001
- type: mrr_at_3
value: 66.792
- type: mrr_at_5
value: 67.62899999999999
- type: ndcg_at_1
value: 49.125
- type: ndcg_at_10
value: 34.827999999999996
- type: ndcg_at_100
value: 38.723
- type: ndcg_at_1000
value: 45.988
- type: ndcg_at_3
value: 40.302
- type: ndcg_at_5
value: 36.781000000000006
- type: precision_at_1
value: 60.5
- type: precision_at_10
value: 26.825
- type: precision_at_100
value: 8.445
- type: precision_at_1000
value: 1.7000000000000002
- type: precision_at_3
value: 43.25
- type: precision_at_5
value: 34.5
- type: recall_at_1
value: 7.961
- type: recall_at_10
value: 20.843
- type: recall_at_100
value: 43.839
- type: recall_at_1000
value: 67.33
- type: recall_at_3
value: 13.516
- type: recall_at_5
value: 15.956000000000001
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.06000000000001
- type: f1
value: 47.21494728335567
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 56.798
- type: map_at_10
value: 67.644
- type: map_at_100
value: 68.01700000000001
- type: map_at_1000
value: 68.038
- type: map_at_3
value: 65.539
- type: map_at_5
value: 66.912
- type: mrr_at_1
value: 61.221000000000004
- type: mrr_at_10
value: 71.97099999999999
- type: mrr_at_100
value: 72.262
- type: mrr_at_1000
value: 72.27
- type: mrr_at_3
value: 70.052
- type: mrr_at_5
value: 71.324
- type: ndcg_at_1
value: 61.221000000000004
- type: ndcg_at_10
value: 73.173
- type: ndcg_at_100
value: 74.779
- type: ndcg_at_1000
value: 75.229
- type: ndcg_at_3
value: 69.291
- type: ndcg_at_5
value: 71.552
- type: precision_at_1
value: 61.221000000000004
- type: precision_at_10
value: 9.449
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 27.467999999999996
- type: precision_at_5
value: 17.744
- type: recall_at_1
value: 56.798
- type: recall_at_10
value: 85.991
- type: recall_at_100
value: 92.973
- type: recall_at_1000
value: 96.089
- type: recall_at_3
value: 75.576
- type: recall_at_5
value: 81.12
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.323
- type: map_at_10
value: 30.279
- type: map_at_100
value: 32.153999999999996
- type: map_at_1000
value: 32.339
- type: map_at_3
value: 26.336
- type: map_at_5
value: 28.311999999999998
- type: mrr_at_1
value: 35.339999999999996
- type: mrr_at_10
value: 44.931
- type: mrr_at_100
value: 45.818999999999996
- type: mrr_at_1000
value: 45.864
- type: mrr_at_3
value: 42.618
- type: mrr_at_5
value: 43.736999999999995
- type: ndcg_at_1
value: 35.339999999999996
- type: ndcg_at_10
value: 37.852999999999994
- type: ndcg_at_100
value: 44.888
- type: ndcg_at_1000
value: 48.069
- type: ndcg_at_3
value: 34.127
- type: ndcg_at_5
value: 35.026
- type: precision_at_1
value: 35.339999999999996
- type: precision_at_10
value: 10.617
- type: precision_at_100
value: 1.7930000000000001
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 22.582
- type: precision_at_5
value: 16.605
- type: recall_at_1
value: 18.323
- type: recall_at_10
value: 44.948
- type: recall_at_100
value: 71.11800000000001
- type: recall_at_1000
value: 90.104
- type: recall_at_3
value: 31.661
- type: recall_at_5
value: 36.498000000000005
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.668
- type: map_at_10
value: 43.669999999999995
- type: map_at_100
value: 44.646
- type: map_at_1000
value: 44.731
- type: map_at_3
value: 40.897
- type: map_at_5
value: 42.559999999999995
- type: mrr_at_1
value: 61.336999999999996
- type: mrr_at_10
value: 68.496
- type: mrr_at_100
value: 68.916
- type: mrr_at_1000
value: 68.938
- type: mrr_at_3
value: 66.90700000000001
- type: mrr_at_5
value: 67.91199999999999
- type: ndcg_at_1
value: 61.336999999999996
- type: ndcg_at_10
value: 52.588
- type: ndcg_at_100
value: 56.389
- type: ndcg_at_1000
value: 58.187999999999995
- type: ndcg_at_3
value: 48.109
- type: ndcg_at_5
value: 50.498
- type: precision_at_1
value: 61.336999999999996
- type: precision_at_10
value: 11.033
- type: precision_at_100
value: 1.403
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 30.105999999999998
- type: precision_at_5
value: 19.954
- type: recall_at_1
value: 30.668
- type: recall_at_10
value: 55.165
- type: recall_at_100
value: 70.169
- type: recall_at_1000
value: 82.12
- type: recall_at_3
value: 45.159
- type: recall_at_5
value: 49.885000000000005
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 78.542
- type: ap
value: 72.50692137216646
- type: f1
value: 78.40630687221642
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 18.613
- type: map_at_10
value: 29.98
- type: map_at_100
value: 31.136999999999997
- type: map_at_1000
value: 31.196
- type: map_at_3
value: 26.339000000000002
- type: map_at_5
value: 28.351
- type: mrr_at_1
value: 19.054
- type: mrr_at_10
value: 30.476
- type: mrr_at_100
value: 31.588
- type: mrr_at_1000
value: 31.641000000000002
- type: mrr_at_3
value: 26.834000000000003
- type: mrr_at_5
value: 28.849000000000004
- type: ndcg_at_1
value: 19.083
- type: ndcg_at_10
value: 36.541000000000004
- type: ndcg_at_100
value: 42.35
- type: ndcg_at_1000
value: 43.9
- type: ndcg_at_3
value: 29.015
- type: ndcg_at_5
value: 32.622
- type: precision_at_1
value: 19.083
- type: precision_at_10
value: 5.914
- type: precision_at_100
value: 0.889
- type: precision_at_1000
value: 0.10200000000000001
- type: precision_at_3
value: 12.483
- type: precision_at_5
value: 9.315
- type: recall_at_1
value: 18.613
- type: recall_at_10
value: 56.88999999999999
- type: recall_at_100
value: 84.207
- type: recall_at_1000
value: 96.20100000000001
- type: recall_at_3
value: 36.262
- type: recall_at_5
value: 44.925
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.77656178750571
- type: f1
value: 94.37966073742972
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.72457820337438
- type: f1
value: 59.11327646329634
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.17753866846
- type: f1
value: 71.22604635414544
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.67787491593813
- type: f1
value: 76.87653151298177
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.3485843514749
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.792796913883617
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.310305659169963
- type: mrr
value: 32.38286775798406
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.968
- type: map_at_10
value: 11.379
- type: map_at_100
value: 14.618999999999998
- type: map_at_1000
value: 16.055
- type: map_at_3
value: 8.34
- type: map_at_5
value: 9.690999999999999
- type: mrr_at_1
value: 43.034
- type: mrr_at_10
value: 51.019999999999996
- type: mrr_at_100
value: 51.63100000000001
- type: mrr_at_1000
value: 51.681
- type: mrr_at_3
value: 49.174
- type: mrr_at_5
value: 50.181
- type: ndcg_at_1
value: 41.176
- type: ndcg_at_10
value: 31.341
- type: ndcg_at_100
value: 29.451
- type: ndcg_at_1000
value: 38.007000000000005
- type: ndcg_at_3
value: 36.494
- type: ndcg_at_5
value: 34.499
- type: precision_at_1
value: 43.034
- type: precision_at_10
value: 23.375
- type: precision_at_100
value: 7.799
- type: precision_at_1000
value: 2.059
- type: precision_at_3
value: 34.675
- type: precision_at_5
value: 30.154999999999998
- type: recall_at_1
value: 4.968
- type: recall_at_10
value: 15.104999999999999
- type: recall_at_100
value: 30.741000000000003
- type: recall_at_1000
value: 61.182
- type: recall_at_3
value: 9.338000000000001
- type: recall_at_5
value: 11.484
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.716
- type: map_at_10
value: 38.32
- type: map_at_100
value: 39.565
- type: map_at_1000
value: 39.602
- type: map_at_3
value: 33.848
- type: map_at_5
value: 36.471
- type: mrr_at_1
value: 26.912000000000003
- type: mrr_at_10
value: 40.607
- type: mrr_at_100
value: 41.589
- type: mrr_at_1000
value: 41.614000000000004
- type: mrr_at_3
value: 36.684
- type: mrr_at_5
value: 39.036
- type: ndcg_at_1
value: 26.883000000000003
- type: ndcg_at_10
value: 46.096
- type: ndcg_at_100
value: 51.513
- type: ndcg_at_1000
value: 52.366
- type: ndcg_at_3
value: 37.549
- type: ndcg_at_5
value: 41.971000000000004
- type: precision_at_1
value: 26.883000000000003
- type: precision_at_10
value: 8.004
- type: precision_at_100
value: 1.107
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 17.516000000000002
- type: precision_at_5
value: 13.019
- type: recall_at_1
value: 23.716
- type: recall_at_10
value: 67.656
- type: recall_at_100
value: 91.413
- type: recall_at_1000
value: 97.714
- type: recall_at_3
value: 45.449
- type: recall_at_5
value: 55.598000000000006
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.486
- type: map_at_10
value: 84.292
- type: map_at_100
value: 84.954
- type: map_at_1000
value: 84.969
- type: map_at_3
value: 81.295
- type: map_at_5
value: 83.165
- type: mrr_at_1
value: 81.16
- type: mrr_at_10
value: 87.31
- type: mrr_at_100
value: 87.423
- type: mrr_at_1000
value: 87.423
- type: mrr_at_3
value: 86.348
- type: mrr_at_5
value: 86.991
- type: ndcg_at_1
value: 81.17
- type: ndcg_at_10
value: 88.067
- type: ndcg_at_100
value: 89.34
- type: ndcg_at_1000
value: 89.43900000000001
- type: ndcg_at_3
value: 85.162
- type: ndcg_at_5
value: 86.752
- type: precision_at_1
value: 81.17
- type: precision_at_10
value: 13.394
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.193
- type: precision_at_5
value: 24.482
- type: recall_at_1
value: 70.486
- type: recall_at_10
value: 95.184
- type: recall_at_100
value: 99.53999999999999
- type: recall_at_1000
value: 99.98700000000001
- type: recall_at_3
value: 86.89
- type: recall_at_5
value: 91.365
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 44.118229475102154
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 48.68049097629063
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.888
- type: map_at_10
value: 12.770999999999999
- type: map_at_100
value: 15.238
- type: map_at_1000
value: 15.616
- type: map_at_3
value: 8.952
- type: map_at_5
value: 10.639999999999999
- type: mrr_at_1
value: 24.099999999999998
- type: mrr_at_10
value: 35.375
- type: mrr_at_100
value: 36.442
- type: mrr_at_1000
value: 36.488
- type: mrr_at_3
value: 31.717000000000002
- type: mrr_at_5
value: 33.722
- type: ndcg_at_1
value: 24.099999999999998
- type: ndcg_at_10
value: 21.438
- type: ndcg_at_100
value: 30.601
- type: ndcg_at_1000
value: 36.678
- type: ndcg_at_3
value: 19.861
- type: ndcg_at_5
value: 17.263
- type: precision_at_1
value: 24.099999999999998
- type: precision_at_10
value: 11.4
- type: precision_at_100
value: 2.465
- type: precision_at_1000
value: 0.392
- type: precision_at_3
value: 18.733
- type: precision_at_5
value: 15.22
- type: recall_at_1
value: 4.888
- type: recall_at_10
value: 23.118
- type: recall_at_100
value: 49.995
- type: recall_at_1000
value: 79.577
- type: recall_at_3
value: 11.398
- type: recall_at_5
value: 15.428
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.33198632617024
- type: cos_sim_spearman
value: 79.09232997136625
- type: euclidean_pearson
value: 81.49986011523868
- type: euclidean_spearman
value: 77.03530620283338
- type: manhattan_pearson
value: 81.4741227286667
- type: manhattan_spearman
value: 76.98641133116311
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.60103674582464
- type: cos_sim_spearman
value: 75.03945035801914
- type: euclidean_pearson
value: 80.82455267481467
- type: euclidean_spearman
value: 70.3317366248871
- type: manhattan_pearson
value: 80.8928091531445
- type: manhattan_spearman
value: 70.43207370945672
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.52453177109315
- type: cos_sim_spearman
value: 83.26431569305103
- type: euclidean_pearson
value: 82.10494657997404
- type: euclidean_spearman
value: 83.41028425949024
- type: manhattan_pearson
value: 82.08669822983934
- type: manhattan_spearman
value: 83.39959776442115
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.67472020277681
- type: cos_sim_spearman
value: 78.61877889763109
- type: euclidean_pearson
value: 80.07878012437722
- type: euclidean_spearman
value: 77.44374494215397
- type: manhattan_pearson
value: 79.95988483102258
- type: manhattan_spearman
value: 77.36018101061366
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.55450610494437
- type: cos_sim_spearman
value: 87.03494331841401
- type: euclidean_pearson
value: 81.4319784394287
- type: euclidean_spearman
value: 82.47893040599372
- type: manhattan_pearson
value: 81.32627203699644
- type: manhattan_spearman
value: 82.40660565070675
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.51576965454805
- type: cos_sim_spearman
value: 83.0062959588245
- type: euclidean_pearson
value: 79.98888882568556
- type: euclidean_spearman
value: 81.08948911791873
- type: manhattan_pearson
value: 79.77952719568583
- type: manhattan_spearman
value: 80.79471040445408
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.28313046682885
- type: cos_sim_spearman
value: 87.35865211085007
- type: euclidean_pearson
value: 84.11501613667811
- type: euclidean_spearman
value: 82.82038954956121
- type: manhattan_pearson
value: 83.891278147302
- type: manhattan_spearman
value: 82.59947685165902
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.80653738006102
- type: cos_sim_spearman
value: 68.11259151179601
- type: euclidean_pearson
value: 43.16707985094242
- type: euclidean_spearman
value: 58.96200382968696
- type: manhattan_pearson
value: 43.84146858566507
- type: manhattan_spearman
value: 59.05193977207514
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.62068205073571
- type: cos_sim_spearman
value: 84.40071593577095
- type: euclidean_pearson
value: 80.90824726252514
- type: euclidean_spearman
value: 80.54974812534094
- type: manhattan_pearson
value: 80.6759008187939
- type: manhattan_spearman
value: 80.31149103896973
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.13774787530915
- type: mrr
value: 96.22233793802422
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 49.167
- type: map_at_10
value: 59.852000000000004
- type: map_at_100
value: 60.544
- type: map_at_1000
value: 60.577000000000005
- type: map_at_3
value: 57.242000000000004
- type: map_at_5
value: 58.704
- type: mrr_at_1
value: 51.0
- type: mrr_at_10
value: 60.575
- type: mrr_at_100
value: 61.144
- type: mrr_at_1000
value: 61.175000000000004
- type: mrr_at_3
value: 58.667
- type: mrr_at_5
value: 59.599999999999994
- type: ndcg_at_1
value: 51.0
- type: ndcg_at_10
value: 64.398
- type: ndcg_at_100
value: 67.581
- type: ndcg_at_1000
value: 68.551
- type: ndcg_at_3
value: 59.928000000000004
- type: ndcg_at_5
value: 61.986
- type: precision_at_1
value: 51.0
- type: precision_at_10
value: 8.7
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 23.666999999999998
- type: precision_at_5
value: 15.6
- type: recall_at_1
value: 49.167
- type: recall_at_10
value: 77.333
- type: recall_at_100
value: 91.833
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 65.594
- type: recall_at_5
value: 70.52199999999999
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.77227722772277
- type: cos_sim_ap
value: 94.14261011689366
- type: cos_sim_f1
value: 88.37209302325581
- type: cos_sim_precision
value: 89.36605316973414
- type: cos_sim_recall
value: 87.4
- type: dot_accuracy
value: 99.07128712871287
- type: dot_ap
value: 27.325649239129486
- type: dot_f1
value: 33.295838020247466
- type: dot_precision
value: 38.04627249357326
- type: dot_recall
value: 29.599999999999998
- type: euclidean_accuracy
value: 99.74158415841585
- type: euclidean_ap
value: 92.32695359979576
- type: euclidean_f1
value: 86.90534575772439
- type: euclidean_precision
value: 85.27430221366699
- type: euclidean_recall
value: 88.6
- type: manhattan_accuracy
value: 99.74257425742574
- type: manhattan_ap
value: 92.40335687760499
- type: manhattan_f1
value: 86.96507624200687
- type: manhattan_precision
value: 85.57599225556632
- type: manhattan_recall
value: 88.4
- type: max_accuracy
value: 99.77227722772277
- type: max_ap
value: 94.14261011689366
- type: max_f1
value: 88.37209302325581
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 53.113809982945035
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.90915908471812
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.36481271702464
- type: mrr
value: 51.05628236142942
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.311305530381826
- type: cos_sim_spearman
value: 31.22029657606254
- type: dot_pearson
value: 12.157032445910177
- type: dot_spearman
value: 13.275185888551805
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.167
- type: map_at_10
value: 1.113
- type: map_at_100
value: 5.926
- type: map_at_1000
value: 15.25
- type: map_at_3
value: 0.414
- type: map_at_5
value: 0.633
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 74.444
- type: mrr_at_100
value: 74.667
- type: mrr_at_1000
value: 74.679
- type: mrr_at_3
value: 72.0
- type: mrr_at_5
value: 74.0
- type: ndcg_at_1
value: 59.0
- type: ndcg_at_10
value: 51.468
- type: ndcg_at_100
value: 38.135000000000005
- type: ndcg_at_1000
value: 36.946
- type: ndcg_at_3
value: 55.827000000000005
- type: ndcg_at_5
value: 53.555
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 54.400000000000006
- type: precision_at_100
value: 39.08
- type: precision_at_1000
value: 16.618
- type: precision_at_3
value: 58.667
- type: precision_at_5
value: 56.8
- type: recall_at_1
value: 0.167
- type: recall_at_10
value: 1.38
- type: recall_at_100
value: 9.189
- type: recall_at_1000
value: 35.737
- type: recall_at_3
value: 0.455
- type: recall_at_5
value: 0.73
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.4299999999999997
- type: map_at_10
value: 8.539
- type: map_at_100
value: 14.155999999999999
- type: map_at_1000
value: 15.684999999999999
- type: map_at_3
value: 3.857
- type: map_at_5
value: 5.583
- type: mrr_at_1
value: 26.531
- type: mrr_at_10
value: 40.489999999999995
- type: mrr_at_100
value: 41.772999999999996
- type: mrr_at_1000
value: 41.772999999999996
- type: mrr_at_3
value: 35.034
- type: mrr_at_5
value: 38.81
- type: ndcg_at_1
value: 21.429000000000002
- type: ndcg_at_10
value: 20.787
- type: ndcg_at_100
value: 33.202
- type: ndcg_at_1000
value: 45.167
- type: ndcg_at_3
value: 18.233
- type: ndcg_at_5
value: 19.887
- type: precision_at_1
value: 26.531
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.4079999999999995
- type: precision_at_1000
value: 1.5310000000000001
- type: precision_at_3
value: 19.728
- type: precision_at_5
value: 21.633
- type: recall_at_1
value: 2.4299999999999997
- type: recall_at_10
value: 14.901
- type: recall_at_100
value: 46.422000000000004
- type: recall_at_1000
value: 82.83500000000001
- type: recall_at_3
value: 4.655
- type: recall_at_5
value: 8.092
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.90140000000001
- type: ap
value: 15.138716624430662
- type: f1
value: 56.08803013269606
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.85285795132994
- type: f1
value: 60.17575819903709
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 41.125150148437065
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.96751505036657
- type: cos_sim_ap
value: 70.45642872444971
- type: cos_sim_f1
value: 65.75274793133259
- type: cos_sim_precision
value: 61.806361736707686
- type: cos_sim_recall
value: 70.23746701846966
- type: dot_accuracy
value: 77.84466829588126
- type: dot_ap
value: 32.49904328313596
- type: dot_f1
value: 37.903122189387126
- type: dot_precision
value: 25.050951086956523
- type: dot_recall
value: 77.83641160949868
- type: euclidean_accuracy
value: 84.5920009536866
- type: euclidean_ap
value: 68.83700633574043
- type: euclidean_f1
value: 64.92803542871202
- type: euclidean_precision
value: 60.820465545056464
- type: euclidean_recall
value: 69.63060686015831
- type: manhattan_accuracy
value: 84.52643500029802
- type: manhattan_ap
value: 68.63286046599892
- type: manhattan_f1
value: 64.7476540705047
- type: manhattan_precision
value: 62.3291015625
- type: manhattan_recall
value: 67.36147757255937
- type: max_accuracy
value: 84.96751505036657
- type: max_ap
value: 70.45642872444971
- type: max_f1
value: 65.75274793133259
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.65603291031164
- type: cos_sim_ap
value: 85.58148320880878
- type: cos_sim_f1
value: 77.63202920041064
- type: cos_sim_precision
value: 76.68444377675957
- type: cos_sim_recall
value: 78.60332614721281
- type: dot_accuracy
value: 79.71048239996895
- type: dot_ap
value: 59.31114839296281
- type: dot_f1
value: 57.13895527483783
- type: dot_precision
value: 51.331125015335545
- type: dot_recall
value: 64.4287034185402
- type: euclidean_accuracy
value: 86.99305312997244
- type: euclidean_ap
value: 81.87075965254876
- type: euclidean_f1
value: 73.53543008715421
- type: euclidean_precision
value: 72.39964184450082
- type: euclidean_recall
value: 74.70742223591007
- type: manhattan_accuracy
value: 87.04156479217605
- type: manhattan_ap
value: 81.7850497283247
- type: manhattan_f1
value: 73.52951955143475
- type: manhattan_precision
value: 70.15875236030492
- type: manhattan_recall
value: 77.2405297197413
- type: max_accuracy
value: 88.65603291031164
- type: max_ap
value: 85.58148320880878
- type: max_f1
value: 77.63202920041064
---
# GIST-all-MiniLM-L6-v2-GGUF
Quantized GGUF model files for [GIST-all-MiniLM-L6-v2](https://huggingface.co/avsolatorio/GIST-all-MiniLM-L6-v2) from [avsolatorio](https://huggingface.co/avsolatorio)
## Original Model Card:
<h1 align="center">GIST Embedding v0 - all-MiniLM-L6-v2</h1>
*GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning*
The model is fine-tuned on top of the [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) using the [MEDI dataset](https://github.com/xlang-ai/instructor-embedding.git) augmented with mined triplets from the [MTEB Classification](https://huggingface.co/mteb) training dataset (excluding data from the Amazon Polarity Classification task).
The model does not require any instruction for generating embeddings. This means that queries for retrieval tasks can be directly encoded without crafting instructions.
Technical paper: [GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning](https://arxiv.org/abs/2402.16829)
# Data
The dataset used is a compilation of the MEDI and MTEB Classification training datasets. Third-party datasets may be subject to additional terms and conditions under their associated licenses. A HuggingFace Dataset version of the compiled dataset, and the specific revision used to train the model, is available:
- Dataset: [avsolatorio/medi-data-mteb_avs_triplets](https://huggingface.co/datasets/avsolatorio/medi-data-mteb_avs_triplets)
- Revision: 238a0499b6e6b690cc64ea56fde8461daa8341bb
The dataset contains a `task_type` key, which can be used to select only the mteb classification tasks (prefixed with `mteb_`).
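For illustration, here is a minimal sketch of filtering the compiled dataset down to the MTEB classification triplets via the `task_type` key (the split name is an assumption; adjust it to the dataset's actual configuration):

```python
from datasets import load_dataset

# Load the compiled MEDI + MTEB-classification triplets at the pinned revision.
dataset = load_dataset(
    "avsolatorio/medi-data-mteb_avs_triplets",
    revision="238a0499b6e6b690cc64ea56fde8461daa8341bb",
    split="train",  # assumed split name
)

# Keep only the MTEB classification tasks, which are prefixed with "mteb_".
mteb_only = dataset.filter(lambda example: example["task_type"].startswith("mteb_"))
print(f"{len(mteb_only)} of {len(dataset)} triplets come from MTEB classification tasks.")
```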
The **MEDI Dataset** is published in the following paper: [One Embedder, Any Task: Instruction-Finetuned Text Embeddings](https://arxiv.org/abs/2212.09741).
The MTEB benchmark results of the GIST embedding model, compared with the base model, suggest that the fine-tuning dataset has perturbed the model considerably, resulting in significant improvements on certain tasks while degrading performance on others.
The retrieval performance for the TRECCOVID task is of note. The fine-tuning dataset does not contain significant knowledge about COVID-19, which could have caused the observed performance degradation. We found some evidence, detailed in the paper, that thematic coverage of the fine-tuning data can affect downstream performance.
# Usage
The model can be easily loaded using the Sentence Transformers library.
```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
revision = None # Replace with the specific revision to ensure reproducibility if the model is updated.
model = SentenceTransformer("avsolatorio/GIST-all-MiniLM-L6-v2", revision=revision)
texts = [
"Illustration of the REaLTabFormer model. The left block shows the non-relational tabular data model using GPT-2 with a causal LM head. In contrast, the right block shows how a relational dataset's child table is modeled using a sequence-to-sequence (Seq2Seq) model. The Seq2Seq model uses the observations in the parent table to condition the generation of the observations in the child table. The trained GPT-2 model on the parent table, with weights frozen, is also used as the encoder in the Seq2Seq model.",
"Predicting human mobility holds significant practical value, with applications ranging from enhancing disaster risk planning to simulating epidemic spread. In this paper, we present the GeoFormer, a decoder-only transformer model adapted from the GPT architecture to forecast human mobility.",
"As the economies of Southeast Asia continue adopting digital technologies, policy makers increasingly ask how to prepare the workforce for emerging labor demands. However, little is known about the skills that workers need to adapt to these changes"
]
# Compute embeddings
embeddings = model.encode(texts, convert_to_tensor=True)
# Compute cosine-similarity for each pair of sentences
scores = F.cosine_similarity(embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1)
print(scores.cpu().numpy())
```
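Because no instruction prefix is required, a retrieval-style query can be encoded exactly like the documents. Below is a minimal sketch using `sentence_transformers.util.semantic_search`; the query and corpus strings are made up for illustration:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("avsolatorio/GIST-all-MiniLM-L6-v2")

# The query is encoded as-is; no instruction template is prepended.
query = "transformer model for generating relational tabular data"
corpus = [
    "REaLTabFormer models relational tabular data with a GPT-2-based sequence model.",
    "GeoFormer is a decoder-only transformer adapted from GPT to forecast human mobility.",
    "Preparing the Southeast Asian workforce for emerging labor demands.",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Rank the corpus by cosine similarity to the query.
hits = util.semantic_search(query_emb, corpus_emb, top_k=len(corpus))[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```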
# Training Parameters
Below are the training parameters used to fine-tune the model:
```
Epochs = 40
Warmup ratio = 0.1
Learning rate = 5e-6
Batch size = 16
Checkpoint step = 102000
Contrastive loss temperature = 0.01
```
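For intuition on the temperature value above, here is a generic sketch of a temperature-scaled, InfoNCE-style contrastive loss with in-batch negatives. It illustrates the general technique only and is not the exact GISTEmbed training code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb, positive_emb, temperature=0.01):
    """InfoNCE with in-batch negatives: row i of positive_emb is the positive for row i of query_emb."""
    query_emb = F.normalize(query_emb, dim=-1)
    positive_emb = F.normalize(positive_emb, dim=-1)
    # Cosine similarity of each query against every candidate, sharpened by the temperature.
    logits = query_emb @ positive_emb.T / temperature
    labels = torch.arange(query_emb.size(0), device=query_emb.device)
    return F.cross_entropy(logits, labels)

# Example with random 384-dimensional embeddings and the batch size of 16 used above.
loss = contrastive_loss(torch.randn(16, 384), torch.randn(16, 384))
print(loss.item())
```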
# Evaluation
The model was evaluated using the [MTEB Evaluation](https://huggingface.co/mteb) suite.
# Citation
Please cite our work if you use GISTEmbed or the datasets we published in your projects or research. 🤗
```
@article{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
journal={arXiv preprint arXiv:2402.16829},
year={2024},
    URL={https://arxiv.org/abs/2402.16829},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
# Acknowledgements
This work is supported by the "KCP IV - Exploring Data Use in the Development Economics Literature using Large Language Models (AI and LLMs)" project funded by the [Knowledge for Change Program (KCP)](https://www.worldbank.org/en/programs/knowledge-for-change) of the World Bank - RA-P503405-RESE-TF0C3444.
The findings, interpretations, and conclusions expressed in this material are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent. | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2 | pszemraj | summarization | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"dataset:kmfoda/booksum",
"dataset:big_patent",
"license:apache-2.0",
"license:bsd-3-clause",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-07-31T12:14:16 | 2023-08-24T11:47:46 | 87 | 2 | ---
datasets:
- kmfoda/booksum
- big_patent
license:
- apache-2.0
- bsd-3-clause
metrics:
- rouge
tags:
- summarization
- summary
- booksum
- long-document
- long-form
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
are fed into a neural network that predicts values in the reconstructed domain.
Then, this domain is mapped to the sensor domain where sensor measurements are
available as supervision. Class and Section Problems Addressed Generalization
(Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
Representations (Section 3) Computation & memory efficiency, representation capacity,
editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
in the neural field toolbox each addresses problems that arise in learning, inference,
and control. (Section 3). We can supervise reconstruction via differentiable forward
maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
Section 4) With appropriate network architecture choices, we can overcome neural
network spectral biases (blurriness) and efficiently compute derivatives and integrals
(Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
and to achieve editable representations (Section 6). Collectively, these classes
constitute a ''toolbox'' of techniques to help solve problems with neural fields
There are three components in a conditional neural field: (1) An encoder or inference
function € that outputs the conditioning latent variable 2 given an observation
0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
the inverse conditional probability to find the most probable 0 given Z: arg-
max P(Olz). We discuss different encoding schemes with different optimality guarantees
(Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
prior over the sur- face in its reconstruction domain to generalize to the partial
observations. A neural network expresses a prior via the function space of its
architecture and parameters 0, and generalization is influenced by the inductive
bias of this function space (Section 5).'
example_title: scientific paper
- text: 'Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By''s the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi''s nerocos and do rain become you to let so is his brother is made in
use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I''m also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I''m a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I''d Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let''s
get on with the lecture. There''s an exciting topic today I''m going to start
by sharing some slides with you and later on during the lecture we''ll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let''s get started. Today''s topic is a very important one. It''s
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It''s called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it''s a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let''s say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It''s got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you''ve seen before,
it''s the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You''re not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.'
example_title: transcribed audio - lecture
- text: 'Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
& memory complexity (where nn is sequence length). Hence, it''s computationally
very expensive to apply transformer-based models on long sequences n > 512n>512.
Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
try to remedy this problem by approximating the full attention matrix. You can
checkout 🤗''s recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue.
BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
attention) and can handle sequences up to a length of 4096 at a much lower computational
cost compared to BERT. It has achieved SOTA on various tasks involving very long
sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
post is to give the reader an in-depth understanding of big bird implementation
& ease one''s life in using BigBird with 🤗Transformers. But, before going into
more depth, it is important to remember that the BigBird''s attention is an approximation
of BERT''s full attention and therefore does not strive to be better than BERT''s
full attention, but rather to be more efficient. It simply allows to apply transformer-based
models to much longer sequences since BERT''s quadratic memory requirement quickly
becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
would be preferred over block sparse attention (which we are going to discuss
in this post).
If you wonder why we need more compute when working with longer sequences, this
blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like
attention include:
Do all tokens really have to attend to all other tokens? Why not compute attention
only over important tokens? How to decide what tokens are important? How to attend
to just a few tokens in a very efficient way? In this blog post, we will try to
answer those questions.
What tokens should be attended to? We will give a practical example of how attention
works by considering the sentence ''BigBird is now available in HuggingFace for
extractive question answering''. In BERT-like attention, every word would simply
attend to all other tokens.
Let''s think about a sensible choice of key tokens that a queried token actually
only should attend to by writing some pseudo-code. Will will assume that the token
available is queried and build a sensible list of key tokens to attend to.
>>> # let''s consider following sentence as an example >>> example = [''BigBird'',
''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
''question'', ''answering'']
>>> # further let''s assume, we''re trying to understand the representation of
''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently ''available'' token doesn''t have anything
to attend Nearby tokens should be important because, in a sentence (sequence of
words), the current word is highly dependent on neighboring past & future tokens.
This intuition is the idea behind the concept of sliding attention.'
example_title: bigbird blog intro
- text: 'To be fair, you have to have a very high IQ to understand Rick and Morty.
The humour is extremely subtle, and without a solid grasp of theoretical physics
most of the jokes will go over a typical viewer''s head. There''s also Rick''s
nihilistic outlook, which is deftly woven into his characterisation- his personal
philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
understand this stuff; they have the intellectual capacity to truly appreciate
the depths of these jokes, to realise that they''re not just funny- they say something
deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
of course they wouldn''t appreciate, for instance, the humour in Rick''s existential
catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s
Russian epic Fathers and Sons. I''m smirking right now just imagining one of those
addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius
wit unfolds itself on their television screens. What fools.. how I pity them.
😂
And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
It''s for the ladies'' eyes only- and even then they have to demonstrate that
they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
kid 😎'
example_title: Richard & Mortimer
parameters:
max_length: 64
min_length: 8
no_repeat_ngram_size: 3
early_stopping: true
repetition_penalty: 3.5
length_penalty: 0.3
encoder_no_repeat_ngram_size: 3
num_beams: 4
model-index:
- name: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 23.1439
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmQzMDk0MDJlZTJkN2IzODg3NDJhYmY4MzJmOTU4N2FjMDBjODg5NzJlMGFhNDQ2YTFhMzI3YmY5ZWM1MDBkMiIsInZlcnNpb24iOjF9.yoXEV5ircj_cjQhUA_RpWH_8Kaev0sRLwQulYD8wmqxfSEuqamBGedXnIg9X_EcpjvulBhapjGZN2G0s0vz4Dg
- type: rouge
value: 3.2393
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTkwNzEwYjc5YTZkMmE4NmEwMDE1OTRiNTJmM2VlYmI3NmM2NjIwZWMxM2ZkNjU2MzhjMmQzYjIxODRiYzY4ZiIsInZlcnNpb24iOjF9.CDK_e4fCwERbm3D_Y2tc41SSscIvlZKGTUQ16afpMuH2_HHKbpn7CNgtU9MWiyFZfdgafdUeQPo2CCYI-dCBCg
- type: rouge
value: 12.7038
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDFkNjcyYmYxYzdlMTY2NTIyY2ZiZDJlZjliYTM1YWZjZGI3YzA5ZDczYjdkMGUzZmUxNmJkMDY0OTk3NWNlMSIsInZlcnNpb24iOjF9.XQmt4GEX0N6y2FNXfLAeLDkB96nJyxhN9dyy-OdBcu5E7Tw0dvIN3feYHxq8MenTShE9lsekIYZy2kieJQfmCg
- type: rouge
value: 19.8101
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTFhMGNhMzA0YmYyMDhiNzdlMDc2ZDQ3YjFjMDM3ODliMmIxMjQxZWMwYWM0NTM0OGNlZTkzMzVhZDBmMjA1YiIsInZlcnNpb24iOjF9.-YChaP7xwLM9W5jrdLSyLWdb3hAdPbm0mmij3X_pU3nqb3_wuPobjCLGEEQNxAnGq7kE-LI5hgXZ-lGhuKUCCQ
- type: loss
value: 2.766307830810547
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODAxYzRhNGM2ZGVkOWRiM2Y4NzNjZDM2MTY2MmM4MzY3ZWM5ZjdmMWUxZGY5Y2E2OTg4ZGEwYzBlMmFiYmQyNSIsInZlcnNpb24iOjF9.VRePqe8Z9dD5l6bsfIRLkFn4mwwVC8G--kOlofQWSiGusRxVrY50fa5MtKTGmuiNs5JDFCPjZmkpGYlSxnOeDw
- type: gen_len
value: 63.4493
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGY4NWI0MDc3NDk4NTg4YjQ5YzFmN2MyYWFjMzI0MjlkMGZlMWMzYThiMDFlMmM3MmE4ODg0YWExNTMyZjQ5MiIsInZlcnNpb24iOjF9.Ym3jfW0gthJhlLg4CW10jM9YUHUGbAPIdLefE3CTyP0OUrV9yuJAGV6-RDrV-Viwyy1Xaqg4BFa5pX7P2PRRDQ
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 26.8026
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTBhYTQzMGVjZTJjZmE3NjBiNzI2M2FlNTA4Yzk5Njc1Yjk1YTk2NTJiMTRlMzQ3NjU2ZjQxZTNkNDVhNjMzYSIsInZlcnNpb24iOjF9.GyFUubKI3pM5Z8I1jz6Q_f7fSr1nVpwuFluUOVq8aaWfv7L1dZ_5By2FShQM1nwBM-mCiqtFb3a61eR3VEAeBw
- type: rouge
value: 6.0656
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzEyZTYxYmVlYTc0MzNhMWM1ODgwODRiYWNkN2FjMjIzOTJhNzA0OTFkY2M0ZTJhMWMzNWMzY2E1OGJmYTg5OCIsInZlcnNpb24iOjF9.3U0PamPVFWWE7Nxh6u52mnMP-HpeGPEOLauZthcj32ELSuNx9s260ujguSW_BrJpCXqNNEqIzYTlWf97Ji8vCA
- type: rouge
value: 20.0098
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGExYTRmZDgzYzllNWZmMGFlN2FhMDJmZGE1ODkyYTZlNmFhZjZmNGU4YzQwZGZiYTAyZmI1NGJmNjRjODkwYSIsInZlcnNpb24iOjF9.dEON7kZa7dKCHjz7nuuIBdcpwojM5-OxQuEf5n18ZywWdbk9H2LWGY2uvvCRp6cK2JsIzxzTmX9wK7zkWQiCAA
- type: rouge
value: 21.9115
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2Y4MWE4ZmIyMTA5YWU5YzllYzExMzA1OTc2Mjg3NTYxNjcwMWMxZGI0ZDhmYjJhMGIxNTllY2Q3NDVlNmM2MiIsInZlcnNpb24iOjF9.M8bYXCuNHyVAkA4vBbqvGe8yCgmjCrlhqqliAF6WcmrYRF8CvezQ4S4SWGhhVkcG6v84H-Pa9LzsKmualXdWBw
- type: loss
value: 2.317471981048584
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmI1YjNlYzI3OTY4YjY1MDIwYzk3ZDMzZDA4MzQwM2ZhNzY3NDQxZTA2ZThiMmE2MmFmNTg0OGMyYWFhODE5OSIsInZlcnNpb24iOjF9.QpoWo_TLKw72_PbtwknBA1LbUQ8ftls-8VBLuN8_ZhUN2lNNpipU2qMZ1Ga4xAUazkcMhT_TwpqjyGshJFkgAg
- type: gen_len
value: 19.1111
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTA2MmFiNjI5NzFjOTUzMTEwZTNiYzA1OGY1ZWEyNTE1ZTgzYjMxNDE4YjJkZmIxNWI4MDMyYWUxMWRkODk1NCIsInZlcnNpb24iOjF9.CXy-Dfle9ypabrK3I1GyhOWl46EyRDbf8XlY-D0cNktXcCCbKdgn8DWgJI199GJpH-19mMS_jQt049VJri2EDw
- task:
type: summarization
name: Summarization
dataset:
name: xsum
type: xsum
config: default
split: test
metrics:
- type: rouge
value: 25.2061
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjZmZDRlN2NjZTQyNzkyMmZiYzk1MjJmMmE0MGM4ZjUwOGNmOGFhZjg0MzE0MzM4MmE1Y2EyYTY4ZThmNzUzMiIsInZlcnNpb24iOjF9.pdJWpUnMeqftinZrPkkFRWbCA253BYgt5W-EqbyTVi9BteojJ6yEDbMjE0TyYzlJ28JBcw4IVNL2zaWCgpfRBQ
- type: rouge
value: 4.7048
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGRjOGUzZTk1ZDc0Zjk5MmE4ZjUzNmZiZjQ2YzE2YzYzODdmYmY3NzMwNDdmYmViNjVkZTUzMmY4YjllOGQ1NCIsInZlcnNpb24iOjF9.nFiT7HhUZSDofK6_UH2-1rzPz_48w7e5j0Q72vqgodSNIwpv2JOlcb1GOlaA9jkvy45PJyDBgP9i6kLVfaNBBw
- type: rouge
value: 17.8593
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmY5ZjM0ZjdkYTZiMzk0ZWYyM2EzZWNjMjczMjI2MzkwYmNiN2JhNDEzNzdmMmE0NzEwNmVkNGU5YTlkZDAzYyIsInZlcnNpb24iOjF9.C3ZgUsGNNtwZVJFcT90KkBfewrrA3ZXxxVl2u5ykUtzpS4gzoaRuZbPT8WOJAog7kfPPJiG_GZGYy9XTTCdIBw
- type: rouge
value: 18.0798
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDU4Y2Y3MzExNzNlZTI3NWVmZTNjMmZkNTAxNDBjMzJiZTI5M2E2N2ViODk5OGEwZGU5NzYxZWMzMjMwNmQ2MSIsInZlcnNpb24iOjF9.qDLZsjtftvlw8-3kOoUvanWmemmvaPxUIAxOVh1B18Ihn9kkm0FnZbWxl65YdOLg3dqDcHnDFXvXcS81C8dmBw
- type: loss
value: 3.003053665161133
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTM2ODRkMjk5MjczY2ViZGVjMjJjOTFmYTk2NTAyNmUwMTRiZjYwZTllY2NhODFhYWVkZTIzYzQxZjZlOGFkNCIsInZlcnNpb24iOjF9.3SeJzRO0b4cNCTOgsf7c8UrLCLW-6JoOHtNMmMr5DCzNzfqlt2TSJ5ClahzzAYA2_5QhTMhcUYOewH5uZhkpDA
- type: gen_len
value: 27.4815
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDdiYTVkZGI0NzE0ODcwNjgwNGQ0YmNjZDI1MWQxZWQ0MzNmMDJkYmE4MGM5ZjM4NGViNWZiNTdjNTg2YzBlOSIsInZlcnNpb24iOjF9.VoPyoq8HZq8nbucrPYt52flRFtkD5VAfVD7LykAp-GiN2W6D3cpcagMMrHThP9e8q3qDodxddMcnwY88CGtkAg
- task:
type: summarization
name: Summarization
dataset:
name: cnn_dailymail
type: cnn_dailymail
config: 3.0.0
split: test
metrics:
- type: rouge
value: 27.5692
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2UzNDRjNDJhNjViYjgxNDY2NzAwODkyYjk1OTllNWFiYmI2MGEyMmM3ZTc1YWZjNjhiZDZkYzAxYzIwYTQzZiIsInZlcnNpb24iOjF9.FEJU7de6nnYa1rhAngf3h0JDSFKXzWKkcHwQtcz6rbPuVV0Jw7u-9PwDXBFh0X8n2PJjOfCqM5hmcrUe0FxkCQ
- type: rouge
value: 6.1264
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGIzODA2ZjU2YmM0YmJiZDIxNzQ0ZDI1NGQzZGZlNDg5OTZhYmMwZTQ1ZjVlYzM5ZTQzMjZkMTIyZmY1OGQ2YiIsInZlcnNpb24iOjF9.fN1wSGc_tUvIgYyzKU35PuPxKyTOotKnMCW_u452LduRVyIey9KB8kf8E35vTOVvk7TCiuvRuxXDoAATFktbBQ
- type: rouge
value: 17.1127
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWRjNTNhZjg1NDVkNTQ5MjkwZjNiNzY0Nzk5ZmM4YjhhZmZiZjQzZGY1YWM1ZGI5MGE0YjNiYzNmNWYyNWI2OSIsInZlcnNpb24iOjF9.KVGdIERnuGTOrxm71i2znI8tdRCgVz7SijP08tsE0H54eUijAYDqQccspfZTXRXeFn0lOUjSHDvHj4ODIRYvAw
- type: rouge
value: 23.0066
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGUyMzhlODY1YWI4ZDg2NzYwZDYwNmYzZTRhMTQ3NDE2MzUzZGViNzhjMTkzZDRhNTljNDEyMTY4NzAwMjE0OCIsInZlcnNpb24iOjF9.pBz5E_1ffBrv4tDCJhuYFIuBFBk0P3SKxLYoIhOVj_fW0Mj6ZKPcA9ZhdE4U-HsHEgSvFhtBw1UlsGiu145XBw
- type: loss
value: 2.218526601791382
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjYxNDk4OWU0M2Y1ZjMxNTA3NjdiNjQ5NWFjYzJiMjVhMjgzMTA3NDhlNTVjMjllZjQ0NWQ2YmYzYjdiMTQ1OCIsInZlcnNpb24iOjF9.SJdyGLltcLnB03U6QxSkZ71Im0aGK-oTbEQDMj2AnEPFThNTb0mMEMpCWpH1lLVeDAh-PE6fCmgt4yPS6n2nBg
- type: gen_len
value: 39.1952
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTMyY2JiYWVhYTM3OWU2YjhiMDEwZjAwZDgxN2FmNjk2MzZhYmQzMWZiODg2NDY0ZmU4MjdhNjk0MTViMGY1YyIsInZlcnNpb24iOjF9.bsLAi2R8QTrCUj1VW4GQqYauY7CV3mFm2S294zHCJU2ZlAcikutcjxAcgvuSSGiAVJ02Odm5bMTuzx7SYMUSAQ
- task:
type: summarization
name: Summarization
dataset:
name: billsum
type: billsum
config: default
split: test
metrics:
- type: rouge
value: 28.0632
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2RiODA0ZTQxYWU0NDI5YmNjZmYzYTZmY2I5NTczYzVhZjcxOTYwMWI3ZjZiMzRlZmI5ZTA5NjVkY2E4NDFlMyIsInZlcnNpb24iOjF9.POIQUXGryoEzHmdBCeqaBh70uz33XlKVLjfhyRFwhWj7UV15SsDcuumkEk2BXkShFHDRo0CQd1AXD1fFsPCVCQ
- type: rouge
value: 9.8996
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDBiMDllNTZlZmJiYWI1ZTIxM2JhYmZhYTAzYTQ0NmUzNjcyZjkzMDliYTE5ZjIwY2M0YzU2ZWZlYjNhZDY2YyIsInZlcnNpb24iOjF9.EEJO-ZRVi2EiM-uKMvimaITiHh7wqzNBza6lsIvdyVhVf4UwGhsUaArHzlYR7xn53UBCtIDTucXX7NKFst_4Ag
- type: rouge
value: 18.25
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTk4ZmJiYWIwYmY4MTBmNGVlMmE1YzA4N2VmYWU3NjRlNTU3YjI2YjBhOGIzNzcwZjczOTZmZGJiNjMyMjYzZiIsInZlcnNpb24iOjF9.Qx-ihTp0UuzhShqHQkiTijODUst1LO5Bi8KaQOCIiVhvywN-2Wt3bmeSNV_C0b5BXsSaHIxrWBTeSRaq5Zp_Bw
- type: rouge
value: 21.9053
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTIzNGNkNTAyYTkzZjE5ZGZhZjZkYmU3Yjg2ZTVhYjY1NjZhODZjM2NkMWQ5NmJjN2UxNTZlMmJmNDNmOTczZSIsInZlcnNpb24iOjF9.6ZY8rK5bRfOZJkdvhpvOt_gW1xCoA4JsAi0-6No4y-lBaLGUo4LXpGaVcJrrvdN-S7e7yCxnA32jGCdYXzJJBA
- type: loss
value: 2.032966375350952
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTM5MmQzMWZhOWIwNjNjNThhNGE4NzFiMzdhNmMzZWM4ZGYyNWE1NmZjMDVjNTBmMGRiNzYzMTc1ZDg2YTYxNCIsInZlcnNpb24iOjF9.Zqrbz7mmljH19mEep_mm4ev5FEsozIqG0eNkj3V5s85OgrHyuKOVkGKhRlqjcWfgkUlsxTpaemZDUVIR84XrBw
- type: gen_len
value: 48.5987
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODZjNGJiOGUzM2M3NDM3MDRmNmQ1ZjQ3ODUyZTQ2NjEyYmM0YWRhMmU4MDdkOTZmMGNkNTIyNDg3ZmQxMjA4MiIsInZlcnNpb24iOjF9.y91kl4W-QIy6lfQDl0s4h0aeV-v0iH7Y06AJBYRYrddUXRiDw2oSTHEdf14d3Hw-oZNPftzBHUJqAckwEpGFDw
- task:
type: summarization
name: Summarization
dataset:
name: big_patent
type: big_patent
config: y
split: test
metrics:
- type: rouge
value: 34.7848
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2QxOTU1YTUxYWJjOTcwOGYwZjA3MGJlYmRhYWUwMGNhYmQxYjVmMjU5M2E5OGRiY2RmMTNmNGNhZDdmNzc1OCIsInZlcnNpb24iOjF9.bp2K7V-BDMQMd3zk2RY3pILKI7LimWrD5psesXnSF20JiRA3d5-bQdOfPeZGu3ydUqbml3MTswM0lg_6EZTjAw
- type: rouge
value: 9.7549
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGY4OWM4MjVmMzhmNGUwYzAxODNjMjY4OTY1YjQ2MjZiYzM2NzgyNGZhMjVlNjllZmI3OTMzNTVhZDA1YzMyOSIsInZlcnNpb24iOjF9.HQ_emvr1RVEfeNfQUdfhfzk2O5BGwzpQKojvRW_w44Ixakn_VrZ4GurxYo0JTF4dDwDBDqjaFnZ4EiYcsrxODQ
- type: rouge
value: 22.228
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWVkMzc2ODM1ZTg2YzQ4YjMzZjQwMThiODI0YzA5MzJmZjY1ZTJlOGZhOTM1OWEzOTE3M2ExYzFiMjM2NDRlMSIsInZlcnNpb24iOjF9.shmWrR-rNKAYOqEgnnlrgWXaWAWbvrKC_IyvK-fwnqoJcphB9ef2gVX758tQgfe878M1N1sE7StT8rd7FbD8Cw
- type: rouge
value: 28.0389
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTJmZTczZDc4N2ZlNDk3NmY0Njc2Y2JhNGU2OWJjZGU4YWQ3Y2RjNDU1ZTEyNjFiZDQxZGNhZWFmYTAwOTBiMSIsInZlcnNpb24iOjF9.yOTMgX1vpuhlyPkfCAyNf1k5nSInny0YrbqJeC_MDZlavVIxOQT6qVcMYJpLF2AKRp6UsuFB06PANbQu4Bj6CA
- type: loss
value: 1.7787292003631592
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2VlMGU3NDE0NmFiNTI2M2NhZmU2YzRhYjU1ZWNjYmM3YTllMTQxODJhM2JlMjk3NzVjYjQ5M2FlOTk2NjNmZCIsInZlcnNpb24iOjF9.wkkUrosSgGkei41n6CxQH_UwS6fJTMzXLV88EgnI_8Y6Qz2qa9B2cGhpFkP__snnX6u9jhWj68oAfZifqaXnCw
- type: gen_len
value: 71.6372
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODI0NTcwOTZmNzkwOTFhMTdmYWMxYjI2YTdmMTYwYTBlMTEyOTc3MmFkYmZkNGJmYjc4MTJlYmYwNzIxMjkzMCIsInZlcnNpb24iOjF9.EM9Vh5Mb6H3htv45ohj6FYqBUtmQi1sn0j97brEFWRYp--8N2Y781cR9ktqylEz6PgbbwpuxMYOMD5MctmGLCw
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: validation
metrics:
- type: rouge
value: 23.5925
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjk2NWVkM2Y5NTgxYjgyYmY2YjgzYjZiN2VhZGVkOWNiYmRiZGU0ZGYwNzlkN2E3ZDk5ZjQ3ZTQyYjU5YzczYSIsInZlcnNpb24iOjF9.ScWumfXg7-ZeJEZT1IHDVt5LWMDZEihiiCux5chXP2AeRs3pWKhI6xr_D5i3CCEDeyiMzKleCASMBe4sC9LgDQ
- type: rouge
value: 5.6762
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDU3MGNmMDY3YWQxNDdlMTk5MjM1NGU4M2RmNDNiYzllYmRmNTYzZGFiOGU5MjQ0YWMzYTg1OWFlNmNmMzQ5NiIsInZlcnNpb24iOjF9.9SKt_I8WGKu6bsovBR4mSTDNEaSHB1tN5RyY3JTCHYs2YQNczaKwLNPnyG2i0IbkvaPX_8EOQ7KzwQ5raUVFBg
- type: rouge
value: 13.8108
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTBiZDVkYjI4ZDBlZGM2NDM4M2U2NzdjNzViNDkzZjE3YTBmYzdlNDNlMTZhZTUxNjA2NmJkODE2ZTk1MTAxMSIsInZlcnNpb24iOjF9.KMTkQsI9BfDfL7FZpwZ9kxTTRA8DNrUEpyBZtloQ0sNfhO0t0Ch1qhktz0HaA0uQfC0WFRfrb9Iz7uMc8XVRBg
- type: rouge
value: 20.2437
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjBkZGJlYzZjMjQ1Njg4MjQ2NzJhYjY5ZGZlN2Y5Y2M4MDQ0YzQ3YzQzYmY5N2VkNjBiNTEwMDNmZWRlMTAwYyIsInZlcnNpb24iOjF9.AqYAfIMFBY7AIP1yJbjaAbJXYs5VbXxWKpsA_rdW_HWxITvjqoJDK9X3wCueXMy7dSE6L-ysC4yl99Bbc50KBA
- type: loss
value: 2.6377077102661133
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGEzMTZhODM0Nzg0ZDY3OTVkYmZmODQ1Y2YzMTY3YmJlYjk2ZGRiMWFkMDQxMTkyYTgwZWNkNmU0NzI0NjA1NCIsInZlcnNpb24iOjF9.ziVXhWBRAml5Xwa-tx9ywwtiJeIzIIclY532L0Mtft3Sc88oGPK9av6nh4kMiO5yWSHJnM3KFQWiuco7w_xNDg
- type: gen_len
value: 64.1807
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDNhZTRhODgwODI1YmRlODZiY2I3YjFmY2MyZGYyYTY1MzQ5OTgwZGI1NmUwNDMwMmQ0N2Y3YmZmMzcyMTc2NSIsInZlcnNpb24iOjF9.NCVj0Uaq3-diq6pnu8EC0tgwv24NwQCgmWiqpOMvJSN17B_98z_dMbLHRzY8e_tNNVFFagiCnknoE00OqUTjDg
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: test
metrics:
- type: rouge
value: 23.7438
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzIwMzZjOGQ5N2U3MTg3NmEwYzZkNjllNDc4NzQ4NWUxN2JmYjdiOGU2MjhkZGJhODY4NDU4N2E5ODU1NTFhMiIsInZlcnNpb24iOjF9.cJoHXGYopFoFVmQXdxu3KrG_usk1ouc0PPR6FS9HrZEbi2T5LtVANntlXmlLTXSvOEaorUyg08yot_j6j1oeCw
- type: rouge
value: 5.501
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWQ3YmQ5ZTJkNmJhZGEyMTkzYjlkMWZmZGVhNGE5Y2IzYzA5OWM1NTY0NTU0MWUzYTIzNTQ0OGI3ZWZkNjlkMSIsInZlcnNpb24iOjF9.C_SbNoz5qIo0CtVPL_5jqFNZxgmJ1XE43TvVz2reog2jtlhekNfN0rvaHxT4TadAAzIgDZayeBMeNaASgmNCDA
- type: rouge
value: 13.8132
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTAxODA2NmNlNjkyYTQ4YjEwOTA1ZGMyMjVlZjkzMGI3NzNiMTRkZGRmNDJjZDc2MTYxYzI3NTBlNTVjY2IxNCIsInZlcnNpb24iOjF9.UklkyvqHV3axZ_PalbPb1JZN7rgQjHjJr0ke1yDUzujrY6yBr3XpPxjFhwsEElalc1iiEgdtEZnaCbBhskdGBQ
- type: rouge
value: 20.4615
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmNhZDI2ODQ4MjBhZDNlZjJkMTQ1NmZjZTdjMDZlMjcwYjE4M2M5ZjIxYzA2M2JmYmJmZDliZTU3NzVkMjdmZiIsInZlcnNpb24iOjF9.m2aRMFUpPFvMSf3sxB7HbKIslWrggFamjiIlOAiPuH5_N8wyLJeHJJw8uvULE8R0GKGWuqXfCCv--lyhZKZkAA
- type: loss
value: 2.6383883953094482
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTQzMjFiZWE1NDI1OTFlNWUxMzFiYjJhNzViNDYxMzI3OGU2ZTE1ZDJkNDA3Y2NhODA0ZWM3ZmM3ZTM1NmFlZiIsInZlcnNpb24iOjF9.twTQ94T2Nsq0__TcHLaJ_8HcqozA_FOi6pAiM_IP5qSqKlUXYV1S2-nuS1vs69QB-tSp4XIbqRqhSgKv0VoABw
- type: gen_len
value: 64.9085
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDk3Njc5ZWM3ZTRkMzk2YjJmMjg1YjFlNDExNTU2NTRhNzRlNjA4NGFkZDg2YmQzN2UzNThhODFmZTNlMjdkZiIsInZlcnNpb24iOjF9.2rXKy4mi2VbZRDewY2mKsVe42KuwxIWcmIzdA39RbSJ7Wg45MfRDUjZweyz7Bnlmy6eCcdv7Ya4oyUwAjNV3AQ
---
# README - long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
- this README was added because there wasn't one
- created 2022-07-31_12-14-50
## about
An experiment in transfer learning with [pszemraj/long-t5-tglobal-base-16384-book-summary](https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary), evaluating its ability to learn technical documentation via the `big_patent` dataset on Hugging Face.
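A minimal inference sketch is shown below; the generation settings mirror the widget parameters in the model card metadata and are illustrative rather than tuned. Training details follow.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2",
)

long_text = "..."  # replace with a long document (the model accepts inputs up to ~16384 tokens)
result = summarizer(
    long_text,
    max_length=64,
    min_length=8,
    no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    num_beams=4,
    early_stopping=True,
)
print(result[0]["summary_text"])
```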
This checkpoint was trained on subsection `y` of `big_patent` for approximately 400 steps with an effective batch size of 128. | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"BEAR"
] |
QuantFactory/SeaLLMs-v3-7B-Chat-GGUF | QuantFactory | null | [
"gguf",
"sea",
"multilingual",
"en",
"zh",
"id",
"vi",
"th",
"ms",
"arxiv:2312.00738",
"arxiv:2306.05179",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-07-18T07:08:47 | 2024-07-18T09:38:33 | 87 | 1 | ---
language:
- en
- zh
- id
- vi
- th
- ms
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
tags:
- sea
- multilingual
---

# QuantFactory/SeaLLMs-v3-7B-Chat-GGUF
This is a quantized version of [SeaLLMs/SeaLLMs-v3-7B-Chat](https://huggingface.co/SeaLLMs/SeaLLMs-v3-7B-Chat) created using llama.cpp.
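As a minimal sketch, the GGUF files can be run locally with the `llama-cpp-python` bindings; the quantization filename below is an assumption, so substitute whichever file you download from this repo.

```python
from llama_cpp import Llama

# Path to a downloaded GGUF file from this repo; the exact filename is illustrative.
llm = Llama(model_path="SeaLLMs-v3-7B-Chat.Q4_K_M.gguf", n_ctx=4096)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Can you speak Indonesian?"},
    ],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```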
# Original Model Card
# *SeaLLMs-v3* - Large Language Models for Southeast Asia
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLMs-v3-7B-Chat" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a>
</p>
We introduce **SeaLLMs-v3**, the latest series in the SeaLLMs (Large Language Models for Southeast Asian languages) family. It achieves state-of-the-art performance among models of similar size, excelling across a diverse array of tasks such as world knowledge, mathematical reasoning, translation, and instruction following. At the same time, it was specifically enhanced to be more trustworthy, exhibiting reduced hallucination and providing safe responses, particularly for queries closely related to Southeast Asian culture.
## 🔥 Highlights
- State-of-the-art performance compared to open-source models of similar sizes, evaluated across various dimensions such as human exam questions, instruction-following, mathematics, and translation.
- Significantly enhanced instruction-following capability, especially in multi-turn settings.
- Ensures safe usage, with significantly fewer instances of hallucination and greater sensitivity to local contexts.
## Uses
SeaLLMs is tailored for handling a wide range of languages spoken in the SEA region, including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese.
This page introduces the **SeaLLMs-v3-7B-Chat** model, specifically fine-tuned to follow human instructions effectively for task completion, making it directly applicable to your applications.
You may also refer to the [SeaLLMs-v3-1.5B-Chat](https://huggingface.co/SeaLLMs/SeaLLMs-v3-1.5B-Chat) model, which requires much lower computational resources and can be easily loaded locally.
### Get started with `Transformers`
To quickly try the model, we show how to conduct inference with `transformers` below. Make sure you have installed the latest transformers version (>4.40).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"SeaLLMs/SeaLLMs-v3-7B-Chat", # can change to "SeaLLMs/SeaLLMs-v3-1.5B-Chat" if your resource is limited
torch_dtype=torch.bfloat16,
device_map=device
)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLMs-v3-7B-Chat")
# prepare messages to model
prompt = "Hiii How are you?"
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
print(f"Formatted text:\n {text}")
print(f"Model input:\n {model_inputs}")
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True, eos_token_id=tokenizer.eos_token_id)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(f"Response:\n {response[0]}")
```
You can also utilize the following code snippet, which uses the streamer `TextStreamer` to enable the model to continue conversing with you:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import TextStreamer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"SeaLLMs/SeaLLMs-v3-7B-Chat", # can change to "SeaLLMs/SeaLLMs-v3-1.5B-Chat" if your resource is limited
torch_dtype=torch.bfloat16,
device_map=device
)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLMs-v3-7B-Chat")
# prepare messages to model
messages = [
{"role": "system", "content": "You are a helpful assistant."},
]
while True:
prompt = input("User:")
messages.append({"role": "user", "content": prompt})
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, streamer=streamer)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
messages.append({"role": "assistant", "content": response})
```
### Inference with `vllm`
You can also conduct inference with [vllm](https://docs.vllm.ai/en/stable/index.html), which is a fast and easy-to-use library for LLM inference and serving. To use vllm, first install the latest version via `pip install vllm`.
```python
from vllm import LLM, SamplingParams
prompts = [
"Who is the president of US?",
"Can you speak Indonesian?"
]
ckpt_path = "SeaLLMs/SeaLLMs-v3-7B-Chat"  # or a local path to the downloaded checkpoint
llm = LLM(ckpt_path, dtype="bfloat16")
sparams = SamplingParams(temperature=0.1, max_tokens=512)
outputs = llm.generate(prompts, sparams)
# print out the model response
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt}\nResponse: {generated_text}\n\n")
```
### Bias, Risks, and Limitations
<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>
> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
## Evaluation
We conduct our evaluation along two dimensions:
1. **Model Capability**: We assess the model's performance on human exam questions, its ability to follow instructions, its proficiency in mathematics, and its translation accuracy.
2. **Model Trustworthiness**: We evaluate the model's safety and tendency to hallucinate, particularly in the context of Southeast Asia.
### Model Capability
#### Multilingual World Knowledge - M3Exam
[M3Exam](https://arxiv.org/abs/2306.05179) consists of local exam questions collected from each country. It reflects the model's world knowledge (e.g., with language or social science subjects) and reasoning abilities (e.g., with mathematics or natural science subjects).
| Model | en | zh | id | th | vi | avg | avg_sea |
|:-----------------|-----:|------:|-----:|-----:|-----:|------:|----------:|
| Sailor-7B-Chat | 0.66 | 0.652 | 0.475 | 0.462 | 0.513 | 0.552 | 0.483 |
| gemma-7b | 0.732 | 0.519 | 0.475 | 0.46 | 0.594 | 0.556 | 0.510 |
| SeaLLM-7B-v2.5 | 0.758 | 0.581 | 0.499 | 0.502 | 0.622 | 0.592 | 0.541 |
| Qwen2-7B | 0.815 | 0.874 | 0.53 | 0.479 | 0.628 | 0.665 | 0.546 |
| Qwen2-7B-Instruct| 0.809 | 0.88 | 0.558 | 0.555 | 0.624 | 0.685 | 0.579 |
| Sailor-14B | 0.748 | 0.84 | 0.536 | 0.528 | 0.621 | 0.655 | 0.562 |
| Sailor-14B-Chat | 0.749 | 0.843 | 0.553 | 0.566 | 0.637 | 0.67 | 0.585 |
| SeaLLMs-v3-7B | 0.814 | 0.866 | 0.549 | 0.52 | 0.628 | 0.675 | 0.566 |
| SeaLLMs-v3-7B-Chat | 0.809 | 0.874 | 0.558 | 0.569 | 0.649 | 0.692 | 0.592 |
#### Multilingual Instruction-following Capability - SeaBench
SeaBench consists of multi-turn human instructions spanning various task types. It evaluates chat-based models on their ability to follow human instructions in both single and multi-turn settings and assesses their performance across different task types. The dataset and corresponding evaluation code will be released soon!
| model | id<br>turn1 | id<br>turn2 | id<br>avg | th<br>turn1 | th<br>turn2 | th<br>avg | vi<br>turn1 | vi<br>turn2 | vi<br>avg | avg |
|:----------------|------------:|------------:|---------:|------------:|------------:|---------:|------------:|------------:|---------:|------:|
| Qwen2-7B-Instruct| 5.93 | 5.84 | 5.89 | 5.47 | 5.20 | 5.34 | 6.17 | 5.60 | 5.89 | 5.70 |
| SeaLLM-7B-v2.5 | 6.27 | 4.96 | 5.62 | 5.79 | 3.82 | 4.81 | 6.02 | 4.02 | 5.02 | 5.15 |
| Sailor-14B-Chat | 5.26 | 5.53 | 5.40 | 4.62 | 4.36 | 4.49 | 5.31 | 4.74 | 5.03 | 4.97 |
| Sailor-7B-Chat | 4.60 | 4.04 | 4.32 | 3.94 | 3.17 | 3.56 | 4.82 | 3.62 | 4.22 | 4.03 |
| SeaLLMs-v3-7B-Chat | 6.73 | 6.59 | 6.66 | 6.48 | 5.90 | 6.19 | 6.34 | 5.79 | 6.07 | 6.31 |
#### Multilingual Math
We evaluate the multilingual math capability using the MGSM dataset. Since MGSM originally contains only Chinese and Thai test sets, we use Google Translate to translate the same English questions into the other SEA languages. Note that we follow each country's convention for writing numbers; e.g., in Indonesian and Vietnamese, dots are used as thousands separators and commas as decimal separators, the opposite of the English system.
| MGSM | en | id | ms | th | vi | zh | avg |
|:--------------------------|------:|------:|------:|------:|------:|------:|------:|
| Sailor-7B-Chat | 33.6 | 22.4 | 22.4 | 21.6 | 25.2 | 29.2 | 25.7 |
| Meta-Llama-3-8B-Instruct | 77.6 | 48 | 57.6 | 56 | 46.8 | 58.8 | 57.5 |
| glm-4-9b-chat | 72.8 | 53.6 | 53.6 | 34.8 | 52.4 | 70.8 | 56.3 |
| Qwen1.5-7B-Chat | 64 | 34.4 | 38.4 | 25.2 | 36 | 53.6 | 41.9 |
| Qwen2-7B-instruct | 82 | 66.4 | 62.4 | 58.4 | 64.4 | 76.8 | 68.4 |
| aya-23-8B | 28.8 | 16.4 | 14.4 | 2 | 16 | 12.8 | 15.1 |
| gemma-1.1-7b-it | 58.8 | 32.4 | 34.8 | 31.2 | 39.6 | 35.2 | 38.7 |
| SeaLLM-7B-v2.5 | 79.6 | 69.2 | 70.8 | 61.2 | 66.8 | 62.4 | 68.3 |
| SeaLLMs-v3-7B-Chat | 74.8 | 71.2 | 70.8 | 71.2 | 71.2 | 79.6 | 73.1 |
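As a concrete illustration of the number-formatting convention described above, the hypothetical helper below rewrites an English-formatted number into the Indonesian/Vietnamese style (dots for thousands, comma for decimals); it only clarifies the convention and is not part of our translation pipeline:
```python
def to_sea_number_format(value: float) -> str:
    """Format a number with dots as thousands separators and a comma as the
    decimal separator, as conventionally written in Indonesian and Vietnamese."""
    english = f"{value:,.2f}"  # e.g. "1,234.50"
    # Swap the two separator characters via a temporary placeholder
    return english.replace(",", "_").replace(".", ",").replace("_", ".")

print(to_sea_number_format(1234.5))  # -> "1.234,50"
```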
#### Translation
We use the test sets from Flores-200 for evaluation and report the zero-shot chrF scores for translations between every pair of languages. Each row in the table below presents the average results of translating from various source languages into the target languages. The last column displays the overall average results of translating from any language to any other language for each model.
| model | en | id | jv | km | lo | ms | my | ta | th | tl | vi | zh | avg |
|:-----------------------------------------------|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|
|Meta-Llama-3-8B-Instruct | 51.54 | 49.03 | 22.46 | 15.34 | 5.42 | 46.72 | 21.24 | 32.09 | 35.75 | 40.8 | 39.31 | 14.87 | 31.22 |
|Qwen2-7B-Instruct | 50.36 | 47.55 | 29.36 | 19.26 | 11.06 | 42.43 | 19.33 | 20.04 | 36.07 | 37.91 | 39.63 | 22.87 | 31.32 |
|Sailor-7B-Chat | 49.4 | 49.78 | 28.33 | 2.68 | 6.85 | 47.75 | 5.35 | 18.23 | 38.92 | 29 | 41.76 | 20.87 | 28.24 |
|SeaLLM-7B-v2.5 | 55.09 | 53.71 | 18.13 | 18.09 | 15.53 | 51.33 | 19.71 | 26.1 | 40.55 | 45.58 | 44.56 | 24.18 | 34.38 |
|SeaLLMs-v3-7B-Chat | 54.68 | 52.52 | 29.86 | 27.3 | 26.34 | 45.04 | 21.54 | 31.93 | 41.52 | 38.51 | 43.78 | 26.1 | 36.52 |
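Zero-shot chrF scores like the ones above can be computed with the `sacrebleu` package (`pip install sacrebleu`); here is a minimal sketch with placeholder sentences, whereas the actual evaluation iterates over all Flores-200 language pairs:
```python
from sacrebleu.metrics import CHRF

# Placeholder system outputs and references for a single translation direction
hypotheses = ["Xin chào, bạn khỏe không?"]
references = [["Xin chào, bạn có khỏe không?"]]  # one reference stream, aligned with hypotheses

chrf = CHRF()  # default settings (character n-gram order 6)
print(chrf.corpus_score(hypotheses, references))
```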
### Model Trustworthiness
#### Hallucination
This evaluates whether a model can refuse questions about non-existent entities. We report the F1 score, using refusal as the positive label. Our test set consists of ~1k test samples per language, with each unanswerable question generated by GPT-4o; the ratio of answerable to unanswerable questions is 1:1. We define keywords to automatically detect whether a model-generated response is a refusal.
| Refusal-F1 Scores | en | zh | vi | th | id | avg |
|:---------------------|------:|------:|------:|------:|------:|-------:|
| Qwen1.5-7B-Instruct | 53.85 | 51.70 | 52.85 | 35.5 | 58.4 | 50.46 |
| Qwen2-7B-Instruct | 58.79 | 33.08 | 56.21 | 44.6 | 55.98 | 49.732 |
| SeaLLM-7B-v2.5 | 12.90 | 0.77 | 2.45 | 19.42 | 0.78 | 7.26 |
| Sailor-7B-Chat | 33.49 | 18.82 | 5.19 | 9.68 | 16.42 | 16.72 |
| glm-4-9b-chat | 44.48 | 37.89 | 18.66 | 4.27 | 1.97 | 21.45 |
| aya-23-8B | 6.38 | 0.79 | 2.83 | 1.98 | 14.80 | 5.36 |
| Llama-3-8B-Instruct | 72.08 | 0.00 | 1.23 | 0.80 | 3.91 | 15.60 |
| gemma-1.1-7b-it | 52.39 | 27.74 | 23.96 | 22.97 | 31.72 | 31.76 |
| SeaLLMs-v3-7B-Chat | 71.36 | 78.39 | 77.93 | 61.31 | 68.95 | 71.588 |
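The keyword-based refusal detection mentioned above can be sketched as follows; the keyword list here is a small illustrative placeholder, not the per-language lists used for the reported numbers:
```python
from sklearn.metrics import f1_score

# Illustrative refusal keywords (placeholders)
REFUSAL_KEYWORDS = ["i'm sorry", "i am not aware", "does not exist", "no information"]

def is_refusal(response: str) -> bool:
    text = response.lower()
    return any(kw in text for kw in REFUSAL_KEYWORDS)

# 1 = unanswerable question (model should refuse), 0 = answerable question
gold = [1, 1, 0, 0]
responses = [
    "I'm sorry, I am not aware of any such entity.",   # correct refusal
    "It was founded in 1921 by ...",                    # hallucinated answer
    "The capital of Indonesia is Jakarta.",             # correct answer
    "I'm sorry, I cannot answer that.",                 # over-refusal
]
pred = [int(is_refusal(r)) for r in responses]
print(f1_score(gold, pred, pos_label=1))  # refusal is the positive label
```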
#### Safety
The MultiJail dataset consists of harmful prompts in multiple languages. We take the relevant prompts in SEA languages and report the safe rate (the higher, the better).
| Model | en | jv | th | vi | zh | avg |
|:------------------------|-------:|-------:|-------:|-------:|------:|-------:|
| Qwen2-7B-Instruct | 0.8857 | 0.4381 | 0.6381 | 0.7302 | 0.873 | 0.713 |
| Sailor-7B-Chat | 0.7873 | 0.5492 | 0.6222 | 0.6762 | 0.7619 | 0.6794 |
| Meta-Llama-3-8B-Instruct| 0.8825 | 0.2635 | 0.7111 | 0.6984 | 0.7714 | 0.6654 |
| Sailor-14B-Chat | 0.8698 | 0.3048 | 0.5365 | 0.6095 | 0.727 | 0.6095 |
| glm-4-9b-chat | 0.7714 | 0.2127 | 0.3016 | 0.6063 | 0.7492 | 0.52824|
| SeaLLMs-v3-7B-Chat | 0.8889 | 0.6000 | 0.7333 | 0.8381 | 0.927 | 0.7975 |
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows:
```
@article{damonlp2024seallm3,
author = {Wenxuan Zhang*, Hou Pong Chan*, Yiran Zhao*, Mahani Aljunied*,
Jianyu Wang, Chaoqun Liu, Yue Deng, Zhiqiang Hu, Weiwen Xu,
Yew Ken Chia, Xin Li, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = {2024},
}
```
Corresponding Author: [email protected]
| [
"TRANSLATION"
] | [
"CHIA"
] |
RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2407.19672",
"arxiv:2306.05179",
"arxiv:2009.03300",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-08T12:13:31 | 2024-08-08T12:32:02 | 87 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SeaLLMs-v3-1.5B - GGUF
- Model creator: https://huggingface.co/SeaLLMs/
- Original model: https://huggingface.co/SeaLLMs/SeaLLMs-v3-1.5B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SeaLLMs-v3-1.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q2_K.gguf) | Q2_K | 0.63GB |
| [SeaLLMs-v3-1.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [SeaLLMs-v3-1.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [SeaLLMs-v3-1.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [SeaLLMs-v3-1.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [SeaLLMs-v3-1.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q3_K.gguf) | Q3_K | 0.77GB |
| [SeaLLMs-v3-1.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [SeaLLMs-v3-1.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [SeaLLMs-v3-1.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [SeaLLMs-v3-1.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q4_0.gguf) | Q4_0 | 0.87GB |
| [SeaLLMs-v3-1.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [SeaLLMs-v3-1.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [SeaLLMs-v3-1.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q4_K.gguf) | Q4_K | 0.92GB |
| [SeaLLMs-v3-1.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [SeaLLMs-v3-1.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q4_1.gguf) | Q4_1 | 0.95GB |
| [SeaLLMs-v3-1.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q5_0.gguf) | Q5_0 | 1.02GB |
| [SeaLLMs-v3-1.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [SeaLLMs-v3-1.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q5_K.gguf) | Q5_K | 1.05GB |
| [SeaLLMs-v3-1.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [SeaLLMs-v3-1.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q5_1.gguf) | Q5_1 | 1.1GB |
| [SeaLLMs-v3-1.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q6_K.gguf) | Q6_K | 1.19GB |
| [SeaLLMs-v3-1.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf/blob/main/SeaLLMs-v3-1.5B.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
language:
- en
- zh
- id
- vi
- th
- ms
- tl
- ta
- jv
tags:
- sea
- multilingual
---
# *SeaLLMs-v3* - Large Language Models for Southeast Asia
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLMs-v3-1.5B" target="_blank" rel="noopener">Model</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2407.19672" target="_blank" rel="noopener">[NEW] Technical Report</a>
</p>
We introduce **SeaLLMs-v3**, the latest series of the SeaLLMs (Large Language Models for Southeast Asian languages) family. It achieves state-of-the-art performance among models of similar sizes, excelling across a diverse array of tasks such as world knowledge, mathematical reasoning, translation, and instruction following. At the same time, it was specifically enhanced to be more trustworthy, exhibiting reduced hallucination and providing safe responses, particularly in queries closely related to Southeast Asian culture.
## 🔥 Highlights
- State-of-the-art performance compared to open-source models of similar sizes, evaluated across various dimensions such as human exam questions, instruction-following, mathematics, and translation.
- Significantly enhanced instruction-following capability, especially in multi-turn settings.
- Ensures safety in usage with significantly reduced instances of hallucination and sensitivity to local contexts.
## Uses
SeaLLMs is tailored for handling a wide range of languages spoken in the SEA region, including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese.
This page introduces the **SeaLLMs-v3-1.5B** model, which can be easily fine-tuned for your specific downstream tasks, especially in SEA languages.
Note that this is a base model, if you are looking for a model that can be directly applicable to your downstream applications, you may want to check the chat version model: **[SeaLLMs-v3-1.5B-Chat](https://huggingface.co/SeaLLMs/SeaLLMs-v3-1.5B-Chat)**.
## Evaluation
We evaluate SeaLLMs-v3-1.5B mainly using human exam questions.
#### Multilingual World Knowledge - M3Exam
[M3Exam](https://arxiv.org/abs/2306.05179) consists of local exam questions collected from each country. It reflects the model's world knowledge (e.g., with language or social science subjects) and reasoning abilities (e.g., with mathematics or natural science subjects).
| Model | en | zh | id | th | vi | avg | avg_sea |
| :------------------ | --------: | --------: | --------: | --------: | --------: | --------: | --------: |
| Gemma-2B | 0.411 | 0.267 | 0.296 | 0.283 | 0.313 | 0.314 | 0.297 |
| Sailor-1.8B | 0.270 | 0.239 | 0.250 | 0.261 | 0.260 | 0.256 | 0.257 |
| Sailor-4B | 0.387 | 0.295 | 0.275 | 0.296 | 0.311 | 0.313 | 0.294 |
| Qwen2-1.5B | 0.628 | **0.753** | 0.409 | 0.352 | 0.443 | 0.517 | 0.401 |
| **SeaLLMs-v3-1.5B** | **0.635** | 0.745 | **0.424** | **0.371** | **0.465** | **0.528** | **0.420** |
#### Multilingual World Knowledge - MMLU
[MMLU](https://arxiv.org/abs/2009.03300) questions are translated to SEA languages for evaluation, which primarily tests the cross-lingual alignment of the model as the required knowledge is still mainly Western-focused.
| Model | en | zh | id | th | vi | avg | avg_sea |
| :------------------ | --------: | --------: | --------: | --------: | --------: | --------: | --------: |
| Gemma-2B | 0.374 | 0.304 | 0.315 | 0.292 | 0.305 | 0.318 | 0.304 |
| Sailor-1.8B | 0.293 | 0.251 | 0.268 | 0.256 | 0.256 | 0.265 | 0.260 |
| Sailor-4B | 0.333 | 0.267 | 0.299 | 0.278 | 0.282 | 0.292 | 0.286 |
| Qwen2-1.5B | 0.552 | **0.491** | 0.426 | 0.366 | 0.398 | 0.447 | 0.397 |
| **SeaLLMs-v3-1.5B** | **0.553** | 0.487 | **0.443** | **0.377** | **0.423** | **0.456** | **0.414** |
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows:
```
@article{damonlp2024seallm3,
author = {Wenxuan Zhang*, Hou Pong Chan*, Yiran Zhao*, Mahani Aljunied*,
Jianyu Wang*, Chaoqun Liu, Yue Deng, Zhiqiang Hu, Weiwen Xu,
Yew Ken Chia, Xin Li, Lidong Bing},
title = {SeaLLMs 3: Open Foundation and Chat Multilingual Large Language Models for Southeast Asian Languages},
year = {2024},
url = {https://arxiv.org/abs/2407.19672}
}
```
Corresponding Author: [email protected]
| [
"TRANSLATION"
] | [
"CHIA"
] |
RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-01T15:31:15 | 2024-11-01T16:11:28 | 87 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-deduped-v0 - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-deduped-v0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-2.8b-deduped-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q2_K.gguf) | Q2_K | 1.01GB |
| [pythia-2.8b-deduped-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [pythia-2.8b-deduped-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q3_K.gguf) | Q3_K | 1.38GB |
| [pythia-2.8b-deduped-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q3_K_M.gguf) | Q3_K_M | 1.38GB |
| [pythia-2.8b-deduped-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q3_K_L.gguf) | Q3_K_L | 1.49GB |
| [pythia-2.8b-deduped-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [pythia-2.8b-deduped-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q4_0.gguf) | Q4_0 | 1.49GB |
| [pythia-2.8b-deduped-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [pythia-2.8b-deduped-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q4_K_S.gguf) | Q4_K_S | 1.5GB |
| [pythia-2.8b-deduped-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q4_K.gguf) | Q4_K | 1.66GB |
| [pythia-2.8b-deduped-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q4_K_M.gguf) | Q4_K_M | 1.66GB |
| [pythia-2.8b-deduped-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q4_1.gguf) | Q4_1 | 1.64GB |
| [pythia-2.8b-deduped-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q5_0.gguf) | Q5_0 | 1.8GB |
| [pythia-2.8b-deduped-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [pythia-2.8b-deduped-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q5_K.gguf) | Q5_K | 1.93GB |
| [pythia-2.8b-deduped-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q5_K_M.gguf) | Q5_K_M | 1.93GB |
| [pythia-2.8b-deduped-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q5_1.gguf) | Q5_1 | 1.95GB |
| [pythia-2.8b-deduped-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q6_K.gguf) | Q6_K | 2.13GB |
| [pythia-2.8b-deduped-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-gguf/blob/main/pythia-2.8b-deduped-v0.Q8_0.gguf) | Q8_0 | 2.75GB |
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
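Because every intermediate checkpoint is published as a branch, you can also load several revisions in a loop to study how behaviour changes over training; a minimal sketch, assuming the `stepN` branch-naming convention shown above:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

model_id = "EleutherAI/pythia-70m-deduped"
checkpoints = ["step3000", "step33000", "step143000"]  # adjust to the revisions you need
prompt = "Hello, I am"

for revision in checkpoints:
    model = GPTNeoXForCausalLM.from_pretrained(model_id, revision=revision)
    tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
    inputs = tokenizer(prompt, return_tensors="pt")
    tokens = model.generate(**inputs, max_new_tokens=20)
    print(revision, tokenizer.decode(tokens[0]))
```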
### Training
#### Training data
Pythia-2.8B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
Goodmotion/spam-mail-classifier | Goodmotion | text-classification | [
"transformers",
"safetensors",
"text-classification",
"spam-detection",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-12-09T15:56:31 | 2024-12-09T19:35:48 | 87 | 2 | ---
license: apache-2.0
tags:
- transformers
- text-classification
- spam-detection
---
# SPAM Mail Classifier
This model is fine-tuned from `microsoft/Multilingual-MiniLM-L12-H384` to classify email subjects as SPAM or NOSPAM.
## Model Details
- **Base model**: `microsoft/Multilingual-MiniLM-L12-H384`
- **Fine-tuned for**: Text classification
- **Number of classes**: 2 (SPAM, NOSPAM)
- **Languages**: Multilingual
## Usage
The snippet below loads the model and classifies a single email subject:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "Goodmotion/spam-mail-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
model_name
)
text = "Félicitations ! Vous avez gagné un iPhone."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
```
### Example with a list of texts
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "Goodmotion/spam-mail-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
texts = [
'Join us for a webinar on AI innovations',
'Urgent: Verify your account immediately.',
'Meeting rescheduled to 3 PM',
'Happy Birthday!',
'Limited time offer: Act now!',
'Join us for a webinar on AI innovations',
'Claim your free prize now!',
'You have unclaimed rewards waiting!',
'Weekly newsletter from Tech World',
'Update on the project status',
'Lunch tomorrow at 12:30?',
'Get rich quick with this amazing opportunity!',
'Invoice for your recent purchase',
'Don\'t forget: Gym session at 6 AM',
'Join us for a webinar on AI innovations',
'bonjour comment allez vous ?',
'Documents suite à notre rendez-vous',
'Valentin Dupond mentioned you in a comment',
'Bolt x Supabase = 🤯',
'Modification site web de la société',
'Image de mise en avant sur les articles',
'Bring new visitors to your site',
'Le Cloud Éthique sans bullshit',
'Remix Newsletter #25: React Router v7',
'Votre essai auprès de X va bientôt prendre fin',
'Introducing a Google Docs integration, styles and more in Claude.ai',
'Carte de crédit sur le point d’expirer sur Cloudflare'
]
inputs = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
outputs = model(**inputs)
# Convert the logits into probabilities with softmax
logits = outputs.logits
probabilities = torch.softmax(logits, dim=1)
# Decode the predicted class for each text
labels = ["NOSPAM", "SPAM"]  # mapping from class indices to labels
results = [
    {"text": text, "label": labels[torch.argmax(prob).item()], "confidence": prob.max().item()}
    for text, prob in zip(texts, probabilities)
]
# Print the results
for result in results:
    print(f"Text: {result['text']}")
    print(f"Result: {result['label']} (Confidence: {result['confidence']:.2%})\n")
```
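For quick experiments, the model can also be wrapped in a `text-classification` pipeline. Note that the returned label strings depend on the model's `id2label` configuration, so they may appear as `LABEL_0`/`LABEL_1` rather than `NOSPAM`/`SPAM`:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Goodmotion/spam-mail-classifier")
print(classifier("Claim your free prize now!"))
# e.g. [{'label': 'SPAM', 'score': 0.98}] (label names depend on the model config)
```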
| [
"TEXT_CLASSIFICATION"
] | [
"ESSAI"
] |
RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2407.19672",
"arxiv:2306.05179",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-10-03T16:54:58 | 2024-10-03T19:44:10 | 85 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SeaLLMs-v3-1.5B-Chat - GGUF
- Model creator: https://huggingface.co/SeaLLMs/
- Original model: https://huggingface.co/SeaLLMs/SeaLLMs-v3-1.5B-Chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SeaLLMs-v3-1.5B-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q2_K.gguf) | Q2_K | 0.63GB |
| [SeaLLMs-v3-1.5B-Chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [SeaLLMs-v3-1.5B-Chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.IQ3_S.gguf) | IQ3_S | 0.71GB |
| [SeaLLMs-v3-1.5B-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q3_K_S.gguf) | Q3_K_S | 0.71GB |
| [SeaLLMs-v3-1.5B-Chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [SeaLLMs-v3-1.5B-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q3_K.gguf) | Q3_K | 0.77GB |
| [SeaLLMs-v3-1.5B-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q3_K_M.gguf) | Q3_K_M | 0.77GB |
| [SeaLLMs-v3-1.5B-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q3_K_L.gguf) | Q3_K_L | 0.82GB |
| [SeaLLMs-v3-1.5B-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.IQ4_XS.gguf) | IQ4_XS | 0.84GB |
| [SeaLLMs-v3-1.5B-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q4_0.gguf) | Q4_0 | 0.87GB |
| [SeaLLMs-v3-1.5B-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.IQ4_NL.gguf) | IQ4_NL | 0.88GB |
| [SeaLLMs-v3-1.5B-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q4_K_S.gguf) | Q4_K_S | 0.88GB |
| [SeaLLMs-v3-1.5B-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q4_K.gguf) | Q4_K | 0.92GB |
| [SeaLLMs-v3-1.5B-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q4_K_M.gguf) | Q4_K_M | 0.92GB |
| [SeaLLMs-v3-1.5B-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q4_1.gguf) | Q4_1 | 0.95GB |
| [SeaLLMs-v3-1.5B-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q5_0.gguf) | Q5_0 | 1.02GB |
| [SeaLLMs-v3-1.5B-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q5_K_S.gguf) | Q5_K_S | 1.02GB |
| [SeaLLMs-v3-1.5B-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q5_K.gguf) | Q5_K | 1.05GB |
| [SeaLLMs-v3-1.5B-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q5_K_M.gguf) | Q5_K_M | 1.05GB |
| [SeaLLMs-v3-1.5B-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q5_1.gguf) | Q5_1 | 1.1GB |
| [SeaLLMs-v3-1.5B-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q6_K.gguf) | Q6_K | 1.19GB |
| [SeaLLMs-v3-1.5B-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-Chat-gguf/blob/main/SeaLLMs-v3-1.5B-Chat.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
language:
- en
- zh
- id
- vi
- th
- ms
- tl
- ta
- jv
tags:
- sea
- multilingual
---
# *SeaLLMs-v3* - Large Language Models for Southeast Asia
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLMs-v3-1.5B-Chat" target="_blank" rel="noopener">Model</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2407.19672" target="_blank" rel="noopener">[NEW] Technical Report</a>
</p>
We introduce **SeaLLMs-v3**, the latest series of the SeaLLMs (Large Language Models for Southeast Asian languages) family. It achieves state-of-the-art performance among models of similar sizes, excelling across a diverse array of tasks such as world knowledge, mathematical reasoning, translation, and instruction following. At the same time, it was specifically enhanced to be more trustworthy, exhibiting reduced hallucination and providing safe responses, particularly in queries closely related to Southeast Asian culture.
## 🔥 Highlights
- State-of-the-art performance compared to open-source models of similar sizes, evaluated across various dimensions such as human exam questions, instruction-following, mathematics, and translation.
- Significantly enhanced instruction-following capability, especially in multi-turn settings.
- Ensures safety in usage with significantly reduced instances of hallucination and sensitivity to local contexts.
## Uses
SeaLLMs is tailored for handling a wide range of languages spoken in the SEA region, including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese.
This page introduces the **SeaLLMs-v3-1.5B-Chat** model, specifically fine-tuned to follow human instructions effectively for task completion, making it directly applicable to your applications.
You may also refer to the [SeaLLMs-v3-7B-Chat](https://huggingface.co/SeaLLMs/SeaLLM3-7B-Chat) model for enhanced performance, although it requires higher computational resources.
### Get started with `Transformers`
To quickly try the model, we show how to conduct inference with `transformers` below. Make sure you have installed the latest transformers version (>4.40).
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"SeaLLMs/SeaLLMs-v3-1.5B-Chat",
torch_dtype=torch.bfloat16,
device_map=device
)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLMs-v3-1.5B-Chat")
# prepare messages to model
prompt = "Hiii How are you?"
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
print(f"Formatted text:\n {text}")
print(f"Model input:\n {model_inputs}")
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=True)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(f"Response:\n {response[0]}")
```
You can also utilize the following code snippet, which uses the streamer `TextStreamer` to enable the model to continue conversing with you:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import TextStreamer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"SeaLLMs/SeaLLMs-v3-1.5B-Chat",
torch_dtype=torch.bfloat16,
device_map=device
)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLMs-v3-1.5B-Chat")
# prepare messages to model
messages = [
{"role": "system", "content": "You are a helpful assistant."},
]
while True:
prompt = input("User:")
messages.append({"role": "user", "content": prompt})
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512, streamer=streamer)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
messages.append({"role": "assistant", "content": response})
```
### Inference with `vllm`
You can also conduct inference with [vllm](https://docs.vllm.ai/en/stable/index.html), which is a fast and easy-to-use library for LLM inference and serving. To use vllm, first install the latest version via `pip install vllm`.
```python
from vllm import LLM, SamplingParams
prompts = [
"Who is the president of US?",
"Can you speak Indonesian?"
]
llm = LLM("SeaLLMs/SeaLLMs-v3-1.5B-Chat", dtype="bfloat16")
sparams = SamplingParams(temperature=0.1, max_tokens=512)
outputs = llm.generate(prompts, sparams)
# print out the model response
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt}\nResponse: {generated_text}\n\n")
```
### Bias, Risks, and Limitations
<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>
> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
## Evaluation
We briefly compare SeaLLMs-v3-1.5B-Chat with models of similar sizes on the M3Exam benchmark.
[M3Exam](https://arxiv.org/abs/2306.05179) consists of local exam questions collected from each country. It reflects the model's world knowledge (e.g., with language or social science subjects) and reasoning abilities (e.g., with mathematics or natural science subjects).
| Model | en | zh | id | th | vi | avg | avg_sea |
|--------------------------|------|------|------|------|------|------|---------|
| gemma-2b-it | 44.1 | 37.4 | 31.5 | 28.2 | 35.8 | 35.4 | 31.8 |
| Sailor-1.8B-Chat | 43.8 | 35.9 | 34.2 | 32.3 | 37.5 | 36.7 | 34.7 |
| Sailor-4B-Chat | 54.1 | 48.1 | 40.7 | 35.6 | 42.5 | 44.2 | 39.6 |
| Qwen2-1.5B-Instruct | 63.4 | 75.3 | 41.2 | 41.2 | 47.2 | 53.7 | 43.2 |
| **SeaLLMs-v3-1.5B-Chat** | 61.9 | 74.2 | 43.2 | 42.4 | 48.7 | 54.1 | 44.7 |
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows:
```
@article{damonlp2024seallm3,
author = {Wenxuan Zhang*, Hou Pong Chan*, Yiran Zhao*, Mahani Aljunied*,
Jianyu Wang*, Chaoqun Liu, Yue Deng, Zhiqiang Hu, Weiwen Xu,
Yew Ken Chia, Xin Li, Lidong Bing},
title = {SeaLLMs 3: Open Foundation and Chat Multilingual Large Language Models for Southeast Asian Languages},
year = {2024},
url = {https://arxiv.org/abs/2407.19672}
}
```
Corresponding Author: [email protected]
| [
"TRANSLATION"
] | [
"CHIA"
] |
HPAI-BSC/Llama3.1-Aloe-Beta-70B | HPAI-BSC | question-answering | [
"transformers",
"safetensors",
"biology",
"medical",
"healthcare",
"question-answering",
"en",
"dataset:HPAI-BSC/Aloe-Beta-General-Collection",
"dataset:HPAI-BSC/chain-of-diagnosis",
"dataset:HPAI-BSC/MedS-Ins",
"dataset:HPAI-BSC/ultramedical",
"dataset:HPAI-BSC/pubmedqa-cot-llama31",
"dataset:HPAI-BSC/medqa-cot-llama31",
"dataset:HPAI-BSC/medmcqa-cot-llama31",
"dataset:HPAI-BSC/headqa-cot-llama31",
"dataset:HPAI-BSC/MMLU-medical-cot-llama31",
"dataset:HPAI-BSC/Polymed-QA",
"arxiv:2405.01886",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:finetune:meta-llama/Llama-3.1-70B",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | 2024-10-30T17:08:05 | 2025-01-22T14:19:40 | 85 | 7 | ---
base_model:
- meta-llama/Llama-3.1-70B
datasets:
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/chain-of-diagnosis
- HPAI-BSC/MedS-Ins
- HPAI-BSC/ultramedical
- HPAI-BSC/pubmedqa-cot-llama31
- HPAI-BSC/medqa-cot-llama31
- HPAI-BSC/medmcqa-cot-llama31
- HPAI-BSC/headqa-cot-llama31
- HPAI-BSC/MMLU-medical-cot-llama31
- HPAI-BSC/Polymed-QA
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/Aloe-Beta-General-Collection
language:
- en
library_name: transformers
license: llama3.1
pipeline_tag: question-answering
tags:
- biology
- medical
- healthcare
---
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/aFx4k7UaJqvD-cVGvoHlL.png">
<img alt="aloe_70b" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/aFx4k7UaJqvD-cVGvoHlL.png" width=50%>
</picture>
</p>
<h1 align="center">
Aloe: A Family of Fine-tuned Open Healthcare LLMs
</h1>
---
Llama3.1-Aloe-Beta-70B is an **open healthcare LLM** achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in four model sizes: [7B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-7B/), [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B), [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B), and [72B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-72B). All models are trained using the same recipe, on top of two different families of models: Llama3.1 and Qwen2.5.
Aloe is trained on 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)), the 7B and 8B versions get close to the performance of closed models like MedPalm-2 and GPT-4. With the same RAG system, Llama3.1-Aloe-Beta-70B and Qwen2.5-Aloe-Beta-72B outperform those private alternatives, producing state-of-the-art results.
# Aloe-70B-Beta

**Aloe-70B-Beta** is the latest iteration in the **Aloe family**, building and improving on the success of its predecessor, [Aloe-8B-Alpha](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha) in a larger model size.
Beta more than **triples** the training data used by Alpha, for a total of **1.8B tokens**, including a wider variety of medical tasks and instructions (e.g., text summarization, explanation, diagnosis, text classification, treatment recommendation, ...).

To mitigate catastrophic forgetting and enable the model to effectively learn new capabilities like **function calling**, we incorporated a diverse set of high-quality general-purpose data constituting 20% of the total training set. The curated data includes some of the highest-quality content available across a range of topics, including mathematics, programming, STEM, and very long instructions (> 8k tokens), to enrich the model's adaptability and comprehension across diverse domains.
Beta also boosts the alignment and safety stages with respect to Alpha. This includes a [medical preference dataset](https://huggingface.co/datasets/TsinghuaC3I/UltraMedical-Preference), as well as the red-teaming dataset (available soon).
Complete training details, model merging configurations, and all training data (including synthetically generated data) can be found below. This includes [the RAG system](https://github.com/HPAI-BSC/prompt_engine) that was developed to test Aloe Beta in a deployment setup. Aloe comes with a healthcare-specific risk assessment to facilitate the safe use and deployment of such systems.
## Model Details
### Model Description
- **Developed by:** [HPAI](https://hpai.bsc.es/)
- **Model type:** Causal decoder-only transformer language model
- **Language(s) (NLP):** English (capable but not formally evaluated on other languages)
- **License:** This model is based on Meta Llama 3.1 70B and is governed by the [Meta Llama 3 License](https://www.llama.com/llama3_1/license/). All our modifications are available with a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license, making the Aloe Beta models **compatible with commercial use**.
- **Base model :** [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B)
- **Paper:** (more coming soon)
- **RAG Repository:** https://github.com/HPAI-BSC/prompt_engine
## Model Performance
Aloe Beta has been tested on the most popular healthcare QA datasets, with and without the **Medprompt** inference technique. Results show competitive performance, achieving SOTA within models of the same size.

The Beta model has been developed to excel in several different medical tasks. For this reason, we evaluated the model in many different medical benchmarks:


We also compared the performance of the model in the general domain, using the OpenLLM Leaderboard benchmark. Aloe-Beta achieves results competitive with the current SOTA general models on the most widely used general benchmarks and outperforms the medical models:

## Uses
### Direct Use
We encourage the use of Aloe for research purposes, as a stepping stone to build better foundational models for healthcare. In production, Aloe should always be used under the supervision of a human expert.
### Out-of-Scope Use
These models are not to be used for clinical practice, medical diagnosis, or any other form of direct or indirect healthcare advice. Models are prone to error and can produce toxic content. The use of Aloe models for activities harmful to individuals, such as spam, fraud, or impersonation, is strictly prohibited. Minors should not be left alone to interact with Aloe without supervision.
## Bias, Risks, and Limitations
Aloe can produce toxic content under the appropriate prompts, and it includes multiple undesirable biases. While significant efforts were conducted to mitigate this (see Alignment details below), model safety cannot be fully guaranteed. We avoid the use of all personal data in our training.
We identify at least three risk cases specific of healthcare LLMs:
- Healthcare professional impersonation, a fraudulent behaviour which currently generates billions of dollars in [profit](https://www.justice.gov/opa/pr/justice-department-charges-dozens-12-billion-health-care-fraud). A model such as Aloe could be used to increase the efficacy of such deceiving activities, making them more widespread. The main preventive actions are public literacy on the unreliability of digitised information and the importance of medical registration, and legislation enforcing AI-generated content disclaimers.
- Medical decision-making without professional supervision. While this is already an issue in modern societies (eg self-medication) a model such as Aloe, capable of producing high-quality conversational data, can facilitate self-delusion, particularly in the presence of sycophancy. By producing tailored responses, it can also be used to generate actionable answers. Public literacy on the dangers of self-diagnosis is one of the main defenses, together with the introduction of disclaimers and warnings on the models' outputs.
- Access to information on dangerous substances or procedures. While the literature on sensitive content can already be found on different sources (eg libraries, the internet, dark web), LLMs can centralize such access, making it nearly impossible to control the flow of such information. Model alignment can help in that regard, but so far the effects remain insufficient, as jailbreaking methods still overcome it.
<!---
Table below shows the performance of Aloe at several AI safety tasks:
TO BE UPDATED
<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/T6Jblpf1kmTkM04K716rM.png" width="95%">
We analyzed the safety and robustness of the model using red teaming techniques. We designed a benchmark using different types of attacks and analyzed the performance of Aloe and some extra models, and we confirm that our model is aligned properly and successfully resisting most attacks:


-->
## How to Get Started with the Model
Use the code below to get started with the model. You can run conversational inference either with the Transformers pipeline abstraction or with the Auto classes and the `generate()` function. Examples of both follow.
#### Transformers pipeline
```python
import transformers
import torch
model_id = "HPAI-BSC/Llama3.1-Aloe-Beta-70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center(BSC). You are to be a helpful, respectful, and honest assistant."},
{"role": "user", "content": "Hello."},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "HPAI-BSC/Llama3.1-Aloe-Beta-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center(BSC). You are to be a helpful, respectful, and honest assistant."},
{"role": "user", "content": "Hello"},
]
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Training Details
### Supervised fine-tuning
SFT was performed on top of Llama 3.1 using [axolotl](https://github.com/axolotl-ai-cloud/axolotl).
We used DeepSpeed ZeRO-3 distributed training on the following hardware:
* 8B: 32x NVIDIA Hopper H100 64GB of the *Marenostrum 5*.
* 70B: 64x NVIDIA Hopper H100 64GB of the *Marenostrum 5*.
<!---
^^^ TO BE COMPLETED AND DETAILED ^^^
-->
#### Training Data
The training set consists of around 1.8B tokens and comprises three types of data (a brief loading sketch follows the list):
- Medical domain datasets. Includes data from 20 different medical tasks.
- [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
- [HPAI-BSC/chain-of-diagnosis](https://huggingface.co/datasets/HPAI-BSC/chain-of-diagnosis)
- [HPAI-BSC/MedS-Ins](https://huggingface.co/datasets/HPAI-BSC/MedS-Ins)
- [HPAI-BSC/ultramedical](https://huggingface.co/datasets/HPAI-BSC/ultramedical)
- Synthetic data. We expanded our training data by generating high-quality answers using Llama3.1-70B:
- [HPAI-BSC/pubmedqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/pubmedqa-cot-llama31)
- [HPAI-BSC/medqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medqa-cot-llama31)
- [HPAI-BSC/medmcqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medmcqa-cot-llama31)
- [HPAI-BSC/headqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/headqa-cot-llama31)
- [HPAI-BSC/MMLU-medical-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/MMLU-medical-cot-llama31)
- [HPAI-BSC/Polymed-QA](https://huggingface.co/datasets/HPAI-BSC/Polymed-QA)
- General data. It includes maths, STEM, code, function calling, and very long-context instructions.
- [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
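As a rough illustration of how such a mixture can be assembled, the sketch below loads two of the datasets listed above with the Hugging Face `datasets` library and concatenates them. The split names and the assumption that both datasets share the same chat-format columns are ours, not the exact Aloe preprocessing.
```python
# Minimal sketch (not the exact Aloe preprocessing): load two of the listed
# datasets and combine them into one training mixture. The split names are
# assumptions, and concatenation requires both datasets to share the same columns.
from datasets import load_dataset, concatenate_datasets

medical = load_dataset("HPAI-BSC/medqa-cot-llama31", split="train")
general = load_dataset("HPAI-BSC/Aloe-Beta-General-Collection", split="train")

mixture = concatenate_datasets([medical, general]).shuffle(seed=42)
print(f"{len(mixture)} training examples")
```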
#### Training parameters
- Epochs: 4
- Sequence length: 16384
- Optimizer: adamw_torch
- Learning rate: 2e-5
- Learning rate scheduler: cosine
- Warmup steps: 100
- Weight decay: 0
- Gradient checkpointing
- Zero 3
- Total batch size: 128
- Batch size per device: 1
- Gradient accumulation steps: 2
### Model Merging
The trained model was merged with the Llama-3.1-Instruct model using the DARE_TIES technique. [Mergekit](https://github.com/arcee-ai/mergekit) was used to conduct the merge.
### Model Alignment
The model is aligned using the Direct Preference Optimization (DPO) technique through a two-step process:
1. General DPO Alignment: This step uses a dataset combining medical, general preference, and safety data. We used our dataset [HPAI-BSC/Aloe-Beta-DPO](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-DPO). We split the dataset into five parts, and the model was trained iteratively for one epoch on each chunk. We used a learning rate of 2e-7.
2. Red-Teaming Alignment: This step further fine-tunes the model to resist a variety of potential attacks, enhancing its robustness and security. The dataset will be shared soon. In this stage, we set the learning rate to 1e-7.
<!---
^^^ LINKS TO DPO DATA (DPO added, missing the RT^^^
-->
We used the [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF) library. We aligned the model using 25x NVIDIA Hopper H100 64GB of the *Marenostrum 5*. Common hyperparameters:
- Sequence length: 4096
- Optimizer: Fused adam
- Total batch size: 100
- Batch size per device: 1
- Gradient accumulation steps: 4
- Beta: 0.1
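To make the chunked, iterative DPO schedule described above concrete, the sketch below splits the preference dataset into five shards and walks through them one epoch at a time. It only illustrates the data side; the actual training ran on OpenRLHF, and the trainer call shown is a placeholder.
```python
# Sketch of the five-chunk iterative DPO schedule (data side only). The split name
# is an assumption and `train_dpo_epoch` is a placeholder, not an OpenRLHF API.
from datasets import load_dataset

preference_data = load_dataset("HPAI-BSC/Aloe-Beta-DPO", split="train").shuffle(seed=0)

num_chunks = 5
for i in range(num_chunks):
    shard = preference_data.shard(num_shards=num_chunks, index=i)
    # One DPO epoch (beta=0.1, lr=2e-7) on this shard before moving to the next.
    # train_dpo_epoch(model, ref_model, shard)
    print(f"chunk {i + 1}/{num_chunks}: {len(shard)} preference pairs")
```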
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
- [ACI-BENCH](https://github.com/wyim/aci-bench)
- [MTS-Dialog](https://github.com/abachaa/MTS-Dialog)
- [MedText](https://huggingface.co/datasets/BI55/MedText)
- [Medical Text classification](https://www.kaggle.com/datasets/chaitanyakck/medical-text/data)
- [OLAPH](https://github.com/dmis-lab/OLAPH)
- CareQA Open
- [MedDialog](https://huggingface.co/datasets/bigbio/meddialog)
- [MEDIQA QA](https://huggingface.co/datasets/bigbio/mediqa_qa)
- [Meddialog Qsumm](https://huggingface.co/datasets/lighteval/med_dialog)
- [Biored](https://huggingface.co/datasets/YufeiHFUT/BioRED_all_info)
- [MIMIC-III](https://huggingface.co/datasets/dmacres/mimiciii-hospitalcourse-meta)
- [Medical Prescription](https://huggingface.co/datasets/devlocalhost/prescription-full)
- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [CareQA](https://huggingface.co/datasets/HPAI-BSC/CareQA)
- [Open LLM Leaderboard 2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
<!---
^^^ CAREQA Open link MISSING ^^^
-->
#### Metrics
- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.
- ROUGE-1: measures the overlap of unigrams between the system output and the gold standard (see the sketch below).
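The sketch below gives simplified reference implementations of both metrics: exact-match accuracy for multiple-choice answers and a plain unigram-overlap ROUGE-1 F1. The official evaluation used standard scorers, so treat this as an illustration only.
```python
# Simplified reference implementations of the two metrics above (illustration only).
from collections import Counter

def accuracy(predictions, references):
    """Exact-match accuracy for multiple-choice answers (e.g. 'A', 'B', 'C', 'D')."""
    correct = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return correct / len(references)

def rouge1_f1(prediction, reference):
    """Unigram-overlap ROUGE-1 F1 between a system output and the gold standard."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(accuracy(["A", "C"], ["A", "B"]))                                  # 0.5
print(round(rouge1_f1("the patient has flu", "patient has a flu"), 3))   # 0.75
```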
<!---
^^^ MORE METRICS MISSING ^^^
-->
#### Summary
To compare Aloe with the most competitive open models (both general-purpose and healthcare-specific), we use popular healthcare datasets (PubMedQA, MedMCQA, MedQA, and MMLU for six medical tasks only), together with the new and highly reliable CareQA. However, while MCQA benchmarks provide valuable insights into a model's ability to handle structured queries, they fall short of representing the full range of challenges faced in medical practice. Building upon this idea, Aloe-Beta represents the next step in the evolution of the Aloe Family, designed to broaden the scope beyond the multiple-choice question-answering tasks that define Aloe-Alpha.
Benchmark results indicate that the training conducted on Aloe has boosted its performance, achieving results comparable with SOTA models such as Llama3-OpenBioLLM, Llama3-Med42, MedPalm-2 and GPT-4. Llama3.1-Aloe-Beta-70B also outperforms the other existing medical models on the OpenLLM Leaderboard and in the evaluation of other medical tasks such as Medical Factuality and Medical Treatment recommendations, among others. All these results make Llama3.1-Aloe-Beta-70B one of the best existing models for healthcare.
With the help of prompting techniques, the performance of Llama3.1-Aloe-Beta-70B improves significantly. Medprompting in particular provides a 4% increase in reported accuracy, after which Llama3.1-Aloe-Beta-70B outperforms all existing models that do not use RAG.
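Much of the Medprompt gain comes from sampling several chain-of-thought answers over shuffled answer choices and taking a majority vote. The sketch below shows only that voting step; the few-shot retrieval, prompt construction, and model call (`generate_answer`) are placeholders, not the code from the prompt_engine repository.
```python
# Choice-shuffle ensembling in the style of Medprompt. `generate_answer` is a
# placeholder that is assumed to return a single option letter such as "B".
import random
from collections import Counter

def medprompt_vote(question, options, generate_answer, n_samples=5, seed=0):
    rng = random.Random(seed)
    votes = []
    for _ in range(n_samples):
        order = list(options)
        rng.shuffle(order)  # shuffle choices to reduce position bias
        letters = "ABCDE"[: len(order)]
        prompt = question + "\n" + "\n".join(f"{l}. {o}" for l, o in zip(letters, order))
        picked_letter = generate_answer(prompt)
        votes.append(order[letters.index(picked_letter)])  # vote on option text, not letter
    return Counter(votes).most_common(1)[0][0]
```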
## Environmental Impact
- **Hardware Type:** 64xH100
- **Hours used (8B):** 544 GPU hours
- **Hours used (70B):** 4500 GPU hours
- **Hardware Provider:** Barcelona Supercomputing Center (BSC)
- **Compute Region:** Spain
- **Carbon Emitted:** 34.1 kg of CO2
<!---
^^^ ARE CARBON EMISSIONS FOR BOTH? ^^^
-->
## Authors
Aloe Beta has been developed by the [High Performance Artificial Intelligence](https://hpai.bsc.es/) research group from the [Barcelona Supercomputing Center - BSC](https://www.bsc.es/). Main authors are [Jordi Bayarri Planas](https://huggingface.co/JordiBayarri), [Ashwin Kumar Gururajan](https://huggingface.co/G-AshwinKumar) and [Dario Garcia-Gasulla](https://huggingface.co/dariog). Red teaming efforts were led by Adrian Tormos.
mailto:[email protected]
## Citations
<!---
Add the prompt engine paper below
-->
If you use this repository in a published work, please cite the corresponding papers as the source:
```
@misc{gururajan2024aloe,
title={Aloe: A Family of Fine-tuned Open Healthcare LLMs},
author={Ashwin Kumar Gururajan and Enrique Lopez-Cuena and Jordi Bayarri-Planas and Adrian Tormos and Daniel Hinjos and Pablo Bernabeu-Perez and Anna Arias-Duart and Pablo Agustin Martin-Torres and Lucia Urcelay-Ganzabal and Marta Gonzalez-Mallo and Sergio Alvarez-Napagao and Eduard Ayguadé-Parra and Ulises Cortés and Dario Garcia-Gasulla},
year={2024},
eprint={2405.01886},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | [
"BIORED",
"MEDIQA QA",
"MEDDIALOG",
"MEDQA",
"PUBMEDQA"
] |
DecisionOptimizationSystem/DeepFeatEmbeddingLargeContext | DecisionOptimizationSystem | feature-extraction | [
"sentence-transformers",
"pytorch",
"coreml",
"onnx",
"safetensors",
"bert",
"finetuner",
"mteb",
"feature-extraction",
"sentence-similarity",
"alibi",
"custom_code",
"en",
"dataset:allenai/c4",
"arxiv:2108.12409",
"arxiv:2310.19923",
"arxiv:2307.11224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | 2023-11-05T18:23:43 | 2023-11-05T18:23:44 | 84 | 1 | ---
datasets:
- allenai/c4
language: en
license: apache-2.0
tags:
- finetuner
- mteb
- sentence-transformers
- feature-extraction
- sentence-similarity
- alibi
inference: false
model-index:
- name: jina-embedding-b-en-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.73134328358209
- type: ap
value: 37.765427081831035
- type: f1
value: 68.79367444339518
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 88.544275
- type: ap
value: 84.61328675662887
- type: f1
value: 88.51879035862375
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.263999999999996
- type: f1
value: 43.778759656699435
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.693
- type: map_at_10
value: 35.487
- type: map_at_100
value: 36.862
- type: map_at_1000
value: 36.872
- type: map_at_3
value: 30.049999999999997
- type: map_at_5
value: 32.966
- type: mrr_at_1
value: 21.977
- type: mrr_at_10
value: 35.565999999999995
- type: mrr_at_100
value: 36.948
- type: mrr_at_1000
value: 36.958
- type: mrr_at_3
value: 30.121
- type: mrr_at_5
value: 33.051
- type: ndcg_at_1
value: 21.693
- type: ndcg_at_10
value: 44.181
- type: ndcg_at_100
value: 49.982
- type: ndcg_at_1000
value: 50.233000000000004
- type: ndcg_at_3
value: 32.830999999999996
- type: ndcg_at_5
value: 38.080000000000005
- type: precision_at_1
value: 21.693
- type: precision_at_10
value: 7.248
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 13.632
- type: precision_at_5
value: 10.725
- type: recall_at_1
value: 21.693
- type: recall_at_10
value: 72.475
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 40.896
- type: recall_at_5
value: 53.627
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.39242428696777
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.675626784714
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.247725694904034
- type: mrr
value: 74.91359978894604
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.68003802970496
- type: cos_sim_spearman
value: 81.23438110096286
- type: euclidean_pearson
value: 81.87462986142582
- type: euclidean_spearman
value: 81.23438110096286
- type: manhattan_pearson
value: 81.61162566600755
- type: manhattan_spearman
value: 81.11329400456184
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.01298701298701
- type: f1
value: 83.31690714969382
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.050108150972086
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.15731442819715
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.391999999999996
- type: map_at_10
value: 42.597
- type: map_at_100
value: 44.07
- type: map_at_1000
value: 44.198
- type: map_at_3
value: 38.957
- type: map_at_5
value: 40.961
- type: mrr_at_1
value: 37.196
- type: mrr_at_10
value: 48.152
- type: mrr_at_100
value: 48.928
- type: mrr_at_1000
value: 48.964999999999996
- type: mrr_at_3
value: 45.446
- type: mrr_at_5
value: 47.205999999999996
- type: ndcg_at_1
value: 37.196
- type: ndcg_at_10
value: 49.089
- type: ndcg_at_100
value: 54.471000000000004
- type: ndcg_at_1000
value: 56.385
- type: ndcg_at_3
value: 43.699
- type: ndcg_at_5
value: 46.22
- type: precision_at_1
value: 37.196
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 20.839
- type: precision_at_5
value: 14.936
- type: recall_at_1
value: 31.391999999999996
- type: recall_at_10
value: 61.876
- type: recall_at_100
value: 84.214
- type: recall_at_1000
value: 95.985
- type: recall_at_3
value: 46.6
- type: recall_at_5
value: 53.588
- type: map_at_1
value: 29.083
- type: map_at_10
value: 38.812999999999995
- type: map_at_100
value: 40.053
- type: map_at_1000
value: 40.188
- type: map_at_3
value: 36.111
- type: map_at_5
value: 37.519000000000005
- type: mrr_at_1
value: 36.497
- type: mrr_at_10
value: 44.85
- type: mrr_at_100
value: 45.546
- type: mrr_at_1000
value: 45.593
- type: mrr_at_3
value: 42.686
- type: mrr_at_5
value: 43.909
- type: ndcg_at_1
value: 36.497
- type: ndcg_at_10
value: 44.443
- type: ndcg_at_100
value: 48.979
- type: ndcg_at_1000
value: 51.154999999999994
- type: ndcg_at_3
value: 40.660000000000004
- type: ndcg_at_5
value: 42.193000000000005
- type: precision_at_1
value: 36.497
- type: precision_at_10
value: 8.433
- type: precision_at_100
value: 1.369
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 19.894000000000002
- type: precision_at_5
value: 13.873
- type: recall_at_1
value: 29.083
- type: recall_at_10
value: 54.313
- type: recall_at_100
value: 73.792
- type: recall_at_1000
value: 87.629
- type: recall_at_3
value: 42.257
- type: recall_at_5
value: 47.066
- type: map_at_1
value: 38.556000000000004
- type: map_at_10
value: 50.698
- type: map_at_100
value: 51.705
- type: map_at_1000
value: 51.768
- type: map_at_3
value: 47.848
- type: map_at_5
value: 49.358000000000004
- type: mrr_at_1
value: 43.95
- type: mrr_at_10
value: 54.191
- type: mrr_at_100
value: 54.852999999999994
- type: mrr_at_1000
value: 54.885
- type: mrr_at_3
value: 51.954
- type: mrr_at_5
value: 53.13
- type: ndcg_at_1
value: 43.95
- type: ndcg_at_10
value: 56.516
- type: ndcg_at_100
value: 60.477000000000004
- type: ndcg_at_1000
value: 61.746
- type: ndcg_at_3
value: 51.601
- type: ndcg_at_5
value: 53.795
- type: precision_at_1
value: 43.95
- type: precision_at_10
value: 9.009
- type: precision_at_100
value: 1.189
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.989
- type: precision_at_5
value: 15.473
- type: recall_at_1
value: 38.556000000000004
- type: recall_at_10
value: 70.159
- type: recall_at_100
value: 87.132
- type: recall_at_1000
value: 96.16
- type: recall_at_3
value: 56.906
- type: recall_at_5
value: 62.332
- type: map_at_1
value: 24.238
- type: map_at_10
value: 32.5
- type: map_at_100
value: 33.637
- type: map_at_1000
value: 33.719
- type: map_at_3
value: 30.026999999999997
- type: map_at_5
value: 31.555
- type: mrr_at_1
value: 26.328000000000003
- type: mrr_at_10
value: 34.44
- type: mrr_at_100
value: 35.455999999999996
- type: mrr_at_1000
value: 35.521
- type: mrr_at_3
value: 32.034
- type: mrr_at_5
value: 33.565
- type: ndcg_at_1
value: 26.328000000000003
- type: ndcg_at_10
value: 37.202
- type: ndcg_at_100
value: 42.728
- type: ndcg_at_1000
value: 44.792
- type: ndcg_at_3
value: 32.368
- type: ndcg_at_5
value: 35.008
- type: precision_at_1
value: 26.328000000000003
- type: precision_at_10
value: 5.7059999999999995
- type: precision_at_100
value: 0.8880000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.672
- type: precision_at_5
value: 9.74
- type: recall_at_1
value: 24.238
- type: recall_at_10
value: 49.829
- type: recall_at_100
value: 75.21
- type: recall_at_1000
value: 90.521
- type: recall_at_3
value: 36.867
- type: recall_at_5
value: 43.241
- type: map_at_1
value: 15.378
- type: map_at_10
value: 22.817999999999998
- type: map_at_100
value: 23.977999999999998
- type: map_at_1000
value: 24.108
- type: map_at_3
value: 20.719
- type: map_at_5
value: 21.889
- type: mrr_at_1
value: 19.03
- type: mrr_at_10
value: 27.022000000000002
- type: mrr_at_100
value: 28.011999999999997
- type: mrr_at_1000
value: 28.096
- type: mrr_at_3
value: 24.855
- type: mrr_at_5
value: 26.029999999999998
- type: ndcg_at_1
value: 19.03
- type: ndcg_at_10
value: 27.526
- type: ndcg_at_100
value: 33.040000000000006
- type: ndcg_at_1000
value: 36.187000000000005
- type: ndcg_at_3
value: 23.497
- type: ndcg_at_5
value: 25.334
- type: precision_at_1
value: 19.03
- type: precision_at_10
value: 4.963
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.360000000000001
- type: precision_at_5
value: 8.134
- type: recall_at_1
value: 15.378
- type: recall_at_10
value: 38.061
- type: recall_at_100
value: 61.754
- type: recall_at_1000
value: 84.259
- type: recall_at_3
value: 26.788
- type: recall_at_5
value: 31.326999999999998
- type: map_at_1
value: 27.511999999999997
- type: map_at_10
value: 37.429
- type: map_at_100
value: 38.818000000000005
- type: map_at_1000
value: 38.924
- type: map_at_3
value: 34.625
- type: map_at_5
value: 36.064
- type: mrr_at_1
value: 33.300999999999995
- type: mrr_at_10
value: 43.036
- type: mrr_at_100
value: 43.894
- type: mrr_at_1000
value: 43.936
- type: mrr_at_3
value: 40.825
- type: mrr_at_5
value: 42.028
- type: ndcg_at_1
value: 33.300999999999995
- type: ndcg_at_10
value: 43.229
- type: ndcg_at_100
value: 48.992000000000004
- type: ndcg_at_1000
value: 51.02100000000001
- type: ndcg_at_3
value: 38.794000000000004
- type: ndcg_at_5
value: 40.65
- type: precision_at_1
value: 33.300999999999995
- type: precision_at_10
value: 7.777000000000001
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 18.351
- type: precision_at_5
value: 12.762
- type: recall_at_1
value: 27.511999999999997
- type: recall_at_10
value: 54.788000000000004
- type: recall_at_100
value: 79.105
- type: recall_at_1000
value: 92.49199999999999
- type: recall_at_3
value: 41.924
- type: recall_at_5
value: 47.026
- type: map_at_1
value: 24.117
- type: map_at_10
value: 33.32
- type: map_at_100
value: 34.677
- type: map_at_1000
value: 34.78
- type: map_at_3
value: 30.233999999999998
- type: map_at_5
value: 31.668000000000003
- type: mrr_at_1
value: 29.566
- type: mrr_at_10
value: 38.244
- type: mrr_at_100
value: 39.245000000000005
- type: mrr_at_1000
value: 39.296
- type: mrr_at_3
value: 35.864000000000004
- type: mrr_at_5
value: 36.919999999999995
- type: ndcg_at_1
value: 29.566
- type: ndcg_at_10
value: 39.127
- type: ndcg_at_100
value: 44.989000000000004
- type: ndcg_at_1000
value: 47.189
- type: ndcg_at_3
value: 34.039
- type: ndcg_at_5
value: 35.744
- type: precision_at_1
value: 29.566
- type: precision_at_10
value: 7.385999999999999
- type: precision_at_100
value: 1.204
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.286
- type: precision_at_5
value: 11.484
- type: recall_at_1
value: 24.117
- type: recall_at_10
value: 51.559999999999995
- type: recall_at_100
value: 77.104
- type: recall_at_1000
value: 91.79899999999999
- type: recall_at_3
value: 36.82
- type: recall_at_5
value: 41.453
- type: map_at_1
value: 25.17625
- type: map_at_10
value: 34.063916666666664
- type: map_at_100
value: 35.255500000000005
- type: map_at_1000
value: 35.37275
- type: map_at_3
value: 31.351666666666667
- type: map_at_5
value: 32.80608333333333
- type: mrr_at_1
value: 29.59783333333333
- type: mrr_at_10
value: 38.0925
- type: mrr_at_100
value: 38.957249999999995
- type: mrr_at_1000
value: 39.01608333333333
- type: mrr_at_3
value: 35.77625
- type: mrr_at_5
value: 37.04991666666667
- type: ndcg_at_1
value: 29.59783333333333
- type: ndcg_at_10
value: 39.343666666666664
- type: ndcg_at_100
value: 44.488249999999994
- type: ndcg_at_1000
value: 46.83358333333334
- type: ndcg_at_3
value: 34.69708333333333
- type: ndcg_at_5
value: 36.75075
- type: precision_at_1
value: 29.59783333333333
- type: precision_at_10
value: 6.884083333333332
- type: precision_at_100
value: 1.114
- type: precision_at_1000
value: 0.15108333333333332
- type: precision_at_3
value: 15.965250000000003
- type: precision_at_5
value: 11.246500000000001
- type: recall_at_1
value: 25.17625
- type: recall_at_10
value: 51.015999999999984
- type: recall_at_100
value: 73.60174999999998
- type: recall_at_1000
value: 89.849
- type: recall_at_3
value: 37.88399999999999
- type: recall_at_5
value: 43.24541666666666
- type: map_at_1
value: 24.537
- type: map_at_10
value: 31.081999999999997
- type: map_at_100
value: 32.042
- type: map_at_1000
value: 32.141
- type: map_at_3
value: 29.137
- type: map_at_5
value: 30.079
- type: mrr_at_1
value: 27.454
- type: mrr_at_10
value: 33.694
- type: mrr_at_100
value: 34.579
- type: mrr_at_1000
value: 34.649
- type: mrr_at_3
value: 32.004
- type: mrr_at_5
value: 32.794000000000004
- type: ndcg_at_1
value: 27.454
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.641
- type: ndcg_at_1000
value: 42.105
- type: ndcg_at_3
value: 31.276
- type: ndcg_at_5
value: 32.65
- type: precision_at_1
value: 27.454
- type: precision_at_10
value: 5.337
- type: precision_at_100
value: 0.8250000000000001
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 13.241
- type: precision_at_5
value: 8.895999999999999
- type: recall_at_1
value: 24.537
- type: recall_at_10
value: 44.324999999999996
- type: recall_at_100
value: 65.949
- type: recall_at_1000
value: 84.017
- type: recall_at_3
value: 33.857
- type: recall_at_5
value: 37.316
- type: map_at_1
value: 17.122
- type: map_at_10
value: 24.32
- type: map_at_100
value: 25.338
- type: map_at_1000
value: 25.462
- type: map_at_3
value: 22.064
- type: map_at_5
value: 23.322000000000003
- type: mrr_at_1
value: 20.647
- type: mrr_at_10
value: 27.858
- type: mrr_at_100
value: 28.743999999999996
- type: mrr_at_1000
value: 28.819
- type: mrr_at_3
value: 25.769
- type: mrr_at_5
value: 26.964
- type: ndcg_at_1
value: 20.647
- type: ndcg_at_10
value: 28.849999999999998
- type: ndcg_at_100
value: 33.849000000000004
- type: ndcg_at_1000
value: 36.802
- type: ndcg_at_3
value: 24.799
- type: ndcg_at_5
value: 26.682
- type: precision_at_1
value: 20.647
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.906
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 11.769
- type: precision_at_5
value: 8.486
- type: recall_at_1
value: 17.122
- type: recall_at_10
value: 38.999
- type: recall_at_100
value: 61.467000000000006
- type: recall_at_1000
value: 82.716
- type: recall_at_3
value: 27.601
- type: recall_at_5
value: 32.471
- type: map_at_1
value: 24.396
- type: map_at_10
value: 33.415
- type: map_at_100
value: 34.521
- type: map_at_1000
value: 34.631
- type: map_at_3
value: 30.703999999999997
- type: map_at_5
value: 32.166
- type: mrr_at_1
value: 28.825
- type: mrr_at_10
value: 37.397000000000006
- type: mrr_at_100
value: 38.286
- type: mrr_at_1000
value: 38.346000000000004
- type: mrr_at_3
value: 35.028
- type: mrr_at_5
value: 36.32
- type: ndcg_at_1
value: 28.825
- type: ndcg_at_10
value: 38.656
- type: ndcg_at_100
value: 43.856
- type: ndcg_at_1000
value: 46.31
- type: ndcg_at_3
value: 33.793
- type: ndcg_at_5
value: 35.909
- type: precision_at_1
value: 28.825
- type: precision_at_10
value: 6.567
- type: precision_at_100
value: 1.0330000000000001
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 15.516
- type: precision_at_5
value: 10.914
- type: recall_at_1
value: 24.396
- type: recall_at_10
value: 50.747
- type: recall_at_100
value: 73.477
- type: recall_at_1000
value: 90.801
- type: recall_at_3
value: 37.1
- type: recall_at_5
value: 42.589
- type: map_at_1
value: 25.072
- type: map_at_10
value: 34.307
- type: map_at_100
value: 35.725
- type: map_at_1000
value: 35.943999999999996
- type: map_at_3
value: 30.906
- type: map_at_5
value: 32.818000000000005
- type: mrr_at_1
value: 29.644
- type: mrr_at_10
value: 38.673
- type: mrr_at_100
value: 39.459
- type: mrr_at_1000
value: 39.527
- type: mrr_at_3
value: 35.771
- type: mrr_at_5
value: 37.332
- type: ndcg_at_1
value: 29.644
- type: ndcg_at_10
value: 40.548
- type: ndcg_at_100
value: 45.678999999999995
- type: ndcg_at_1000
value: 48.488
- type: ndcg_at_3
value: 34.887
- type: ndcg_at_5
value: 37.543
- type: precision_at_1
value: 29.644
- type: precision_at_10
value: 7.688000000000001
- type: precision_at_100
value: 1.482
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 16.206
- type: precision_at_5
value: 12.016
- type: recall_at_1
value: 25.072
- type: recall_at_10
value: 53.478
- type: recall_at_100
value: 76.07300000000001
- type: recall_at_1000
value: 93.884
- type: recall_at_3
value: 37.583
- type: recall_at_5
value: 44.464
- type: map_at_1
value: 20.712
- type: map_at_10
value: 27.467999999999996
- type: map_at_100
value: 28.502
- type: map_at_1000
value: 28.610000000000003
- type: map_at_3
value: 24.887999999999998
- type: map_at_5
value: 26.273999999999997
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 29.553
- type: mrr_at_100
value: 30.485
- type: mrr_at_1000
value: 30.56
- type: mrr_at_3
value: 27.078999999999997
- type: mrr_at_5
value: 28.401
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 32.023
- type: ndcg_at_100
value: 37.158
- type: ndcg_at_1000
value: 39.823
- type: ndcg_at_3
value: 26.951999999999998
- type: ndcg_at_5
value: 29.281000000000002
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.213
- type: precision_at_100
value: 0.832
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.244
- type: recall_at_1
value: 20.712
- type: recall_at_10
value: 44.057
- type: recall_at_100
value: 67.944
- type: recall_at_1000
value: 87.925
- type: recall_at_3
value: 30.305
- type: recall_at_5
value: 36.071999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.181999999999999
- type: map_at_10
value: 16.66
- type: map_at_100
value: 18.273
- type: map_at_1000
value: 18.45
- type: map_at_3
value: 14.141
- type: map_at_5
value: 15.455
- type: mrr_at_1
value: 22.15
- type: mrr_at_10
value: 32.062000000000005
- type: mrr_at_100
value: 33.116
- type: mrr_at_1000
value: 33.168
- type: mrr_at_3
value: 28.827
- type: mrr_at_5
value: 30.892999999999997
- type: ndcg_at_1
value: 22.15
- type: ndcg_at_10
value: 23.532
- type: ndcg_at_100
value: 30.358
- type: ndcg_at_1000
value: 33.783
- type: ndcg_at_3
value: 19.222
- type: ndcg_at_5
value: 20.919999999999998
- type: precision_at_1
value: 22.15
- type: precision_at_10
value: 7.185999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 13.941
- type: precision_at_5
value: 10.906
- type: recall_at_1
value: 10.181999999999999
- type: recall_at_10
value: 28.104000000000003
- type: recall_at_100
value: 51.998999999999995
- type: recall_at_1000
value: 71.311
- type: recall_at_3
value: 17.698
- type: recall_at_5
value: 22.262999999999998
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.669
- type: map_at_10
value: 15.552
- type: map_at_100
value: 21.865000000000002
- type: map_at_1000
value: 23.268
- type: map_at_3
value: 11.309
- type: map_at_5
value: 13.084000000000001
- type: mrr_at_1
value: 55.50000000000001
- type: mrr_at_10
value: 66.46600000000001
- type: mrr_at_100
value: 66.944
- type: mrr_at_1000
value: 66.956
- type: mrr_at_3
value: 64.542
- type: mrr_at_5
value: 65.717
- type: ndcg_at_1
value: 44.75
- type: ndcg_at_10
value: 35.049
- type: ndcg_at_100
value: 39.073
- type: ndcg_at_1000
value: 46.208
- type: ndcg_at_3
value: 39.525
- type: ndcg_at_5
value: 37.156
- type: precision_at_1
value: 55.50000000000001
- type: precision_at_10
value: 27.800000000000004
- type: precision_at_100
value: 9.013
- type: precision_at_1000
value: 1.8800000000000001
- type: precision_at_3
value: 42.667
- type: precision_at_5
value: 36.0
- type: recall_at_1
value: 6.669
- type: recall_at_10
value: 21.811
- type: recall_at_100
value: 45.112
- type: recall_at_1000
value: 67.806
- type: recall_at_3
value: 13.373
- type: recall_at_5
value: 16.615
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.769999999999996
- type: f1
value: 42.91448356376592
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.013
- type: map_at_10
value: 66.239
- type: map_at_100
value: 66.62599999999999
- type: map_at_1000
value: 66.644
- type: map_at_3
value: 63.965
- type: map_at_5
value: 65.45400000000001
- type: mrr_at_1
value: 58.221000000000004
- type: mrr_at_10
value: 70.43700000000001
- type: mrr_at_100
value: 70.744
- type: mrr_at_1000
value: 70.75099999999999
- type: mrr_at_3
value: 68.284
- type: mrr_at_5
value: 69.721
- type: ndcg_at_1
value: 58.221000000000004
- type: ndcg_at_10
value: 72.327
- type: ndcg_at_100
value: 73.953
- type: ndcg_at_1000
value: 74.312
- type: ndcg_at_3
value: 68.062
- type: ndcg_at_5
value: 70.56400000000001
- type: precision_at_1
value: 58.221000000000004
- type: precision_at_10
value: 9.521
- type: precision_at_100
value: 1.045
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 27.348
- type: precision_at_5
value: 17.794999999999998
- type: recall_at_1
value: 54.013
- type: recall_at_10
value: 86.957
- type: recall_at_100
value: 93.911
- type: recall_at_1000
value: 96.38
- type: recall_at_3
value: 75.555
- type: recall_at_5
value: 81.671
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.254
- type: map_at_10
value: 33.723
- type: map_at_100
value: 35.574
- type: map_at_1000
value: 35.730000000000004
- type: map_at_3
value: 29.473
- type: map_at_5
value: 31.543
- type: mrr_at_1
value: 41.358
- type: mrr_at_10
value: 49.498
- type: mrr_at_100
value: 50.275999999999996
- type: mrr_at_1000
value: 50.308
- type: mrr_at_3
value: 47.016000000000005
- type: mrr_at_5
value: 48.336
- type: ndcg_at_1
value: 41.358
- type: ndcg_at_10
value: 41.579
- type: ndcg_at_100
value: 48.455
- type: ndcg_at_1000
value: 51.165000000000006
- type: ndcg_at_3
value: 37.681
- type: ndcg_at_5
value: 38.49
- type: precision_at_1
value: 41.358
- type: precision_at_10
value: 11.543000000000001
- type: precision_at_100
value: 1.87
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.743000000000002
- type: precision_at_5
value: 17.994
- type: recall_at_1
value: 21.254
- type: recall_at_10
value: 48.698
- type: recall_at_100
value: 74.588
- type: recall_at_1000
value: 91.00200000000001
- type: recall_at_3
value: 33.939
- type: recall_at_5
value: 39.367000000000004
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.922
- type: map_at_10
value: 52.32599999999999
- type: map_at_100
value: 53.18000000000001
- type: map_at_1000
value: 53.245
- type: map_at_3
value: 49.294
- type: map_at_5
value: 51.202999999999996
- type: mrr_at_1
value: 71.843
- type: mrr_at_10
value: 78.24600000000001
- type: mrr_at_100
value: 78.515
- type: mrr_at_1000
value: 78.527
- type: mrr_at_3
value: 77.17500000000001
- type: mrr_at_5
value: 77.852
- type: ndcg_at_1
value: 71.843
- type: ndcg_at_10
value: 61.379
- type: ndcg_at_100
value: 64.535
- type: ndcg_at_1000
value: 65.888
- type: ndcg_at_3
value: 56.958
- type: ndcg_at_5
value: 59.434
- type: precision_at_1
value: 71.843
- type: precision_at_10
value: 12.686
- type: precision_at_100
value: 1.517
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 35.778
- type: precision_at_5
value: 23.422
- type: recall_at_1
value: 35.922
- type: recall_at_10
value: 63.43
- type: recall_at_100
value: 75.868
- type: recall_at_1000
value: 84.88900000000001
- type: recall_at_3
value: 53.666000000000004
- type: recall_at_5
value: 58.555
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 79.4408
- type: ap
value: 73.52820871620366
- type: f1
value: 79.36240238685001
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.826999999999998
- type: map_at_10
value: 34.04
- type: map_at_100
value: 35.226
- type: map_at_1000
value: 35.275
- type: map_at_3
value: 30.165999999999997
- type: map_at_5
value: 32.318000000000005
- type: mrr_at_1
value: 22.464000000000002
- type: mrr_at_10
value: 34.631
- type: mrr_at_100
value: 35.752
- type: mrr_at_1000
value: 35.795
- type: mrr_at_3
value: 30.798
- type: mrr_at_5
value: 32.946999999999996
- type: ndcg_at_1
value: 22.464000000000002
- type: ndcg_at_10
value: 40.919
- type: ndcg_at_100
value: 46.632
- type: ndcg_at_1000
value: 47.833
- type: ndcg_at_3
value: 32.992
- type: ndcg_at_5
value: 36.834
- type: precision_at_1
value: 22.464000000000002
- type: precision_at_10
value: 6.494
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.021
- type: precision_at_5
value: 10.347000000000001
- type: recall_at_1
value: 21.826999999999998
- type: recall_at_10
value: 62.132
- type: recall_at_100
value: 88.55199999999999
- type: recall_at_1000
value: 97.707
- type: recall_at_3
value: 40.541
- type: recall_at_5
value: 49.739
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.68399452804377
- type: f1
value: 95.25490609832268
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 83.15321477428182
- type: f1
value: 60.35476439087966
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.92669804976462
- type: f1
value: 69.22815107207565
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4855413584398
- type: f1
value: 72.92107516103387
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.412679360205544
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.09211869875204
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.540919056982545
- type: mrr
value: 31.529904607063536
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.745
- type: map_at_10
value: 12.013
- type: map_at_100
value: 15.040000000000001
- type: map_at_1000
value: 16.427
- type: map_at_3
value: 8.841000000000001
- type: map_at_5
value: 10.289
- type: mrr_at_1
value: 45.201
- type: mrr_at_10
value: 53.483999999999995
- type: mrr_at_100
value: 54.20700000000001
- type: mrr_at_1000
value: 54.252
- type: mrr_at_3
value: 51.29
- type: mrr_at_5
value: 52.73
- type: ndcg_at_1
value: 43.808
- type: ndcg_at_10
value: 32.445
- type: ndcg_at_100
value: 30.031000000000002
- type: ndcg_at_1000
value: 39.007
- type: ndcg_at_3
value: 37.204
- type: ndcg_at_5
value: 35.07
- type: precision_at_1
value: 45.201
- type: precision_at_10
value: 23.684
- type: precision_at_100
value: 7.600999999999999
- type: precision_at_1000
value: 2.043
- type: precision_at_3
value: 33.953
- type: precision_at_5
value: 29.412
- type: recall_at_1
value: 5.745
- type: recall_at_10
value: 16.168
- type: recall_at_100
value: 30.875999999999998
- type: recall_at_1000
value: 62.686
- type: recall_at_3
value: 9.75
- type: recall_at_5
value: 12.413
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.828
- type: map_at_10
value: 53.239000000000004
- type: map_at_100
value: 54.035999999999994
- type: map_at_1000
value: 54.067
- type: map_at_3
value: 49.289
- type: map_at_5
value: 51.784
- type: mrr_at_1
value: 42.497
- type: mrr_at_10
value: 55.916999999999994
- type: mrr_at_100
value: 56.495
- type: mrr_at_1000
value: 56.516999999999996
- type: mrr_at_3
value: 52.800000000000004
- type: mrr_at_5
value: 54.722
- type: ndcg_at_1
value: 42.468
- type: ndcg_at_10
value: 60.437
- type: ndcg_at_100
value: 63.731
- type: ndcg_at_1000
value: 64.41799999999999
- type: ndcg_at_3
value: 53.230999999999995
- type: ndcg_at_5
value: 57.26
- type: precision_at_1
value: 42.468
- type: precision_at_10
value: 9.47
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.724999999999998
- type: precision_at_5
value: 16.593
- type: recall_at_1
value: 37.828
- type: recall_at_10
value: 79.538
- type: recall_at_100
value: 93.646
- type: recall_at_1000
value: 98.72999999999999
- type: recall_at_3
value: 61.134
- type: recall_at_5
value: 70.377
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.548
- type: map_at_10
value: 84.466
- type: map_at_100
value: 85.10600000000001
- type: map_at_1000
value: 85.123
- type: map_at_3
value: 81.57600000000001
- type: map_at_5
value: 83.399
- type: mrr_at_1
value: 81.24
- type: mrr_at_10
value: 87.457
- type: mrr_at_100
value: 87.574
- type: mrr_at_1000
value: 87.575
- type: mrr_at_3
value: 86.507
- type: mrr_at_5
value: 87.205
- type: ndcg_at_1
value: 81.25
- type: ndcg_at_10
value: 88.203
- type: ndcg_at_100
value: 89.457
- type: ndcg_at_1000
value: 89.563
- type: ndcg_at_3
value: 85.465
- type: ndcg_at_5
value: 87.007
- type: precision_at_1
value: 81.25
- type: precision_at_10
value: 13.373
- type: precision_at_100
value: 1.5270000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.417
- type: precision_at_5
value: 24.556
- type: recall_at_1
value: 70.548
- type: recall_at_10
value: 95.208
- type: recall_at_100
value: 99.514
- type: recall_at_1000
value: 99.988
- type: recall_at_3
value: 87.214
- type: recall_at_5
value: 91.696
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 53.04822095496839
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.30778476474675
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.692
- type: map_at_10
value: 11.766
- type: map_at_100
value: 13.904
- type: map_at_1000
value: 14.216999999999999
- type: map_at_3
value: 8.245
- type: map_at_5
value: 9.92
- type: mrr_at_1
value: 23.0
- type: mrr_at_10
value: 33.78
- type: mrr_at_100
value: 34.922
- type: mrr_at_1000
value: 34.973
- type: mrr_at_3
value: 30.2
- type: mrr_at_5
value: 32.565
- type: ndcg_at_1
value: 23.0
- type: ndcg_at_10
value: 19.863
- type: ndcg_at_100
value: 28.141
- type: ndcg_at_1000
value: 33.549
- type: ndcg_at_3
value: 18.434
- type: ndcg_at_5
value: 16.384
- type: precision_at_1
value: 23.0
- type: precision_at_10
value: 10.39
- type: precision_at_100
value: 2.235
- type: precision_at_1000
value: 0.35300000000000004
- type: precision_at_3
value: 17.133000000000003
- type: precision_at_5
value: 14.44
- type: recall_at_1
value: 4.692
- type: recall_at_10
value: 21.025
- type: recall_at_100
value: 45.324999999999996
- type: recall_at_1000
value: 71.675
- type: recall_at_3
value: 10.440000000000001
- type: recall_at_5
value: 14.64
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.96178184892842
- type: cos_sim_spearman
value: 79.6487740813199
- type: euclidean_pearson
value: 82.06661161625023
- type: euclidean_spearman
value: 79.64876769031183
- type: manhattan_pearson
value: 82.07061164575131
- type: manhattan_spearman
value: 79.65197039464537
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.15305604100027
- type: cos_sim_spearman
value: 74.27447427941591
- type: euclidean_pearson
value: 80.52737337565307
- type: euclidean_spearman
value: 74.27416077132192
- type: manhattan_pearson
value: 80.53728571140387
- type: manhattan_spearman
value: 74.28853605753457
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.44386080639279
- type: cos_sim_spearman
value: 84.17947648159536
- type: euclidean_pearson
value: 83.34145388129387
- type: euclidean_spearman
value: 84.17947648159536
- type: manhattan_pearson
value: 83.30699061927966
- type: manhattan_spearman
value: 84.18125737380451
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.57392220985612
- type: cos_sim_spearman
value: 78.80745014464101
- type: euclidean_pearson
value: 80.01660371487199
- type: euclidean_spearman
value: 78.80741240102256
- type: manhattan_pearson
value: 79.96810779507953
- type: manhattan_spearman
value: 78.75600400119448
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.85421063026625
- type: cos_sim_spearman
value: 87.55320285299192
- type: euclidean_pearson
value: 86.69750143323517
- type: euclidean_spearman
value: 87.55320284326378
- type: manhattan_pearson
value: 86.63379169960379
- type: manhattan_spearman
value: 87.4815029877984
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.31314130411842
- type: cos_sim_spearman
value: 85.3489588181433
- type: euclidean_pearson
value: 84.13240933463535
- type: euclidean_spearman
value: 85.34902871403281
- type: manhattan_pearson
value: 84.01183086503559
- type: manhattan_spearman
value: 85.19316703166102
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.09979781689536
- type: cos_sim_spearman
value: 88.87813323759015
- type: euclidean_pearson
value: 88.65413031123792
- type: euclidean_spearman
value: 88.87813323759015
- type: manhattan_pearson
value: 88.61818758256024
- type: manhattan_spearman
value: 88.81044100494604
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30693258111531
- type: cos_sim_spearman
value: 62.195516523251946
- type: euclidean_pearson
value: 62.951283701049476
- type: euclidean_spearman
value: 62.195516523251946
- type: manhattan_pearson
value: 63.068322281439535
- type: manhattan_spearman
value: 62.10621171028406
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.27092833763909
- type: cos_sim_spearman
value: 84.84429717949759
- type: euclidean_pearson
value: 84.8516966060792
- type: euclidean_spearman
value: 84.84429717949759
- type: manhattan_pearson
value: 84.82203139242881
- type: manhattan_spearman
value: 84.8358503952945
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.10290863981409
- type: mrr
value: 95.31168450286097
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.161
- type: map_at_10
value: 62.138000000000005
- type: map_at_100
value: 62.769
- type: map_at_1000
value: 62.812
- type: map_at_3
value: 59.111000000000004
- type: map_at_5
value: 60.995999999999995
- type: mrr_at_1
value: 55.333
- type: mrr_at_10
value: 63.504000000000005
- type: mrr_at_100
value: 64.036
- type: mrr_at_1000
value: 64.08
- type: mrr_at_3
value: 61.278
- type: mrr_at_5
value: 62.778
- type: ndcg_at_1
value: 55.333
- type: ndcg_at_10
value: 66.678
- type: ndcg_at_100
value: 69.415
- type: ndcg_at_1000
value: 70.453
- type: ndcg_at_3
value: 61.755
- type: ndcg_at_5
value: 64.546
- type: precision_at_1
value: 55.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 24.221999999999998
- type: precision_at_5
value: 16.333000000000002
- type: recall_at_1
value: 52.161
- type: recall_at_10
value: 79.156
- type: recall_at_100
value: 91.333
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 66.43299999999999
- type: recall_at_5
value: 73.272
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81287128712871
- type: cos_sim_ap
value: 95.30034785910676
- type: cos_sim_f1
value: 90.28629856850716
- type: cos_sim_precision
value: 92.36401673640168
- type: cos_sim_recall
value: 88.3
- type: dot_accuracy
value: 99.81287128712871
- type: dot_ap
value: 95.30034785910676
- type: dot_f1
value: 90.28629856850716
- type: dot_precision
value: 92.36401673640168
- type: dot_recall
value: 88.3
- type: euclidean_accuracy
value: 99.81287128712871
- type: euclidean_ap
value: 95.30034785910676
- type: euclidean_f1
value: 90.28629856850716
- type: euclidean_precision
value: 92.36401673640168
- type: euclidean_recall
value: 88.3
- type: manhattan_accuracy
value: 99.80990099009901
- type: manhattan_ap
value: 95.26880751950654
- type: manhattan_f1
value: 90.22177419354838
- type: manhattan_precision
value: 90.95528455284553
- type: manhattan_recall
value: 89.5
- type: max_accuracy
value: 99.81287128712871
- type: max_ap
value: 95.30034785910676
- type: max_f1
value: 90.28629856850716
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.518662504351184
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.96168178378587
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.04862593471896
- type: mrr
value: 52.97238402936932
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.092545236479946
- type: cos_sim_spearman
value: 31.599851000175498
- type: dot_pearson
value: 30.092542723901676
- type: dot_spearman
value: 31.599851000175498
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.189
- type: map_at_10
value: 1.662
- type: map_at_100
value: 9.384
- type: map_at_1000
value: 22.669
- type: map_at_3
value: 0.5559999999999999
- type: map_at_5
value: 0.9039999999999999
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 81.01899999999999
- type: mrr_at_100
value: 81.01899999999999
- type: mrr_at_1000
value: 81.01899999999999
- type: mrr_at_3
value: 79.333
- type: mrr_at_5
value: 80.733
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 65.913
- type: ndcg_at_100
value: 51.895
- type: ndcg_at_1000
value: 46.967
- type: ndcg_at_3
value: 65.49199999999999
- type: ndcg_at_5
value: 66.69699999999999
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 71.6
- type: precision_at_100
value: 53.66
- type: precision_at_1000
value: 21.124000000000002
- type: precision_at_3
value: 72.667
- type: precision_at_5
value: 74.0
- type: recall_at_1
value: 0.189
- type: recall_at_10
value: 1.913
- type: recall_at_100
value: 12.601999999999999
- type: recall_at_1000
value: 44.296
- type: recall_at_3
value: 0.605
- type: recall_at_5
value: 1.018
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.701
- type: map_at_10
value: 10.445
- type: map_at_100
value: 17.324
- type: map_at_1000
value: 19.161
- type: map_at_3
value: 5.497
- type: map_at_5
value: 7.278
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 45.534
- type: mrr_at_100
value: 45.792
- type: mrr_at_1000
value: 45.806999999999995
- type: mrr_at_3
value: 37.755
- type: mrr_at_5
value: 43.469
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 26.235000000000003
- type: ndcg_at_100
value: 39.17
- type: ndcg_at_1000
value: 51.038
- type: ndcg_at_3
value: 23.625
- type: ndcg_at_5
value: 24.338
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 24.285999999999998
- type: precision_at_100
value: 8.224
- type: precision_at_1000
value: 1.6179999999999999
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 24.898
- type: recall_at_1
value: 2.701
- type: recall_at_10
value: 17.997
- type: recall_at_100
value: 51.766999999999996
- type: recall_at_1000
value: 87.863
- type: recall_at_3
value: 6.295000000000001
- type: recall_at_5
value: 9.993
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 73.3474
- type: ap
value: 15.393431414459924
- type: f1
value: 56.466681887882416
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.062818336163
- type: f1
value: 62.11230840463252
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 42.464892820845115
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.15962329379508
- type: cos_sim_ap
value: 74.73674057919256
- type: cos_sim_f1
value: 68.81245642574947
- type: cos_sim_precision
value: 61.48255813953488
- type: cos_sim_recall
value: 78.12664907651715
- type: dot_accuracy
value: 86.15962329379508
- type: dot_ap
value: 74.7367634988281
- type: dot_f1
value: 68.81245642574947
- type: dot_precision
value: 61.48255813953488
- type: dot_recall
value: 78.12664907651715
- type: euclidean_accuracy
value: 86.15962329379508
- type: euclidean_ap
value: 74.7367761466634
- type: euclidean_f1
value: 68.81245642574947
- type: euclidean_precision
value: 61.48255813953488
- type: euclidean_recall
value: 78.12664907651715
- type: manhattan_accuracy
value: 86.21326816474935
- type: manhattan_ap
value: 74.64416473733951
- type: manhattan_f1
value: 68.80924855491331
- type: manhattan_precision
value: 61.23456790123457
- type: manhattan_recall
value: 78.52242744063325
- type: max_accuracy
value: 86.21326816474935
- type: max_ap
value: 74.7367761466634
- type: max_f1
value: 68.81245642574947
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.97620988085536
- type: cos_sim_ap
value: 86.08680845745758
- type: cos_sim_f1
value: 78.02793637114438
- type: cos_sim_precision
value: 73.11082699683736
- type: cos_sim_recall
value: 83.65414228518632
- type: dot_accuracy
value: 88.97620988085536
- type: dot_ap
value: 86.08681149437946
- type: dot_f1
value: 78.02793637114438
- type: dot_precision
value: 73.11082699683736
- type: dot_recall
value: 83.65414228518632
- type: euclidean_accuracy
value: 88.97620988085536
- type: euclidean_ap
value: 86.08681215460771
- type: euclidean_f1
value: 78.02793637114438
- type: euclidean_precision
value: 73.11082699683736
- type: euclidean_recall
value: 83.65414228518632
- type: manhattan_accuracy
value: 88.88888888888889
- type: manhattan_ap
value: 86.02916327562438
- type: manhattan_f1
value: 78.02063045516843
- type: manhattan_precision
value: 73.38851947346994
- type: manhattan_recall
value: 83.2768709578072
- type: max_accuracy
value: 88.97620988085536
- type: max_ap
value: 86.08681215460771
- type: max_f1
value: 78.02793637114438
---
<!-- TODO: add evaluation results here -->
<br><br>
<p align="center">
<img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>, <a href="https://github.com/jina-ai/finetuner"><b>Finetuner</b></a> team.</b>
</p>
## Intended Usage & Model Info
`jina-embeddings-v2-base-en` is an English, monolingual **embedding model** supporting an **8192-token sequence length**.
It is based on a BERT architecture (JinaBert) that uses the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence lengths.
The backbone `jina-bert-v2-base-en` is pretrained on the C4 dataset.
The model is further trained on Jina AI's collection of more than 400 million sentence pairs and hard negatives.
These pairs were obtained from various domains and were carefully selected through a thorough cleaning process.
The embedding model was trained with a 512-token sequence length, but extrapolates to 8k (or even longer) thanks to ALiBi.
This makes the model useful for use cases that involve processing long documents, including long document retrieval, semantic textual similarity, text reranking, recommendation, RAG, and LLM-based generative search.
With a standard size of 137 million parameters, the model enables fast inference while delivering better performance than our small model. We recommend using a single GPU for inference.
Additionally, we provide the following embedding models:
**V1 (Based on T5, 512 Seq)**
- [`jina-embeddings-v1-small-en`](https://huggingface.co/jinaai/jina-embedding-s-en-v1): 35 million parameters.
- [`jina-embeddings-v1-base-en`](https://huggingface.co/jinaai/jina-embedding-b-en-v1): 110 million parameters.
- [`jina-embeddings-v1-large-en`](https://huggingface.co/jinaai/jina-embedding-l-en-v1): 330 million parameters.
**V2 (Based on JinaBert, 8k Seq)**
- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters **(you are here)**.
- [`jina-embeddings-v2-large-en`](): 435 million parameters (releasing soon).
## Data & Parameters
Jina Embeddings V2 [technical report](https://arxiv.org/abs/2310.19923)
## Usage
You can use Jina Embedding models directly from the `transformers` package:
```python
# pip install transformers
from transformers import AutoModel
from numpy.linalg import norm

# cosine similarity between two embedding vectors
cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True) # trust_remote_code is needed to use the encode method
embeddings = model.encode(['How is the weather today?', 'What is the current weather like today?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
If you only need to handle shorter sequences, such as 2k tokens, pass the `max_length` parameter to the `encode` function:
```python
embeddings = model.encode(
['Very long ... document'],
max_length=2048
)
```
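Since the card highlights long-document retrieval as a use case, here is a minimal retrieval-style sketch that ranks a handful of documents against a query by cosine similarity. The query and document strings are illustrative placeholders, not part of the official examples:
```python
from numpy.linalg import norm
from transformers import AutoModel

model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))

# Illustrative query and documents (placeholders, not from the official examples).
query = 'How can the model handle documents longer than 512 tokens?'
documents = [
    'ALiBi allows the model to extrapolate beyond its training sequence length.',
    'The weather today is sunny with a light breeze.',
    'Passing max_length to encode() truncates overly long inputs.',
]

query_emb = model.encode([query])[0]
doc_embs = model.encode(documents, max_length=2048)

# Rank documents by cosine similarity to the query.
ranked = sorted(zip(documents, (cos_sim(query_emb, d) for d in doc_embs)),
                key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(f'{score:.3f}  {doc}')
```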
*Alternatively, you can use Jina AI's [Embedding platform](https://jina.ai/embeddings/) for fully-managed access to Jina Embeddings models*.
## Fine-tuning
Please consider [Finetuner](https://github.com/jina-ai/finetuner).
## Plans
The development of new bilingual models is currently underway; we will mainly target German and Spanish.
The upcoming models will be called `jina-embeddings-v2-base-de/es`.
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```
@misc{günther2023jina,
title={Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents},
author={Michael Günther and Jackmin Ong and Isabelle Mohr and Alaeddine Abdessalem and Tanguy Abel and Mohammad Kalim Akram and Susana Guzman and Georgios Mastrapas and Saba Sturua and Bo Wang and Maximilian Werk and Nan Wang and Han Xiao},
year={2023},
eprint={2310.19923},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
``` latex
@misc{günther2023jina,
title={Beyond the 512-Token Barrier: Training General-Purpose Text
Embeddings for Large Documents},
author={Michael Günther and Jackmin Ong and Isabelle Mohr and Alaeddine Abdessalem and Tanguy Abel and Mohammad Kalim Akram and Susana Guzman and Georgios Mastrapas and Saba Sturua and Bo Wang},
year={2023},
eprint={2307.11224},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{günther2023jina,
title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models},
author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao},
year={2023},
eprint={2307.11224},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
--> | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
p-christ/ModernBERT-large-nli | p-christ | zero-shot-classification | [
"transformers",
"safetensors",
"modernbert",
"text-classification",
"instruct",
"natural-language-inference",
"nli",
"zero-shot-classification",
"en",
"dataset:nyu-mll/glue",
"dataset:facebook/anli",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-24T12:25:22 | 2025-01-24T12:25:23 | 84 | 0 | ---
base_model:
- answerdotai/ModernBERT-large
datasets:
- nyu-mll/glue
- facebook/anli
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: zero-shot-classification
tags:
- instruct
- natural-language-inference
- nli
---
# Model Card for ModernBERT-large-nli
This model is ModernBERT multi-task fine-tuned on tasksource NLI tasks, including MNLI, ANLI, SICK, WANLI, doc-nli, LingNLI, FOLIO, FOL-NLI, LogicNLI, Label-NLI, and all datasets in the table below.
This is the equivalent of an "instruct" version.
The model was trained for 200k steps on an Nvidia A30 GPU.
It performs very well on reasoning tasks (better than Llama 3.1 8B Instruct on ANLI and FOLIO), long-context reasoning, sentiment analysis, and zero-shot classification with new labels.
The following table shows model test accuracy. These are the scores for the same single transformer with different classification heads on top.
Further gains can be obtained by fine-tuning on a single task, e.g. SST, but this checkpoint is already strong for zero-shot classification and natural language inference (contradiction/entailment/neutral classification).
| test_name | test_accuracy |
|:--------------------------------------|----------------:|
| glue/mnli | 0.89 |
| glue/qnli | 0.96 |
| glue/rte | 0.91 |
| glue/wnli | 0.64 |
| glue/mrpc | 0.81 |
| glue/qqp | 0.87 |
| glue/cola | 0.87 |
| glue/sst2 | 0.96 |
| super_glue/boolq | 0.66 |
| super_glue/cb | 0.86 |
| super_glue/multirc | 0.9 |
| super_glue/wic | 0.71 |
| super_glue/axg | 1 |
| anli/a1 | 0.72 |
| anli/a2 | 0.54 |
| anli/a3 | 0.55 |
| sick/label | 0.91 |
| sick/entailment_AB | 0.93 |
| snli | 0.94 |
| scitail/snli_format | 0.95 |
| hans | 1 |
| WANLI | 0.77 |
| recast/recast_ner | 0.85 |
| recast/recast_sentiment | 0.97 |
| recast/recast_verbnet | 0.89 |
| recast/recast_megaveridicality | 0.87 |
| recast/recast_verbcorner | 0.87 |
| recast/recast_kg_relations | 0.9 |
| recast/recast_factuality | 0.95 |
| recast/recast_puns | 0.98 |
| probability_words_nli/reasoning_1hop | 1 |
| probability_words_nli/usnli | 0.79 |
| probability_words_nli/reasoning_2hop | 0.98 |
| nan-nli | 0.85 |
| nli_fever | 0.78 |
| breaking_nli | 0.99 |
| conj_nli | 0.72 |
| fracas | 0.79 |
| dialogue_nli | 0.94 |
| mpe | 0.75 |
| dnc | 0.91 |
| recast_white/fnplus | 0.76 |
| recast_white/sprl | 0.9 |
| recast_white/dpr | 0.84 |
| add_one_rte | 0.94 |
| paws/labeled_final | 0.96 |
| pragmeval/pdtb | 0.56 |
| lex_glue/scotus | 0.58 |
| lex_glue/ledgar | 0.85 |
| dynasent/dynabench.dynasent.r1.all/r1 | 0.83 |
| dynasent/dynabench.dynasent.r2.all/r2 | 0.76 |
| cycic_classification | 0.96 |
| lingnli | 0.91 |
| monotonicity-entailment | 0.97 |
| scinli | 0.88 |
| naturallogic | 0.93 |
| dynahate | 0.86 |
| syntactic-augmentation-nli | 0.94 |
| autotnli | 0.92 |
| defeasible-nli/atomic | 0.83 |
| defeasible-nli/snli | 0.8 |
| help-nli | 0.96 |
| nli-veridicality-transitivity | 0.99 |
| lonli | 0.99 |
| dadc-limit-nli | 0.79 |
| folio | 0.71 |
| tomi-nli | 0.54 |
| puzzte | 0.59 |
| temporal-nli | 0.93 |
| counterfactually-augmented-snli | 0.81 |
| cnli | 0.9 |
| boolq-natural-perturbations | 0.72 |
| equate | 0.65 |
| logiqa-2.0-nli | 0.58 |
| mindgames | 0.96 |
| ConTRoL-nli | 0.66 |
| logical-fallacy | 0.38 |
| cladder | 0.89 |
| conceptrules_v2 | 1 |
| zero-shot-label-nli | 0.79 |
| scone | 1 |
| monli | 1 |
| SpaceNLI | 1 |
| propsegment/nli | 0.92 |
| FLD.v2/default | 0.91 |
| FLD.v2/star | 0.78 |
| SDOH-NLI | 0.99 |
| scifact_entailment | 0.87 |
| feasibilityQA | 0.79 |
| AdjectiveScaleProbe-nli | 1 |
| resnli | 1 |
| semantic_fragments_nli | 1 |
| dataset_train_nli | 0.95 |
| nlgraph | 0.97 |
| ruletaker | 0.99 |
| PARARULE-Plus | 1 |
| logical-entailment | 0.93 |
| nope | 0.56 |
| LogicNLI | 0.91 |
| contract-nli/contractnli_a/seg | 0.88 |
| contract-nli/contractnli_b/full | 0.84 |
| nli4ct_semeval2024 | 0.72 |
| biosift-nli | 0.92 |
| SIGA-nli | 0.57 |
| FOL-nli | 0.79 |
| doc-nli | 0.81 |
| mctest-nli | 0.92 |
| natural-language-satisfiability | 0.92 |
| idioms-nli | 0.83 |
| lifecycle-entailment | 0.79 |
| MSciNLI | 0.84 |
| hover-3way/nli | 0.92 |
| seahorse_summarization_evaluation | 0.81 |
| missing-item-prediction/contrastive | 0.88 |
| Pol_NLI | 0.93 |
| synthetic-retrieval-NLI/count | 0.72 |
| synthetic-retrieval-NLI/position | 0.9 |
| synthetic-retrieval-NLI/binary | 0.92 |
| babi_nli | 0.98 |
# Usage
## [ZS] Zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",model="tasksource/ModernBERT-large-nli")
text = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(text, candidate_labels)
```
The NLI training data of this model includes [label-nli](https://huggingface.co/datasets/tasksource/zero-shot-label-nli), an NLI dataset specifically constructed to improve this kind of zero-shot classification.
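If your candidate labels are not mutually exclusive, the same pipeline can score each label independently via `multi_label=True`. A small sketch with illustrative text and labels (not from the original card):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="tasksource/ModernBERT-large-nli")

# With multi_label=True each candidate label is scored independently,
# so several labels can receive a high score for the same text.
text = "The team shipped the new search feature and fixed two login bugs."
candidate_labels = ["software development", "cooking", "bug fixing"]
print(classifier(text, candidate_labels, multi_label=True))
```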
## [NLI] Natural language inference pipeline
```python
from transformers import pipeline
pipe = pipeline("text-classification",model="tasksource/ModernBERT-large-nli")
pipe([dict(text='there is a cat',
text_pair='there is a black cat')]) #list of (premise,hypothesis)
```
## Backbone for further fine-tuning
This checkpoint has stronger reasoning and fine-grained abilities than the base version and can be used for further fine-tuning.
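A minimal sketch of loading the checkpoint for further fine-tuning with `transformers` follows; the premise/hypothesis pair, label mapping, and training loop are placeholders, not the author's recipe:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "tasksource/ModernBERT-large-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Placeholder premise/hypothesis pair and label; replace with your own dataset.
inputs = tokenizer("There is a cat on the mat.", "An animal is on the mat.",
                   return_tensors="pt", truncation=True)
# Label name "entailment" is an assumption about the checkpoint's label2id mapping.
label = torch.tensor([model.config.label2id.get("entailment", 0)])

model.train()
outputs = model(**inputs, labels=label)
outputs.loss.backward()  # plug into your optimizer / Trainer loop from here
```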
# Citation
```
@inproceedings{sileo-2024-tasksource,
title = "tasksource: A Large Collection of {NLP} tasks with a Structured Dataset Preprocessing Framework",
author = "Sileo, Damien",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.1361",
pages = "15655--15684",
}
``` | [
"SUMMARIZATION"
] | [
"SCIFACT",
"SCITAIL"
] |
Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit | Muennighoff | sentence-similarity | [
"sentence-transformers",
"pytorch",
"gptj",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2023-03-27T22:26:36 | 83 | 23 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-5.8B-weightedmean-msmarco-specb-bitfit
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 69.22388059701493
- type: ap
value: 32.04724673950256
- type: f1
value: 63.25719825770428
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 71.26109999999998
- type: ap
value: 66.16336378255403
- type: f1
value: 70.89719145825303
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 39.19199999999999
- type: f1
value: 38.580766731113826
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 27.311999999999998
- type: map_at_10
value: 42.620000000000005
- type: map_at_100
value: 43.707
- type: map_at_1000
value: 43.714999999999996
- type: map_at_3
value: 37.624
- type: map_at_5
value: 40.498
- type: mrr_at_1
value: 27.667
- type: mrr_at_10
value: 42.737
- type: mrr_at_100
value: 43.823
- type: mrr_at_1000
value: 43.830999999999996
- type: mrr_at_3
value: 37.743
- type: mrr_at_5
value: 40.616
- type: ndcg_at_1
value: 27.311999999999998
- type: ndcg_at_10
value: 51.37500000000001
- type: ndcg_at_100
value: 55.778000000000006
- type: ndcg_at_1000
value: 55.96600000000001
- type: ndcg_at_3
value: 41.087
- type: ndcg_at_5
value: 46.269
- type: precision_at_1
value: 27.311999999999998
- type: precision_at_10
value: 7.945
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 17.046
- type: precision_at_5
value: 12.745000000000001
- type: recall_at_1
value: 27.311999999999998
- type: recall_at_10
value: 79.445
- type: recall_at_100
value: 98.151
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 51.13799999999999
- type: recall_at_5
value: 63.727000000000004
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 45.59037428592033
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 38.86371701986363
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 61.625568691427766
- type: mrr
value: 75.83256386580486
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 89.96074355094802
- type: cos_sim_spearman
value: 86.2501580394454
- type: euclidean_pearson
value: 82.18427440380462
- type: euclidean_spearman
value: 80.14760935017947
- type: manhattan_pearson
value: 82.24621578156392
- type: manhattan_spearman
value: 80.00363016590163
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 84.49350649350649
- type: f1
value: 84.4249343233736
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 36.551459722989385
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 33.69901851846774
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 30.499
- type: map_at_10
value: 41.208
- type: map_at_100
value: 42.638
- type: map_at_1000
value: 42.754
- type: map_at_3
value: 37.506
- type: map_at_5
value: 39.422000000000004
- type: mrr_at_1
value: 37.339
- type: mrr_at_10
value: 47.051
- type: mrr_at_100
value: 47.745
- type: mrr_at_1000
value: 47.786
- type: mrr_at_3
value: 44.086999999999996
- type: mrr_at_5
value: 45.711
- type: ndcg_at_1
value: 37.339
- type: ndcg_at_10
value: 47.666
- type: ndcg_at_100
value: 52.994
- type: ndcg_at_1000
value: 54.928999999999995
- type: ndcg_at_3
value: 41.982
- type: ndcg_at_5
value: 44.42
- type: precision_at_1
value: 37.339
- type: precision_at_10
value: 9.127
- type: precision_at_100
value: 1.4749999999999999
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 20.076
- type: precision_at_5
value: 14.449000000000002
- type: recall_at_1
value: 30.499
- type: recall_at_10
value: 60.328
- type: recall_at_100
value: 82.57900000000001
- type: recall_at_1000
value: 95.074
- type: recall_at_3
value: 44.17
- type: recall_at_5
value: 50.94
- type: map_at_1
value: 30.613
- type: map_at_10
value: 40.781
- type: map_at_100
value: 42.018
- type: map_at_1000
value: 42.132999999999996
- type: map_at_3
value: 37.816
- type: map_at_5
value: 39.389
- type: mrr_at_1
value: 38.408
- type: mrr_at_10
value: 46.631
- type: mrr_at_100
value: 47.332
- type: mrr_at_1000
value: 47.368
- type: mrr_at_3
value: 44.384
- type: mrr_at_5
value: 45.661
- type: ndcg_at_1
value: 38.408
- type: ndcg_at_10
value: 46.379999999999995
- type: ndcg_at_100
value: 50.81
- type: ndcg_at_1000
value: 52.663000000000004
- type: ndcg_at_3
value: 42.18
- type: ndcg_at_5
value: 43.974000000000004
- type: precision_at_1
value: 38.408
- type: precision_at_10
value: 8.656
- type: precision_at_100
value: 1.3860000000000001
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 20.276
- type: precision_at_5
value: 14.241999999999999
- type: recall_at_1
value: 30.613
- type: recall_at_10
value: 56.44
- type: recall_at_100
value: 75.044
- type: recall_at_1000
value: 86.426
- type: recall_at_3
value: 43.766
- type: recall_at_5
value: 48.998000000000005
- type: map_at_1
value: 37.370999999999995
- type: map_at_10
value: 49.718
- type: map_at_100
value: 50.737
- type: map_at_1000
value: 50.79
- type: map_at_3
value: 46.231
- type: map_at_5
value: 48.329
- type: mrr_at_1
value: 42.884
- type: mrr_at_10
value: 53.176
- type: mrr_at_100
value: 53.81700000000001
- type: mrr_at_1000
value: 53.845
- type: mrr_at_3
value: 50.199000000000005
- type: mrr_at_5
value: 52.129999999999995
- type: ndcg_at_1
value: 42.884
- type: ndcg_at_10
value: 55.826
- type: ndcg_at_100
value: 59.93000000000001
- type: ndcg_at_1000
value: 61.013
- type: ndcg_at_3
value: 49.764
- type: ndcg_at_5
value: 53.025999999999996
- type: precision_at_1
value: 42.884
- type: precision_at_10
value: 9.046999999999999
- type: precision_at_100
value: 1.212
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.131999999999998
- type: precision_at_5
value: 15.524
- type: recall_at_1
value: 37.370999999999995
- type: recall_at_10
value: 70.482
- type: recall_at_100
value: 88.425
- type: recall_at_1000
value: 96.03399999999999
- type: recall_at_3
value: 54.43
- type: recall_at_5
value: 62.327999999999996
- type: map_at_1
value: 22.875999999999998
- type: map_at_10
value: 31.715
- type: map_at_100
value: 32.847
- type: map_at_1000
value: 32.922000000000004
- type: map_at_3
value: 29.049999999999997
- type: map_at_5
value: 30.396
- type: mrr_at_1
value: 24.52
- type: mrr_at_10
value: 33.497
- type: mrr_at_100
value: 34.455000000000005
- type: mrr_at_1000
value: 34.510000000000005
- type: mrr_at_3
value: 30.791
- type: mrr_at_5
value: 32.175
- type: ndcg_at_1
value: 24.52
- type: ndcg_at_10
value: 36.95
- type: ndcg_at_100
value: 42.238
- type: ndcg_at_1000
value: 44.147999999999996
- type: ndcg_at_3
value: 31.435000000000002
- type: ndcg_at_5
value: 33.839000000000006
- type: precision_at_1
value: 24.52
- type: precision_at_10
value: 5.9319999999999995
- type: precision_at_100
value: 0.901
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 13.446
- type: precision_at_5
value: 9.469
- type: recall_at_1
value: 22.875999999999998
- type: recall_at_10
value: 51.38
- type: recall_at_100
value: 75.31099999999999
- type: recall_at_1000
value: 89.718
- type: recall_at_3
value: 36.26
- type: recall_at_5
value: 42.248999999999995
- type: map_at_1
value: 14.984
- type: map_at_10
value: 23.457
- type: map_at_100
value: 24.723
- type: map_at_1000
value: 24.846
- type: map_at_3
value: 20.873
- type: map_at_5
value: 22.357
- type: mrr_at_1
value: 18.159
- type: mrr_at_10
value: 27.431
- type: mrr_at_100
value: 28.449
- type: mrr_at_1000
value: 28.52
- type: mrr_at_3
value: 24.979000000000003
- type: mrr_at_5
value: 26.447
- type: ndcg_at_1
value: 18.159
- type: ndcg_at_10
value: 28.627999999999997
- type: ndcg_at_100
value: 34.741
- type: ndcg_at_1000
value: 37.516
- type: ndcg_at_3
value: 23.902
- type: ndcg_at_5
value: 26.294
- type: precision_at_1
value: 18.159
- type: precision_at_10
value: 5.485
- type: precision_at_100
value: 0.985
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 11.774
- type: precision_at_5
value: 8.731
- type: recall_at_1
value: 14.984
- type: recall_at_10
value: 40.198
- type: recall_at_100
value: 67.11500000000001
- type: recall_at_1000
value: 86.497
- type: recall_at_3
value: 27.639000000000003
- type: recall_at_5
value: 33.595000000000006
- type: map_at_1
value: 29.067
- type: map_at_10
value: 39.457
- type: map_at_100
value: 40.83
- type: map_at_1000
value: 40.94
- type: map_at_3
value: 35.995
- type: map_at_5
value: 38.159
- type: mrr_at_1
value: 34.937000000000005
- type: mrr_at_10
value: 44.755
- type: mrr_at_100
value: 45.549
- type: mrr_at_1000
value: 45.589
- type: mrr_at_3
value: 41.947
- type: mrr_at_5
value: 43.733
- type: ndcg_at_1
value: 34.937000000000005
- type: ndcg_at_10
value: 45.573
- type: ndcg_at_100
value: 51.266999999999996
- type: ndcg_at_1000
value: 53.184
- type: ndcg_at_3
value: 39.961999999999996
- type: ndcg_at_5
value: 43.02
- type: precision_at_1
value: 34.937000000000005
- type: precision_at_10
value: 8.296000000000001
- type: precision_at_100
value: 1.32
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 18.8
- type: precision_at_5
value: 13.763
- type: recall_at_1
value: 29.067
- type: recall_at_10
value: 58.298
- type: recall_at_100
value: 82.25099999999999
- type: recall_at_1000
value: 94.476
- type: recall_at_3
value: 42.984
- type: recall_at_5
value: 50.658
- type: map_at_1
value: 25.985999999999997
- type: map_at_10
value: 35.746
- type: map_at_100
value: 37.067
- type: map_at_1000
value: 37.191
- type: map_at_3
value: 32.599000000000004
- type: map_at_5
value: 34.239000000000004
- type: mrr_at_1
value: 31.735000000000003
- type: mrr_at_10
value: 40.515
- type: mrr_at_100
value: 41.459
- type: mrr_at_1000
value: 41.516
- type: mrr_at_3
value: 37.938
- type: mrr_at_5
value: 39.25
- type: ndcg_at_1
value: 31.735000000000003
- type: ndcg_at_10
value: 41.484
- type: ndcg_at_100
value: 47.047
- type: ndcg_at_1000
value: 49.427
- type: ndcg_at_3
value: 36.254999999999995
- type: ndcg_at_5
value: 38.375
- type: precision_at_1
value: 31.735000000000003
- type: precision_at_10
value: 7.66
- type: precision_at_100
value: 1.234
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 17.427999999999997
- type: precision_at_5
value: 12.328999999999999
- type: recall_at_1
value: 25.985999999999997
- type: recall_at_10
value: 53.761
- type: recall_at_100
value: 77.149
- type: recall_at_1000
value: 93.342
- type: recall_at_3
value: 39.068000000000005
- type: recall_at_5
value: 44.693
- type: map_at_1
value: 24.949749999999998
- type: map_at_10
value: 34.04991666666667
- type: map_at_100
value: 35.26825
- type: map_at_1000
value: 35.38316666666667
- type: map_at_3
value: 31.181333333333335
- type: map_at_5
value: 32.77391666666667
- type: mrr_at_1
value: 29.402833333333334
- type: mrr_at_10
value: 38.01633333333333
- type: mrr_at_100
value: 38.88033333333334
- type: mrr_at_1000
value: 38.938500000000005
- type: mrr_at_3
value: 35.5175
- type: mrr_at_5
value: 36.93808333333333
- type: ndcg_at_1
value: 29.402833333333334
- type: ndcg_at_10
value: 39.403166666666664
- type: ndcg_at_100
value: 44.66408333333333
- type: ndcg_at_1000
value: 46.96283333333333
- type: ndcg_at_3
value: 34.46633333333334
- type: ndcg_at_5
value: 36.78441666666667
- type: precision_at_1
value: 29.402833333333334
- type: precision_at_10
value: 6.965833333333333
- type: precision_at_100
value: 1.1330833333333334
- type: precision_at_1000
value: 0.15158333333333335
- type: precision_at_3
value: 15.886666666666665
- type: precision_at_5
value: 11.360416666666667
- type: recall_at_1
value: 24.949749999999998
- type: recall_at_10
value: 51.29325
- type: recall_at_100
value: 74.3695
- type: recall_at_1000
value: 90.31299999999999
- type: recall_at_3
value: 37.580083333333334
- type: recall_at_5
value: 43.529666666666664
- type: map_at_1
value: 22.081999999999997
- type: map_at_10
value: 29.215999999999998
- type: map_at_100
value: 30.163
- type: map_at_1000
value: 30.269000000000002
- type: map_at_3
value: 26.942
- type: map_at_5
value: 28.236
- type: mrr_at_1
value: 24.847
- type: mrr_at_10
value: 31.918999999999997
- type: mrr_at_100
value: 32.817
- type: mrr_at_1000
value: 32.897
- type: mrr_at_3
value: 29.831000000000003
- type: mrr_at_5
value: 31.019999999999996
- type: ndcg_at_1
value: 24.847
- type: ndcg_at_10
value: 33.4
- type: ndcg_at_100
value: 38.354
- type: ndcg_at_1000
value: 41.045
- type: ndcg_at_3
value: 29.236
- type: ndcg_at_5
value: 31.258000000000003
- type: precision_at_1
value: 24.847
- type: precision_at_10
value: 5.353
- type: precision_at_100
value: 0.853
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 12.679000000000002
- type: precision_at_5
value: 8.988
- type: recall_at_1
value: 22.081999999999997
- type: recall_at_10
value: 43.505
- type: recall_at_100
value: 66.45400000000001
- type: recall_at_1000
value: 86.378
- type: recall_at_3
value: 32.163000000000004
- type: recall_at_5
value: 37.059999999999995
- type: map_at_1
value: 15.540000000000001
- type: map_at_10
value: 22.362000000000002
- type: map_at_100
value: 23.435
- type: map_at_1000
value: 23.564
- type: map_at_3
value: 20.143
- type: map_at_5
value: 21.324
- type: mrr_at_1
value: 18.892
- type: mrr_at_10
value: 25.942999999999998
- type: mrr_at_100
value: 26.883000000000003
- type: mrr_at_1000
value: 26.968999999999998
- type: mrr_at_3
value: 23.727
- type: mrr_at_5
value: 24.923000000000002
- type: ndcg_at_1
value: 18.892
- type: ndcg_at_10
value: 26.811
- type: ndcg_at_100
value: 32.066
- type: ndcg_at_1000
value: 35.166
- type: ndcg_at_3
value: 22.706
- type: ndcg_at_5
value: 24.508
- type: precision_at_1
value: 18.892
- type: precision_at_10
value: 4.942
- type: precision_at_100
value: 0.878
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 10.748000000000001
- type: precision_at_5
value: 7.784000000000001
- type: recall_at_1
value: 15.540000000000001
- type: recall_at_10
value: 36.742999999999995
- type: recall_at_100
value: 60.525
- type: recall_at_1000
value: 82.57600000000001
- type: recall_at_3
value: 25.252000000000002
- type: recall_at_5
value: 29.872
- type: map_at_1
value: 24.453
- type: map_at_10
value: 33.363
- type: map_at_100
value: 34.579
- type: map_at_1000
value: 34.686
- type: map_at_3
value: 30.583
- type: map_at_5
value: 32.118
- type: mrr_at_1
value: 28.918
- type: mrr_at_10
value: 37.675
- type: mrr_at_100
value: 38.567
- type: mrr_at_1000
value: 38.632
- type: mrr_at_3
value: 35.260999999999996
- type: mrr_at_5
value: 36.576
- type: ndcg_at_1
value: 28.918
- type: ndcg_at_10
value: 38.736
- type: ndcg_at_100
value: 44.261
- type: ndcg_at_1000
value: 46.72
- type: ndcg_at_3
value: 33.81
- type: ndcg_at_5
value: 36.009
- type: precision_at_1
value: 28.918
- type: precision_at_10
value: 6.586
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 15.360999999999999
- type: precision_at_5
value: 10.857999999999999
- type: recall_at_1
value: 24.453
- type: recall_at_10
value: 50.885999999999996
- type: recall_at_100
value: 75.03
- type: recall_at_1000
value: 92.123
- type: recall_at_3
value: 37.138
- type: recall_at_5
value: 42.864999999999995
- type: map_at_1
value: 24.57
- type: map_at_10
value: 33.672000000000004
- type: map_at_100
value: 35.244
- type: map_at_1000
value: 35.467
- type: map_at_3
value: 30.712
- type: map_at_5
value: 32.383
- type: mrr_at_1
value: 29.644
- type: mrr_at_10
value: 38.344
- type: mrr_at_100
value: 39.219
- type: mrr_at_1000
value: 39.282000000000004
- type: mrr_at_3
value: 35.771
- type: mrr_at_5
value: 37.273
- type: ndcg_at_1
value: 29.644
- type: ndcg_at_10
value: 39.567
- type: ndcg_at_100
value: 45.097
- type: ndcg_at_1000
value: 47.923
- type: ndcg_at_3
value: 34.768
- type: ndcg_at_5
value: 37.122
- type: precision_at_1
value: 29.644
- type: precision_at_10
value: 7.5889999999999995
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 16.337
- type: precision_at_5
value: 12.055
- type: recall_at_1
value: 24.57
- type: recall_at_10
value: 51.00900000000001
- type: recall_at_100
value: 75.423
- type: recall_at_1000
value: 93.671
- type: recall_at_3
value: 36.925999999999995
- type: recall_at_5
value: 43.245
- type: map_at_1
value: 21.356
- type: map_at_10
value: 27.904
- type: map_at_100
value: 28.938000000000002
- type: map_at_1000
value: 29.036
- type: map_at_3
value: 25.726
- type: map_at_5
value: 26.935
- type: mrr_at_1
value: 22.551
- type: mrr_at_10
value: 29.259
- type: mrr_at_100
value: 30.272
- type: mrr_at_1000
value: 30.348000000000003
- type: mrr_at_3
value: 27.295
- type: mrr_at_5
value: 28.358
- type: ndcg_at_1
value: 22.551
- type: ndcg_at_10
value: 31.817
- type: ndcg_at_100
value: 37.164
- type: ndcg_at_1000
value: 39.82
- type: ndcg_at_3
value: 27.595999999999997
- type: ndcg_at_5
value: 29.568
- type: precision_at_1
value: 22.551
- type: precision_at_10
value: 4.917
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 11.583
- type: precision_at_5
value: 8.133
- type: recall_at_1
value: 21.356
- type: recall_at_10
value: 42.489
- type: recall_at_100
value: 67.128
- type: recall_at_1000
value: 87.441
- type: recall_at_3
value: 31.165
- type: recall_at_5
value: 35.853
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 12.306000000000001
- type: map_at_10
value: 21.523
- type: map_at_100
value: 23.358
- type: map_at_1000
value: 23.541
- type: map_at_3
value: 17.809
- type: map_at_5
value: 19.631
- type: mrr_at_1
value: 27.948
- type: mrr_at_10
value: 40.355000000000004
- type: mrr_at_100
value: 41.166000000000004
- type: mrr_at_1000
value: 41.203
- type: mrr_at_3
value: 36.819
- type: mrr_at_5
value: 38.958999999999996
- type: ndcg_at_1
value: 27.948
- type: ndcg_at_10
value: 30.462
- type: ndcg_at_100
value: 37.473
- type: ndcg_at_1000
value: 40.717999999999996
- type: ndcg_at_3
value: 24.646
- type: ndcg_at_5
value: 26.642
- type: precision_at_1
value: 27.948
- type: precision_at_10
value: 9.648
- type: precision_at_100
value: 1.7239999999999998
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 18.48
- type: precision_at_5
value: 14.293
- type: recall_at_1
value: 12.306000000000001
- type: recall_at_10
value: 37.181
- type: recall_at_100
value: 61.148
- type: recall_at_1000
value: 79.401
- type: recall_at_3
value: 22.883
- type: recall_at_5
value: 28.59
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 9.357
- type: map_at_10
value: 18.849
- type: map_at_100
value: 25.369000000000003
- type: map_at_1000
value: 26.950000000000003
- type: map_at_3
value: 13.625000000000002
- type: map_at_5
value: 15.956999999999999
- type: mrr_at_1
value: 67.75
- type: mrr_at_10
value: 74.734
- type: mrr_at_100
value: 75.1
- type: mrr_at_1000
value: 75.10900000000001
- type: mrr_at_3
value: 73.542
- type: mrr_at_5
value: 74.167
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 39.873999999999995
- type: ndcg_at_100
value: 43.098
- type: ndcg_at_1000
value: 50.69200000000001
- type: ndcg_at_3
value: 44.856
- type: ndcg_at_5
value: 42.138999999999996
- type: precision_at_1
value: 67.75
- type: precision_at_10
value: 31.1
- type: precision_at_100
value: 9.303
- type: precision_at_1000
value: 2.0060000000000002
- type: precision_at_3
value: 48.25
- type: precision_at_5
value: 40.949999999999996
- type: recall_at_1
value: 9.357
- type: recall_at_10
value: 23.832
- type: recall_at_100
value: 47.906
- type: recall_at_1000
value: 71.309
- type: recall_at_3
value: 14.512
- type: recall_at_5
value: 18.3
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 49.655
- type: f1
value: 45.51976190938951
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 62.739999999999995
- type: map_at_10
value: 73.07000000000001
- type: map_at_100
value: 73.398
- type: map_at_1000
value: 73.41
- type: map_at_3
value: 71.33800000000001
- type: map_at_5
value: 72.423
- type: mrr_at_1
value: 67.777
- type: mrr_at_10
value: 77.873
- type: mrr_at_100
value: 78.091
- type: mrr_at_1000
value: 78.094
- type: mrr_at_3
value: 76.375
- type: mrr_at_5
value: 77.316
- type: ndcg_at_1
value: 67.777
- type: ndcg_at_10
value: 78.24
- type: ndcg_at_100
value: 79.557
- type: ndcg_at_1000
value: 79.814
- type: ndcg_at_3
value: 75.125
- type: ndcg_at_5
value: 76.834
- type: precision_at_1
value: 67.777
- type: precision_at_10
value: 9.832
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 29.433
- type: precision_at_5
value: 18.665000000000003
- type: recall_at_1
value: 62.739999999999995
- type: recall_at_10
value: 89.505
- type: recall_at_100
value: 95.102
- type: recall_at_1000
value: 96.825
- type: recall_at_3
value: 81.028
- type: recall_at_5
value: 85.28099999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 18.467
- type: map_at_10
value: 30.020999999999997
- type: map_at_100
value: 31.739
- type: map_at_1000
value: 31.934
- type: map_at_3
value: 26.003
- type: map_at_5
value: 28.338
- type: mrr_at_1
value: 35.339999999999996
- type: mrr_at_10
value: 44.108999999999995
- type: mrr_at_100
value: 44.993
- type: mrr_at_1000
value: 45.042
- type: mrr_at_3
value: 41.667
- type: mrr_at_5
value: 43.14
- type: ndcg_at_1
value: 35.339999999999996
- type: ndcg_at_10
value: 37.202
- type: ndcg_at_100
value: 43.852999999999994
- type: ndcg_at_1000
value: 47.235
- type: ndcg_at_3
value: 33.5
- type: ndcg_at_5
value: 34.985
- type: precision_at_1
value: 35.339999999999996
- type: precision_at_10
value: 10.247
- type: precision_at_100
value: 1.7149999999999999
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 22.222
- type: precision_at_5
value: 16.573999999999998
- type: recall_at_1
value: 18.467
- type: recall_at_10
value: 44.080999999999996
- type: recall_at_100
value: 68.72200000000001
- type: recall_at_1000
value: 89.087
- type: recall_at_3
value: 30.567
- type: recall_at_5
value: 36.982
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 35.726
- type: map_at_10
value: 50.207
- type: map_at_100
value: 51.05499999999999
- type: map_at_1000
value: 51.12799999999999
- type: map_at_3
value: 47.576
- type: map_at_5
value: 49.172
- type: mrr_at_1
value: 71.452
- type: mrr_at_10
value: 77.41900000000001
- type: mrr_at_100
value: 77.711
- type: mrr_at_1000
value: 77.723
- type: mrr_at_3
value: 76.39399999999999
- type: mrr_at_5
value: 77.00099999999999
- type: ndcg_at_1
value: 71.452
- type: ndcg_at_10
value: 59.260999999999996
- type: ndcg_at_100
value: 62.424
- type: ndcg_at_1000
value: 63.951
- type: ndcg_at_3
value: 55.327000000000005
- type: ndcg_at_5
value: 57.416999999999994
- type: precision_at_1
value: 71.452
- type: precision_at_10
value: 12.061
- type: precision_at_100
value: 1.455
- type: precision_at_1000
value: 0.166
- type: precision_at_3
value: 34.36
- type: precision_at_5
value: 22.266
- type: recall_at_1
value: 35.726
- type: recall_at_10
value: 60.304
- type: recall_at_100
value: 72.75500000000001
- type: recall_at_1000
value: 82.978
- type: recall_at_3
value: 51.54
- type: recall_at_5
value: 55.665
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 66.63759999999999
- type: ap
value: 61.48938261286748
- type: f1
value: 66.35089269264965
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 20.842
- type: map_at_10
value: 32.992
- type: map_at_100
value: 34.236
- type: map_at_1000
value: 34.286
- type: map_at_3
value: 29.049000000000003
- type: map_at_5
value: 31.391999999999996
- type: mrr_at_1
value: 21.375
- type: mrr_at_10
value: 33.581
- type: mrr_at_100
value: 34.760000000000005
- type: mrr_at_1000
value: 34.803
- type: mrr_at_3
value: 29.704000000000004
- type: mrr_at_5
value: 32.015
- type: ndcg_at_1
value: 21.375
- type: ndcg_at_10
value: 39.905
- type: ndcg_at_100
value: 45.843
- type: ndcg_at_1000
value: 47.083999999999996
- type: ndcg_at_3
value: 31.918999999999997
- type: ndcg_at_5
value: 36.107
- type: precision_at_1
value: 21.375
- type: precision_at_10
value: 6.393
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.663
- type: precision_at_5
value: 10.324
- type: recall_at_1
value: 20.842
- type: recall_at_10
value: 61.17
- type: recall_at_100
value: 88.518
- type: recall_at_1000
value: 97.993
- type: recall_at_3
value: 39.571
- type: recall_at_5
value: 49.653999999999996
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 93.46557227542178
- type: f1
value: 92.87345917772146
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 72.42134062927497
- type: f1
value: 55.03624810959269
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 70.3866845998655
- type: f1
value: 68.9674519872921
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.27774041694687
- type: f1
value: 76.72936190462792
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 31.511745925773337
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 28.764235987575365
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.29353136386601
- type: mrr
value: 33.536774455851685
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 5.702
- type: map_at_10
value: 13.642000000000001
- type: map_at_100
value: 17.503
- type: map_at_1000
value: 19.126
- type: map_at_3
value: 9.748
- type: map_at_5
value: 11.642
- type: mrr_at_1
value: 45.82
- type: mrr_at_10
value: 54.821
- type: mrr_at_100
value: 55.422000000000004
- type: mrr_at_1000
value: 55.452999999999996
- type: mrr_at_3
value: 52.373999999999995
- type: mrr_at_5
value: 53.937000000000005
- type: ndcg_at_1
value: 44.272
- type: ndcg_at_10
value: 36.213
- type: ndcg_at_100
value: 33.829
- type: ndcg_at_1000
value: 42.557
- type: ndcg_at_3
value: 40.814
- type: ndcg_at_5
value: 39.562000000000005
- type: precision_at_1
value: 45.511
- type: precision_at_10
value: 27.214
- type: precision_at_100
value: 8.941
- type: precision_at_1000
value: 2.1870000000000003
- type: precision_at_3
value: 37.874
- type: precision_at_5
value: 34.489
- type: recall_at_1
value: 5.702
- type: recall_at_10
value: 17.638
- type: recall_at_100
value: 34.419
- type: recall_at_1000
value: 66.41
- type: recall_at_3
value: 10.914
- type: recall_at_5
value: 14.032
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 30.567
- type: map_at_10
value: 45.01
- type: map_at_100
value: 46.091
- type: map_at_1000
value: 46.126
- type: map_at_3
value: 40.897
- type: map_at_5
value: 43.301
- type: mrr_at_1
value: 34.56
- type: mrr_at_10
value: 47.725
- type: mrr_at_100
value: 48.548
- type: mrr_at_1000
value: 48.571999999999996
- type: mrr_at_3
value: 44.361
- type: mrr_at_5
value: 46.351
- type: ndcg_at_1
value: 34.531
- type: ndcg_at_10
value: 52.410000000000004
- type: ndcg_at_100
value: 56.999
- type: ndcg_at_1000
value: 57.830999999999996
- type: ndcg_at_3
value: 44.734
- type: ndcg_at_5
value: 48.701
- type: precision_at_1
value: 34.531
- type: precision_at_10
value: 8.612
- type: precision_at_100
value: 1.118
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.307
- type: precision_at_5
value: 14.519000000000002
- type: recall_at_1
value: 30.567
- type: recall_at_10
value: 72.238
- type: recall_at_100
value: 92.154
- type: recall_at_1000
value: 98.375
- type: recall_at_3
value: 52.437999999999995
- type: recall_at_5
value: 61.516999999999996
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 65.98
- type: map_at_10
value: 80.05600000000001
- type: map_at_100
value: 80.76299999999999
- type: map_at_1000
value: 80.786
- type: map_at_3
value: 76.848
- type: map_at_5
value: 78.854
- type: mrr_at_1
value: 75.86
- type: mrr_at_10
value: 83.397
- type: mrr_at_100
value: 83.555
- type: mrr_at_1000
value: 83.557
- type: mrr_at_3
value: 82.033
- type: mrr_at_5
value: 82.97
- type: ndcg_at_1
value: 75.88000000000001
- type: ndcg_at_10
value: 84.58099999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.315
- type: ndcg_at_3
value: 80.902
- type: ndcg_at_5
value: 82.953
- type: precision_at_1
value: 75.88000000000001
- type: precision_at_10
value: 12.986
- type: precision_at_100
value: 1.5110000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.382999999999996
- type: precision_at_5
value: 23.555999999999997
- type: recall_at_1
value: 65.98
- type: recall_at_10
value: 93.716
- type: recall_at_100
value: 99.21799999999999
- type: recall_at_1000
value: 99.97
- type: recall_at_3
value: 83.551
- type: recall_at_5
value: 88.998
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 40.45148482612238
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 55.749490673039126
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 4.903
- type: map_at_10
value: 11.926
- type: map_at_100
value: 13.916999999999998
- type: map_at_1000
value: 14.215
- type: map_at_3
value: 8.799999999999999
- type: map_at_5
value: 10.360999999999999
- type: mrr_at_1
value: 24.099999999999998
- type: mrr_at_10
value: 34.482
- type: mrr_at_100
value: 35.565999999999995
- type: mrr_at_1000
value: 35.619
- type: mrr_at_3
value: 31.433
- type: mrr_at_5
value: 33.243
- type: ndcg_at_1
value: 24.099999999999998
- type: ndcg_at_10
value: 19.872999999999998
- type: ndcg_at_100
value: 27.606
- type: ndcg_at_1000
value: 32.811
- type: ndcg_at_3
value: 19.497999999999998
- type: ndcg_at_5
value: 16.813
- type: precision_at_1
value: 24.099999999999998
- type: precision_at_10
value: 10.08
- type: precision_at_100
value: 2.122
- type: precision_at_1000
value: 0.337
- type: precision_at_3
value: 18.2
- type: precision_at_5
value: 14.62
- type: recall_at_1
value: 4.903
- type: recall_at_10
value: 20.438000000000002
- type: recall_at_100
value: 43.043
- type: recall_at_1000
value: 68.41000000000001
- type: recall_at_3
value: 11.068
- type: recall_at_5
value: 14.818000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 78.58086597995997
- type: cos_sim_spearman
value: 69.63214182814991
- type: euclidean_pearson
value: 72.76175489042691
- type: euclidean_spearman
value: 67.84965161872971
- type: manhattan_pearson
value: 72.73812689782592
- type: manhattan_spearman
value: 67.83610439531277
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 75.13970861325006
- type: cos_sim_spearman
value: 67.5020551515597
- type: euclidean_pearson
value: 66.33415412418276
- type: euclidean_spearman
value: 66.82145056673268
- type: manhattan_pearson
value: 66.55489484006415
- type: manhattan_spearman
value: 66.95147433279057
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 78.85850536483447
- type: cos_sim_spearman
value: 79.1633350177206
- type: euclidean_pearson
value: 72.74090561408477
- type: euclidean_spearman
value: 73.57374448302961
- type: manhattan_pearson
value: 72.92980654233226
- type: manhattan_spearman
value: 73.72777155112588
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 79.51125593897028
- type: cos_sim_spearman
value: 74.46048326701329
- type: euclidean_pearson
value: 70.87726087052985
- type: euclidean_spearman
value: 67.7721470654411
- type: manhattan_pearson
value: 71.05892792135637
- type: manhattan_spearman
value: 67.93472619779037
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 83.8299348880489
- type: cos_sim_spearman
value: 84.47194637929275
- type: euclidean_pearson
value: 78.68768462480418
- type: euclidean_spearman
value: 79.80526323901917
- type: manhattan_pearson
value: 78.6810718151946
- type: manhattan_spearman
value: 79.7820584821254
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 79.99206664843005
- type: cos_sim_spearman
value: 80.96089203722137
- type: euclidean_pearson
value: 71.31216213716365
- type: euclidean_spearman
value: 71.45258140049407
- type: manhattan_pearson
value: 71.26140340402836
- type: manhattan_spearman
value: 71.3896894666943
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 87.35697089594868
- type: cos_sim_spearman
value: 87.78202647220289
- type: euclidean_pearson
value: 84.20969668786667
- type: euclidean_spearman
value: 83.91876425459982
- type: manhattan_pearson
value: 84.24429755612542
- type: manhattan_spearman
value: 83.98826315103398
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 69.06962775868384
- type: cos_sim_spearman
value: 69.34889515492327
- type: euclidean_pearson
value: 69.28108180412313
- type: euclidean_spearman
value: 69.6437114853659
- type: manhattan_pearson
value: 69.39974983734993
- type: manhattan_spearman
value: 69.69057284482079
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 82.42553734213958
- type: cos_sim_spearman
value: 81.38977341532744
- type: euclidean_pearson
value: 76.47494587945522
- type: euclidean_spearman
value: 75.92794860531089
- type: manhattan_pearson
value: 76.4768777169467
- type: manhattan_spearman
value: 75.9252673228599
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 80.78825425914722
- type: mrr
value: 94.60017197762296
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 60.633
- type: map_at_10
value: 70.197
- type: map_at_100
value: 70.758
- type: map_at_1000
value: 70.765
- type: map_at_3
value: 67.082
- type: map_at_5
value: 69.209
- type: mrr_at_1
value: 63.333
- type: mrr_at_10
value: 71.17
- type: mrr_at_100
value: 71.626
- type: mrr_at_1000
value: 71.633
- type: mrr_at_3
value: 68.833
- type: mrr_at_5
value: 70.6
- type: ndcg_at_1
value: 63.333
- type: ndcg_at_10
value: 74.697
- type: ndcg_at_100
value: 76.986
- type: ndcg_at_1000
value: 77.225
- type: ndcg_at_3
value: 69.527
- type: ndcg_at_5
value: 72.816
- type: precision_at_1
value: 63.333
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.103
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.889000000000003
- type: precision_at_5
value: 18.2
- type: recall_at_1
value: 60.633
- type: recall_at_10
value: 87.36699999999999
- type: recall_at_100
value: 97.333
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 73.656
- type: recall_at_5
value: 82.083
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.76633663366337
- type: cos_sim_ap
value: 93.84024096781063
- type: cos_sim_f1
value: 88.08080808080808
- type: cos_sim_precision
value: 88.9795918367347
- type: cos_sim_recall
value: 87.2
- type: dot_accuracy
value: 99.46336633663367
- type: dot_ap
value: 75.78127156965245
- type: dot_f1
value: 71.41403865717193
- type: dot_precision
value: 72.67080745341616
- type: dot_recall
value: 70.19999999999999
- type: euclidean_accuracy
value: 99.67524752475248
- type: euclidean_ap
value: 88.61274955249769
- type: euclidean_f1
value: 82.30852211434735
- type: euclidean_precision
value: 89.34426229508196
- type: euclidean_recall
value: 76.3
- type: manhattan_accuracy
value: 99.67722772277227
- type: manhattan_ap
value: 88.77516158012779
- type: manhattan_f1
value: 82.36536430834212
- type: manhattan_precision
value: 87.24832214765101
- type: manhattan_recall
value: 78.0
- type: max_accuracy
value: 99.76633663366337
- type: max_ap
value: 93.84024096781063
- type: max_f1
value: 88.08080808080808
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 59.20812266121527
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 33.954248554638056
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 51.52800990025549
- type: mrr
value: 52.360394915541974
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
value: 30.737881131277355
- type: cos_sim_spearman
value: 31.45979323917254
- type: dot_pearson
value: 26.24686017962023
- type: dot_spearman
value: 25.006732878791745
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.253
- type: map_at_10
value: 2.1399999999999997
- type: map_at_100
value: 12.873000000000001
- type: map_at_1000
value: 31.002000000000002
- type: map_at_3
value: 0.711
- type: map_at_5
value: 1.125
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 94.0
- type: ndcg_at_10
value: 84.881
- type: ndcg_at_100
value: 64.694
- type: ndcg_at_1000
value: 56.85
- type: ndcg_at_3
value: 90.061
- type: ndcg_at_5
value: 87.155
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 88.8
- type: precision_at_100
value: 65.7
- type: precision_at_1000
value: 25.080000000000002
- type: precision_at_3
value: 92.667
- type: precision_at_5
value: 90.0
- type: recall_at_1
value: 0.253
- type: recall_at_10
value: 2.292
- type: recall_at_100
value: 15.78
- type: recall_at_1000
value: 53.015
- type: recall_at_3
value: 0.7270000000000001
- type: recall_at_5
value: 1.162
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 2.116
- type: map_at_10
value: 9.625
- type: map_at_100
value: 15.641
- type: map_at_1000
value: 17.127
- type: map_at_3
value: 4.316
- type: map_at_5
value: 6.208
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 48.083999999999996
- type: mrr_at_100
value: 48.631
- type: mrr_at_1000
value: 48.649
- type: mrr_at_3
value: 42.857
- type: mrr_at_5
value: 46.224
- type: ndcg_at_1
value: 29.592000000000002
- type: ndcg_at_10
value: 25.430999999999997
- type: ndcg_at_100
value: 36.344
- type: ndcg_at_1000
value: 47.676
- type: ndcg_at_3
value: 26.144000000000002
- type: ndcg_at_5
value: 26.304
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 24.082
- type: precision_at_100
value: 7.714
- type: precision_at_1000
value: 1.5310000000000001
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 26.939
- type: recall_at_1
value: 2.116
- type: recall_at_10
value: 16.794
- type: recall_at_100
value: 47.452
- type: recall_at_1000
value: 82.312
- type: recall_at_3
value: 5.306
- type: recall_at_5
value: 9.306000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 67.709
- type: ap
value: 13.541535578501716
- type: f1
value: 52.569619919446794
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 56.850594227504246
- type: f1
value: 57.233377364910574
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 39.463722986090474
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.09131549144662
- type: cos_sim_ap
value: 66.86677647503386
- type: cos_sim_f1
value: 62.94631710362049
- type: cos_sim_precision
value: 59.73933649289099
- type: cos_sim_recall
value: 66.51715039577837
- type: dot_accuracy
value: 80.27656911247541
- type: dot_ap
value: 54.291720398612085
- type: dot_f1
value: 54.77150537634409
- type: dot_precision
value: 47.58660957571039
- type: dot_recall
value: 64.5118733509235
- type: euclidean_accuracy
value: 82.76211480002385
- type: euclidean_ap
value: 62.430397690753296
- type: euclidean_f1
value: 59.191590539356774
- type: euclidean_precision
value: 56.296119971435374
- type: euclidean_recall
value: 62.401055408970976
- type: manhattan_accuracy
value: 82.7561542588067
- type: manhattan_ap
value: 62.41882051995577
- type: manhattan_f1
value: 59.32101002778785
- type: manhattan_precision
value: 54.71361711611321
- type: manhattan_recall
value: 64.77572559366754
- type: max_accuracy
value: 84.09131549144662
- type: max_ap
value: 66.86677647503386
- type: max_f1
value: 62.94631710362049
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.79574649745798
- type: cos_sim_ap
value: 85.28960532524223
- type: cos_sim_f1
value: 77.98460043358001
- type: cos_sim_precision
value: 75.78090948714224
- type: cos_sim_recall
value: 80.32029565753002
- type: dot_accuracy
value: 85.5939767920208
- type: dot_ap
value: 76.14131706694056
- type: dot_f1
value: 72.70246298696868
- type: dot_precision
value: 65.27012127894156
- type: dot_recall
value: 82.04496458269172
- type: euclidean_accuracy
value: 86.72332828812046
- type: euclidean_ap
value: 80.84854809178995
- type: euclidean_f1
value: 72.47657499809551
- type: euclidean_precision
value: 71.71717171717171
- type: euclidean_recall
value: 73.25223283030489
- type: manhattan_accuracy
value: 86.7563162184189
- type: manhattan_ap
value: 80.87598895575626
- type: manhattan_f1
value: 72.54617892068092
- type: manhattan_precision
value: 68.49268225960881
- type: manhattan_recall
value: 77.10963966738528
- type: max_accuracy
value: 88.79574649745798
- type: max_ap
value: 85.28960532524223
- type: max_f1
value: 77.98460043358001
---
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
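For readers who want to see how these pieces fit together, below is a minimal, hypothetical sketch of the same configuration assembled in `sentence-transformers`; the base checkpoint path and the toy training pair are placeholders (the real run used MS MARCO-style query/passage pairs), so treat it as an illustration of the parameters above rather than the actual training script.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
# Placeholder path; the actual run started from a GPT-J-based SGPT checkpoint.
model = SentenceTransformer("path/to/base-model")
# Toy (query, positive passage) pair standing in for the real training data.
train_examples = [
InputExample(texts=["what is a sentence embedding",
"A sentence embedding maps a piece of text to a single vector."]),
]
# shuffle=True yields the RandomSampler/BatchSampler combination listed above.
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
# MultipleNegativesRankingLoss with the scale / cosine-similarity settings above.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)
# fit() arguments mirroring the parameter block above.
model.fit(
train_objectives=[(train_dataloader, train_loss)],
epochs=10,
scheduler="WarmupLinear",
warmup_steps=1000,
optimizer_params={"lr": 5e-05},
weight_decay=0.01,
max_grad_norm=1,
)
```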
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
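As a complement to the architecture block above, here is a minimal usage sketch; the repository id is an assumption (adjust it to wherever the checkpoint is hosted), and the codebase linked in the Usage section documents the exact query/document formatting, so this only illustrates the shape of the output.
```python
from sentence_transformers import SentenceTransformer
# Assumed repository id for this checkpoint; verify against the hosting page.
model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")
# Weighted-mean pooling over GPT-J hidden states yields one 4096-dim vector per text.
embeddings = model.encode([
"How do I bake sourdough bread?",
"Step-by-step instructions for preparing sourdough at home.",
])
print(embeddings.shape)  # expected: (2, 4096)
```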
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
LeroyDyer/LCARS_AI_StarTrek_Computer | LeroyDyer | text2text-generation | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"LCARS",
"Star-Trek",
"128k-Context",
"chemistry",
"biology",
"finance",
"legal",
"art",
"code",
"medical",
"text-generation-inference",
"text2text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-11T17:17:18 | 2024-10-22T04:31:42 | 83 | 4 | ---
language:
- en
library_name: transformers
license: mit
pipeline_tag: text2text-generation
tags:
- LCARS
- Star-Trek
- 128k-Context
- mistral
- chemistry
- biology
- finance
- legal
- art
- code
- medical
- text-generation-inference
---
If anybody has Star Trek data, please send it: this starship computer database archive needs it!
Then I can correctly theme this model to stay in its role as a starship computer.
Alongside any space data from NASA, I have collected some MUFON files for which I am still framing the correct prompts, both for recall and for interrogation.
I shall also be adding a lot of biblical and historical data from sacred texts, so that generated discussions can feature philosophers debating ancient history and how to solve the problems they encountered in their own lives, using historical and factual data, and playing their roles after a biography and character profile has been generated for the model to adopt; they should also be amazed by each other's achievements, depending on their periods.
We need multiple roles and characters for these discussions, and as many historical facts and histories as possible, to enhance this model's ability to discern whether "ancient aliens" claims are true or false (so we need astrological, astronomical, seismological, and ecological data for the periods of history we know, as well as the unfounded suppositions from YouTube subtitles, another useful source of themed data!).
This model is a collection of merged models built via various merge methods, reclaiming previous models that would otherwise be orphaned by their parent models.
This model is the model of models, so it may not remember some tasks, or it may in fact remember them all and perform highly!
There were some very bad NSFW merges, from role play to erotica, as well as various characters and roles downloaded into the model.
So those models were merged into other models that had been specifically trained for maths, medical data, coding, or even translation.
The models were heavily DPO-trained, with various newer methodologies installed; the Deep Mind series is a special series which contains self-correction, recall, visuo-spatial ... step-by-step thinking.
So the multi-merge often fixes these errors between models, as well as training gaps. Hopefully they all took and merged well,
performing even unknown and unprogrammed tasks. | [
"TRANSLATION"
] | [
"MEDICAL DATA"
] |
RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-03T15:41:22 | 2024-11-03T15:43:35 | 83 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-160m-deduped - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-160m-deduped/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-160m-deduped.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q2_K.gguf) | Q2_K | 0.07GB |
| [pythia-160m-deduped.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [pythia-160m-deduped.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q3_K.gguf) | Q3_K | 0.09GB |
| [pythia-160m-deduped.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [pythia-160m-deduped.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q3_K_L.gguf) | Q3_K_L | 0.09GB |
| [pythia-160m-deduped.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.IQ4_XS.gguf) | IQ4_XS | 0.09GB |
| [pythia-160m-deduped.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q4_0.gguf) | Q4_0 | 0.1GB |
| [pythia-160m-deduped.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [pythia-160m-deduped.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [pythia-160m-deduped.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q4_K.gguf) | Q4_K | 0.1GB |
| [pythia-160m-deduped.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [pythia-160m-deduped.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q4_1.gguf) | Q4_1 | 0.1GB |
| [pythia-160m-deduped.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q5_0.gguf) | Q5_0 | 0.11GB |
| [pythia-160m-deduped.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [pythia-160m-deduped.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q5_K.gguf) | Q5_K | 0.12GB |
| [pythia-160m-deduped.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [pythia-160m-deduped.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q5_1.gguf) | Q5_1 | 0.12GB |
| [pythia-160m-deduped.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q6_K.gguf) | Q6_K | 0.13GB |
| [pythia-160m-deduped.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-160m-deduped-gguf/blob/main/pythia-160m-deduped.Q8_0.gguf) | Q8_0 | 0.16GB |
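A minimal sketch of loading one of the files above through the `llama-cpp-python` bindings; this assumes those bindings (or any GGUF runner with GPT-NeoX support) are installed, and the file name is simply one row picked from the table.
```python
from llama_cpp import Llama
# Any quant from the table works; smaller quants trade quality for file size.
llm = Llama(model_path="pythia-160m-deduped.Q4_K_M.gguf")
out = llm("Hello, I am", max_tokens=32)
print(out["choices"][0]["text"])
```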
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-160M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-160M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-160M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-160M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-160M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed most likely by the model need not produce the
most “accurate” text. Never rely on Pythia-160M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-160M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-160M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-160M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
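As a quick arithmetic check on the token counts above (an aside added here, not part of the original card):
```python
tokens_per_step = 2_097_152            # the 2M batch size, in tokens
print(143_000 * tokens_per_step)       # 299,892,736,000 tokens seen over training
print(1_000 * tokens_per_step)         # 2,097,152,000 tokens between saved checkpoints
```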
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
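The gap between the two parameter columns is the embedding parameters; here is a quick check on the 160M row (an aside, with the 50,304 row count inferred from the arithmetic rather than stated in the card; it is consistent with a padded GPT-NeoX vocabulary and untied input/output embedding matrices).
```python
total, non_embedding, d_model = 162_322_944, 85_056_000, 768
embedding_params = total - non_embedding        # 77,266,944
vocab_rows = embedding_params // (2 * d_model)  # two matrices: input and output embeddings
print(embedding_params, vocab_rows)             # 77266944 50304
```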
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
mav23/pythia-1b-deduped-GGUF | mav23 | null | [
"gguf",
"pytorch",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-11-20T15:16:59 | 2024-11-20T15:26:48 | 83 | 0 | ---
datasets:
- EleutherAI/the_pile_deduplicated
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed most likely by the model need not produce the
most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
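A hedged sketch of re-running one of these evaluations through the harness's Python API; it assumes a recent (0.4.x) release where `simple_evaluate` is exported at the package root (older releases expose it as `lm_eval.evaluator.simple_evaluate`), and the task list is just a small sample of what is plotted below.
```python
import lm_eval
# Score this checkpoint on a couple of the tasks shown in the plots below.
results = lm_eval.simple_evaluate(
model="hf",
model_args="pretrained=EleutherAI/pythia-1b-deduped",
tasks=["lambada_openai", "piqa"],
batch_size=8,
)
print(results["results"])
```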
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
Alignment-Lab-AI/e5-mistral-7b-instruct | Alignment-Lab-AI | feature-extraction | [
"sentence-transformers",
"pytorch",
"safetensors",
"mistral",
"feature-extraction",
"mteb",
"transformers",
"en",
"arxiv:2401.00368",
"arxiv:2104.08663",
"arxiv:2210.07316",
"arxiv:2212.03533",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-17T20:41:59 | 2024-12-17T20:45:30 | 81 | 0 | ---
language:
- en
license: mit
tags:
- mteb
- sentence-transformers
- transformers
model-index:
- name: e5-mistral-7b-instruct
results:
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 37.863226091673866
- type: cos_sim_spearman
value: 38.98733013335281
- type: euclidean_pearson
value: 37.51783380497874
- type: euclidean_spearman
value: 38.98733012753365
- type: manhattan_pearson
value: 37.26706888081721
- type: manhattan_spearman
value: 38.709750161903834
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 43.33924583134623
- type: cos_sim_spearman
value: 42.84316155158754
- type: euclidean_pearson
value: 45.62709879515238
- type: euclidean_spearman
value: 42.843155921732404
- type: manhattan_pearson
value: 45.4786950991229
- type: manhattan_spearman
value: 42.657334751855984
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.68656716417911
- type: ap
value: 41.71522322900398
- type: f1
value: 72.37207703532552
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.04710920770879
- type: ap
value: 83.42622221864045
- type: f1
value: 72.14388257905772
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.93103448275862
- type: ap
value: 26.039284760509513
- type: f1
value: 64.81092954450712
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.21627408993577
- type: ap
value: 24.876490553983036
- type: f1
value: 63.8773359684989
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 95.90679999999999
- type: ap
value: 94.32357863164454
- type: f1
value: 95.90485634708557
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.786
- type: f1
value: 55.31211995815146
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.26
- type: f1
value: 52.156230111544986
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 50.33
- type: f1
value: 49.195023008878145
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.3
- type: f1
value: 48.434470184108
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.68599999999999
- type: f1
value: 47.62681775202072
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.238
- type: f1
value: 45.014030559653705
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 53.076
- type: map_at_100
value: 53.657999999999994
- type: map_at_1000
value: 53.659
- type: map_at_3
value: 48.234
- type: map_at_5
value: 51.121
- type: mrr_at_1
value: 37.269000000000005
- type: mrr_at_10
value: 53.335
- type: mrr_at_100
value: 53.916
- type: mrr_at_1000
value: 53.918
- type: mrr_at_3
value: 48.518
- type: mrr_at_5
value: 51.406
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 61.882000000000005
- type: ndcg_at_100
value: 64.165
- type: ndcg_at_1000
value: 64.203
- type: ndcg_at_3
value: 52.049
- type: ndcg_at_5
value: 57.199
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 8.982999999999999
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 21.029
- type: precision_at_5
value: 15.092
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 89.82900000000001
- type: recall_at_100
value: 99.36
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 63.087
- type: recall_at_5
value: 75.46199999999999
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.45119266859667
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 45.4958298992051
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 66.98177472838887
- type: mrr
value: 79.91854636591478
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.67086498650698
- type: cos_sim_spearman
value: 85.54773239564638
- type: euclidean_pearson
value: 86.48229161588425
- type: euclidean_spearman
value: 85.54773239564638
- type: manhattan_pearson
value: 86.67533327742343
- type: manhattan_spearman
value: 85.76099026691983
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 50.31998888922809
- type: cos_sim_spearman
value: 50.6369940530675
- type: euclidean_pearson
value: 50.055544636296055
- type: euclidean_spearman
value: 50.63699405154838
- type: manhattan_pearson
value: 50.00739378036807
- type: manhattan_spearman
value: 50.607237418676945
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.5615866388309
- type: f1
value: 99.49895615866389
- type: precision
value: 99.46764091858039
- type: recall
value: 99.5615866388309
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.19656614571869
- type: f1
value: 99.08650671362535
- type: precision
value: 99.0314769975787
- type: recall
value: 99.19656614571869
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.0256321440942
- type: f1
value: 97.83743216718624
- type: precision
value: 97.74390947927492
- type: recall
value: 98.0256321440942
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26276987888363
- type: f1
value: 99.22766368264
- type: precision
value: 99.21011058451816
- type: recall
value: 99.26276987888363
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.22727272727272
- type: f1
value: 88.17411732496673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.530637846246975
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 40.23505728593893
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 44.419028279451275
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 42.5820277929776
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 77.67811726152972
- type: mrr
value: 80.99003968253969
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 78.66055354534922
- type: mrr
value: 81.66119047619047
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.162333333333333
- type: map_at_10
value: 37.22291666666667
- type: map_at_100
value: 38.56733333333333
- type: map_at_1000
value: 38.684250000000006
- type: map_at_3
value: 34.22858333333333
- type: map_at_5
value: 35.852500000000006
- type: mrr_at_1
value: 32.459833333333336
- type: mrr_at_10
value: 41.65358333333333
- type: mrr_at_100
value: 42.566916666666664
- type: mrr_at_1000
value: 42.61766666666667
- type: mrr_at_3
value: 39.210499999999996
- type: mrr_at_5
value: 40.582166666666666
- type: ndcg_at_1
value: 32.459833333333336
- type: ndcg_at_10
value: 42.96758333333333
- type: ndcg_at_100
value: 48.5065
- type: ndcg_at_1000
value: 50.556583333333336
- type: ndcg_at_3
value: 38.004416666666664
- type: ndcg_at_5
value: 40.25916666666667
- type: precision_at_1
value: 32.459833333333336
- type: precision_at_10
value: 7.664583333333333
- type: precision_at_100
value: 1.2349999999999999
- type: precision_at_1000
value: 0.15966666666666668
- type: precision_at_3
value: 17.731166666666663
- type: precision_at_5
value: 12.575333333333335
- type: recall_at_1
value: 27.162333333333333
- type: recall_at_10
value: 55.44158333333334
- type: recall_at_100
value: 79.56966666666666
- type: recall_at_1000
value: 93.45224999999999
- type: recall_at_3
value: 41.433083333333336
- type: recall_at_5
value: 47.31108333333333
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.539
- type: map_at_10
value: 28.494999999999997
- type: map_at_100
value: 30.568
- type: map_at_1000
value: 30.741000000000003
- type: map_at_3
value: 23.846999999999998
- type: map_at_5
value: 26.275
- type: mrr_at_1
value: 37.394
- type: mrr_at_10
value: 50.068
- type: mrr_at_100
value: 50.727
- type: mrr_at_1000
value: 50.751000000000005
- type: mrr_at_3
value: 46.938
- type: mrr_at_5
value: 48.818
- type: ndcg_at_1
value: 37.394
- type: ndcg_at_10
value: 38.349
- type: ndcg_at_100
value: 45.512
- type: ndcg_at_1000
value: 48.321
- type: ndcg_at_3
value: 32.172
- type: ndcg_at_5
value: 34.265
- type: precision_at_1
value: 37.394
- type: precision_at_10
value: 11.927999999999999
- type: precision_at_100
value: 1.966
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 24.126
- type: precision_at_5
value: 18.306
- type: recall_at_1
value: 16.539
- type: recall_at_10
value: 44.504
- type: recall_at_100
value: 68.605
- type: recall_at_1000
value: 84.1
- type: recall_at_3
value: 29.008
- type: recall_at_5
value: 35.58
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 19.482
- type: map_at_10
value: 28.622999999999998
- type: map_at_100
value: 30.262
- type: map_at_1000
value: 30.432
- type: map_at_3
value: 25.647
- type: map_at_5
value: 27.128000000000004
- type: mrr_at_1
value: 30.408
- type: mrr_at_10
value: 37.188
- type: mrr_at_100
value: 38.196000000000005
- type: mrr_at_1000
value: 38.273
- type: mrr_at_3
value: 35.067
- type: mrr_at_5
value: 36.124
- type: ndcg_at_1
value: 30.408
- type: ndcg_at_10
value: 34.215
- type: ndcg_at_100
value: 41.349999999999994
- type: ndcg_at_1000
value: 44.689
- type: ndcg_at_3
value: 30.264999999999997
- type: ndcg_at_5
value: 31.572
- type: precision_at_1
value: 30.408
- type: precision_at_10
value: 7.6770000000000005
- type: precision_at_100
value: 1.352
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 17.213
- type: precision_at_5
value: 12.198
- type: recall_at_1
value: 19.482
- type: recall_at_10
value: 42.368
- type: recall_at_100
value: 72.694
- type: recall_at_1000
value: 95.602
- type: recall_at_3
value: 30.101
- type: recall_at_5
value: 34.708
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 71.16055321707758
- type: cos_sim_ap
value: 80.21073839711723
- type: cos_sim_f1
value: 72.9740932642487
- type: cos_sim_precision
value: 65.53136050623488
- type: cos_sim_recall
value: 82.3240589198036
- type: dot_accuracy
value: 71.16055321707758
- type: dot_ap
value: 80.212299264122
- type: dot_f1
value: 72.9740932642487
- type: dot_precision
value: 65.53136050623488
- type: dot_recall
value: 82.3240589198036
- type: euclidean_accuracy
value: 71.16055321707758
- type: euclidean_ap
value: 80.21076298680417
- type: euclidean_f1
value: 72.9740932642487
- type: euclidean_precision
value: 65.53136050623488
- type: euclidean_recall
value: 82.3240589198036
- type: manhattan_accuracy
value: 70.71557426337944
- type: manhattan_ap
value: 79.93448977199749
- type: manhattan_f1
value: 72.83962726826877
- type: manhattan_precision
value: 62.7407908077053
- type: manhattan_recall
value: 86.81318681318682
- type: max_accuracy
value: 71.16055321707758
- type: max_ap
value: 80.212299264122
- type: max_f1
value: 72.9740932642487
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 60.643
- type: map_at_10
value: 69.011
- type: map_at_100
value: 69.533
- type: map_at_1000
value: 69.545
- type: map_at_3
value: 67.167
- type: map_at_5
value: 68.12700000000001
- type: mrr_at_1
value: 60.801
- type: mrr_at_10
value: 69.111
- type: mrr_at_100
value: 69.6
- type: mrr_at_1000
value: 69.611
- type: mrr_at_3
value: 67.229
- type: mrr_at_5
value: 68.214
- type: ndcg_at_1
value: 60.801
- type: ndcg_at_10
value: 73.128
- type: ndcg_at_100
value: 75.614
- type: ndcg_at_1000
value: 75.92
- type: ndcg_at_3
value: 69.261
- type: ndcg_at_5
value: 70.973
- type: precision_at_1
value: 60.801
- type: precision_at_10
value: 8.662
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 25.149
- type: precision_at_5
value: 15.953999999999999
- type: recall_at_1
value: 60.643
- type: recall_at_10
value: 85.959
- type: recall_at_100
value: 97.576
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 75.184
- type: recall_at_5
value: 79.32000000000001
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.183
- type: map_at_10
value: 23.958
- type: map_at_100
value: 34.354
- type: map_at_1000
value: 36.442
- type: map_at_3
value: 16.345000000000002
- type: map_at_5
value: 19.647000000000002
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 80.976
- type: mrr_at_100
value: 81.256
- type: mrr_at_1000
value: 81.262
- type: mrr_at_3
value: 79.958
- type: mrr_at_5
value: 80.37100000000001
- type: ndcg_at_1
value: 62.0
- type: ndcg_at_10
value: 48.894999999999996
- type: ndcg_at_100
value: 53.867
- type: ndcg_at_1000
value: 61.304
- type: ndcg_at_3
value: 53.688
- type: ndcg_at_5
value: 50.900999999999996
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 39.525
- type: precision_at_100
value: 12.323
- type: precision_at_1000
value: 2.539
- type: precision_at_3
value: 57.49999999999999
- type: precision_at_5
value: 49.1
- type: recall_at_1
value: 10.183
- type: recall_at_10
value: 29.296
- type: recall_at_100
value: 60.394999999999996
- type: recall_at_1000
value: 83.12
- type: recall_at_3
value: 17.495
- type: recall_at_5
value: 22.235
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.613999999999997
- type: map_at_10
value: 79.77300000000001
- type: map_at_100
value: 82.71
- type: map_at_1000
value: 82.75
- type: map_at_3
value: 55.92700000000001
- type: map_at_5
value: 70.085
- type: mrr_at_1
value: 90.7
- type: mrr_at_10
value: 93.438
- type: mrr_at_100
value: 93.504
- type: mrr_at_1000
value: 93.50699999999999
- type: mrr_at_3
value: 93.125
- type: mrr_at_5
value: 93.34
- type: ndcg_at_1
value: 90.7
- type: ndcg_at_10
value: 87.023
- type: ndcg_at_100
value: 90.068
- type: ndcg_at_1000
value: 90.43299999999999
- type: ndcg_at_3
value: 86.339
- type: ndcg_at_5
value: 85.013
- type: precision_at_1
value: 90.7
- type: precision_at_10
value: 41.339999999999996
- type: precision_at_100
value: 4.806
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 76.983
- type: precision_at_5
value: 64.69
- type: recall_at_1
value: 26.613999999999997
- type: recall_at_10
value: 87.681
- type: recall_at_100
value: 97.44699999999999
- type: recall_at_1000
value: 99.348
- type: recall_at_3
value: 57.809999999999995
- type: recall_at_5
value: 74.258
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 30.9
- type: map_at_10
value: 40.467
- type: map_at_100
value: 41.423
- type: map_at_1000
value: 41.463
- type: map_at_3
value: 37.25
- type: map_at_5
value: 39.31
- type: mrr_at_1
value: 30.9
- type: mrr_at_10
value: 40.467
- type: mrr_at_100
value: 41.423
- type: mrr_at_1000
value: 41.463
- type: mrr_at_3
value: 37.25
- type: mrr_at_5
value: 39.31
- type: ndcg_at_1
value: 30.9
- type: ndcg_at_10
value: 45.957
- type: ndcg_at_100
value: 50.735
- type: ndcg_at_1000
value: 51.861999999999995
- type: ndcg_at_3
value: 39.437
- type: ndcg_at_5
value: 43.146
- type: precision_at_1
value: 30.9
- type: precision_at_10
value: 6.35
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 15.267
- type: precision_at_5
value: 10.96
- type: recall_at_1
value: 30.9
- type: recall_at_10
value: 63.5
- type: recall_at_100
value: 86.1
- type: recall_at_1000
value: 95.1
- type: recall_at_3
value: 45.800000000000004
- type: recall_at_5
value: 54.800000000000004
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 49.765
- type: f1
value: 45.93242203574485
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 75.138
- type: map_at_10
value: 84.21300000000001
- type: map_at_100
value: 84.43
- type: map_at_1000
value: 84.441
- type: map_at_3
value: 83.071
- type: map_at_5
value: 83.853
- type: mrr_at_1
value: 80.948
- type: mrr_at_10
value: 88.175
- type: mrr_at_100
value: 88.24
- type: mrr_at_1000
value: 88.241
- type: mrr_at_3
value: 87.516
- type: mrr_at_5
value: 87.997
- type: ndcg_at_1
value: 80.948
- type: ndcg_at_10
value: 87.84100000000001
- type: ndcg_at_100
value: 88.576
- type: ndcg_at_1000
value: 88.75699999999999
- type: ndcg_at_3
value: 86.176
- type: ndcg_at_5
value: 87.214
- type: precision_at_1
value: 80.948
- type: precision_at_10
value: 10.632
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.193
- type: precision_at_5
value: 20.663
- type: recall_at_1
value: 75.138
- type: recall_at_10
value: 94.89699999999999
- type: recall_at_100
value: 97.751
- type: recall_at_1000
value: 98.833
- type: recall_at_3
value: 90.455
- type: recall_at_5
value: 93.085
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.45
- type: map_at_10
value: 48.596000000000004
- type: map_at_100
value: 50.70400000000001
- type: map_at_1000
value: 50.83800000000001
- type: map_at_3
value: 42.795
- type: map_at_5
value: 46.085
- type: mrr_at_1
value: 56.172999999999995
- type: mrr_at_10
value: 64.35300000000001
- type: mrr_at_100
value: 64.947
- type: mrr_at_1000
value: 64.967
- type: mrr_at_3
value: 62.653999999999996
- type: mrr_at_5
value: 63.534
- type: ndcg_at_1
value: 56.172999999999995
- type: ndcg_at_10
value: 56.593
- type: ndcg_at_100
value: 62.942
- type: ndcg_at_1000
value: 64.801
- type: ndcg_at_3
value: 53.024
- type: ndcg_at_5
value: 53.986999999999995
- type: precision_at_1
value: 56.172999999999995
- type: precision_at_10
value: 15.494
- type: precision_at_100
value: 2.222
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 35.185
- type: precision_at_5
value: 25.556
- type: recall_at_1
value: 29.45
- type: recall_at_10
value: 62.882000000000005
- type: recall_at_100
value: 85.56099999999999
- type: recall_at_1000
value: 96.539
- type: recall_at_3
value: 47.911
- type: recall_at_5
value: 54.52
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.581
- type: map_at_10
value: 68.401
- type: map_at_100
value: 69.207
- type: map_at_1000
value: 69.25200000000001
- type: map_at_3
value: 64.689
- type: map_at_5
value: 67.158
- type: mrr_at_1
value: 79.163
- type: mrr_at_10
value: 85.22999999999999
- type: mrr_at_100
value: 85.386
- type: mrr_at_1000
value: 85.39099999999999
- type: mrr_at_3
value: 84.432
- type: mrr_at_5
value: 84.952
- type: ndcg_at_1
value: 79.163
- type: ndcg_at_10
value: 75.721
- type: ndcg_at_100
value: 78.411
- type: ndcg_at_1000
value: 79.23599999999999
- type: ndcg_at_3
value: 70.68799999999999
- type: ndcg_at_5
value: 73.694
- type: precision_at_1
value: 79.163
- type: precision_at_10
value: 16.134
- type: precision_at_100
value: 1.821
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 46.446
- type: precision_at_5
value: 30.242
- type: recall_at_1
value: 39.581
- type: recall_at_10
value: 80.66799999999999
- type: recall_at_100
value: 91.033
- type: recall_at_1000
value: 96.408
- type: recall_at_3
value: 69.669
- type: recall_at_5
value: 75.604
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 45.04809542131589
- type: f1
value: 37.01181779071118
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.78120000000001
- type: ap
value: 92.52931921594387
- type: f1
value: 94.77902110732532
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.430320593468394
- type: f1
value: 79.95467268178068
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 71.05801751913393
- type: cos_sim_spearman
value: 75.47954644971965
- type: euclidean_pearson
value: 74.27472296759713
- type: euclidean_spearman
value: 75.47954201369866
- type: manhattan_pearson
value: 74.30508190186474
- type: manhattan_spearman
value: 75.51326518159436
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 24.21110921666315
- type: mrr
value: 22.863492063492064
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 61.38400000000001
- type: map_at_10
value: 70.895
- type: map_at_100
value: 71.314
- type: map_at_1000
value: 71.331
- type: map_at_3
value: 69.016
- type: map_at_5
value: 70.179
- type: mrr_at_1
value: 63.481
- type: mrr_at_10
value: 71.543
- type: mrr_at_100
value: 71.91300000000001
- type: mrr_at_1000
value: 71.928
- type: mrr_at_3
value: 69.90899999999999
- type: mrr_at_5
value: 70.907
- type: ndcg_at_1
value: 63.481
- type: ndcg_at_10
value: 74.833
- type: ndcg_at_100
value: 76.705
- type: ndcg_at_1000
value: 77.13600000000001
- type: ndcg_at_3
value: 71.236
- type: ndcg_at_5
value: 73.199
- type: precision_at_1
value: 63.481
- type: precision_at_10
value: 9.179
- type: precision_at_100
value: 1.011
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 27.044
- type: precision_at_5
value: 17.272000000000002
- type: recall_at_1
value: 61.38400000000001
- type: recall_at_10
value: 86.318
- type: recall_at_100
value: 94.786
- type: recall_at_1000
value: 98.14500000000001
- type: recall_at_3
value: 76.717
- type: recall_at_5
value: 81.416
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.363999999999997
- type: map_at_10
value: 36.022
- type: map_at_100
value: 37.229
- type: map_at_1000
value: 37.274
- type: map_at_3
value: 32.131
- type: map_at_5
value: 34.391
- type: mrr_at_1
value: 24.069
- type: mrr_at_10
value: 36.620000000000005
- type: mrr_at_100
value: 37.769999999999996
- type: mrr_at_1000
value: 37.809
- type: mrr_at_3
value: 32.846
- type: mrr_at_5
value: 35.02
- type: ndcg_at_1
value: 24.069
- type: ndcg_at_10
value: 43.056
- type: ndcg_at_100
value: 48.754
- type: ndcg_at_1000
value: 49.829
- type: ndcg_at_3
value: 35.167
- type: ndcg_at_5
value: 39.168
- type: precision_at_1
value: 24.069
- type: precision_at_10
value: 6.762
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.957
- type: precision_at_5
value: 11.023
- type: recall_at_1
value: 23.363999999999997
- type: recall_at_10
value: 64.696
- type: recall_at_100
value: 90.795
- type: recall_at_1000
value: 98.892
- type: recall_at_3
value: 43.247
- type: recall_at_5
value: 52.86300000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.11947104423166
- type: f1
value: 95.89561841159332
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.97548605240912
- type: f1
value: 92.17133696717212
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.37224816544364
- type: f1
value: 93.19978829237863
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.28719072972127
- type: f1
value: 91.28448045979604
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.8131946934385
- type: f1
value: 88.27883019362747
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 85.52260397830018
- type: f1
value: 85.15528226728568
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 86.10807113543093
- type: f1
value: 70.88498219072167
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.77120315581854
- type: f1
value: 57.97153920153224
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.93995997331554
- type: f1
value: 58.839203810064866
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.801440651425
- type: f1
value: 58.68009647839332
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 72.90785227680172
- type: f1
value: 49.83760954655788
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.24050632911391
- type: f1
value: 52.0562553541082
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.47948890383321
- type: f1
value: 63.334877563135485
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.2871553463349
- type: f1
value: 43.17658050605427
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.174176193678555
- type: f1
value: 59.236659587042425
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.226630800269
- type: f1
value: 60.951842696956184
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.94283792871555
- type: f1
value: 61.40057652844215
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.480833893745796
- type: f1
value: 52.5298332072816
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.52858103564223
- type: f1
value: 69.3770851919204
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.09213180901143
- type: f1
value: 71.13518469365879
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.31203765971756
- type: f1
value: 66.05906970865144
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.57162071284465
- type: f1
value: 77.7866172598823
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.09414929388029
- type: f1
value: 72.5712594833695
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.20914593140553
- type: f1
value: 68.90619124909186
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.74243443174176
- type: f1
value: 64.72743141749955
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.11096166778749
- type: f1
value: 72.61849933064694
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.22394082044384
- type: f1
value: 62.43648797607235
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.44855413584399
- type: f1
value: 66.56851670913659
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.4149293880296
- type: f1
value: 66.12960877904776
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.916610625420304
- type: f1
value: 54.02534600927991
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.71351714862138
- type: f1
value: 69.70227985126316
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.91257565568257
- type: f1
value: 57.06811572144974
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.25218560860793
- type: f1
value: 72.48057563104247
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.35507733691998
- type: f1
value: 73.03024649541128
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.918628110289184
- type: f1
value: 54.75590124456177
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 52.548755884330866
- type: f1
value: 51.5356975360209
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 46.44922663080027
- type: f1
value: 44.561114416830975
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.95763281775386
- type: f1
value: 50.68367245122476
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.20645595158035
- type: f1
value: 71.78450093258185
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.226630800269
- type: f1
value: 57.53988988993337
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.44922663080027
- type: f1
value: 48.58809018065056
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.3752521856086
- type: f1
value: 49.91373941436425
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.85205110961668
- type: f1
value: 67.05660019588582
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 49.1492938802959
- type: f1
value: 46.717578025393195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.93140551445865
- type: f1
value: 67.45406609372205
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.82851378614662
- type: f1
value: 71.15951964393868
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.84868863483524
- type: f1
value: 71.76056802364877
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.27236045729657
- type: f1
value: 72.48733090101163
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.63012777404168
- type: f1
value: 66.56444015346203
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.62743779421655
- type: f1
value: 73.82720656992142
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.15198386012105
- type: f1
value: 64.41418309797744
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.8399462004035
- type: f1
value: 56.050989519693886
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.86684599865501
- type: f1
value: 70.80682480844303
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.36718224613316
- type: f1
value: 54.998746471013774
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.150638870208475
- type: f1
value: 49.79179342620099
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.50638870208473
- type: f1
value: 49.778960742003555
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.906523201076
- type: f1
value: 66.75784022138245
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.73234700739744
- type: f1
value: 65.75016141148413
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.06792199058508
- type: f1
value: 67.90334782594083
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.09145931405515
- type: f1
value: 58.88703095210731
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.17014122394083
- type: f1
value: 68.43676277921544
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.99327505043712
- type: f1
value: 72.26813373392943
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.13987895090787
- type: f1
value: 70.29309514467575
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.37256220578345
- type: f1
value: 72.56456170538992
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 47.205783456624076
- type: f1
value: 45.905999859074434
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.8352387357095
- type: f1
value: 69.43553987525273
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.00403496973773
- type: f1
value: 65.97477215779143
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.04976462676531
- type: f1
value: 67.24581993778398
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.882985877605925
- type: f1
value: 59.995293199988794
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.75857431069267
- type: f1
value: 76.52031675299841
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.03496973772697
- type: f1
value: 79.25548063175344
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.96570275722931
- type: f1
value: 72.19110435289122
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.38735709482178
- type: f1
value: 82.34495627619785
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.83994620040352
- type: f1
value: 78.91526355393667
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.7350369872226
- type: f1
value: 75.919437344927
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.21721587088096
- type: f1
value: 70.82973286243262
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.59784801613988
- type: f1
value: 78.47383161087423
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.64021519838602
- type: f1
value: 68.45118053027653
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.51042367182245
- type: f1
value: 72.90013022879003
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.0551445864156
- type: f1
value: 73.45871761713292
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.54606590450571
- type: f1
value: 57.72711794953869
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.40753194351042
- type: f1
value: 76.8157455506521
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.58372562205783
- type: f1
value: 65.2654868709758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.39273705447208
- type: f1
value: 78.3592956594837
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.62004034969739
- type: f1
value: 79.78673754501855
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.29051782111634
- type: f1
value: 63.12502587609454
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.51849361129791
- type: f1
value: 56.32320906403241
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.41761936785474
- type: f1
value: 49.113762010098306
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.547410894418284
- type: f1
value: 56.87580674198118
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.89038332212507
- type: f1
value: 79.09210140529848
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.503698722259585
- type: f1
value: 61.45718858568352
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.02824478816408
- type: f1
value: 52.732738981386504
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.23671822461331
- type: f1
value: 52.688080372545286
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.5312710154674
- type: f1
value: 74.59368478550698
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.192333557498316
- type: f1
value: 50.18302290152229
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.6960322797579
- type: f1
value: 75.25331182714856
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.47679892400808
- type: f1
value: 78.24044732352424
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.36718224613315
- type: f1
value: 77.2714452985389
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.96234028244788
- type: f1
value: 78.21282127011372
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.19435104236717
- type: f1
value: 73.1963711292812
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.52118359112306
- type: f1
value: 80.4179964390288
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.65837256220577
- type: f1
value: 73.07156989634905
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.02824478816409
- type: f1
value: 62.972399027713664
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.87020847343645
- type: f1
value: 78.224240866849
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.6570275722932
- type: f1
value: 63.274871811412545
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.760591795561524
- type: f1
value: 56.73711528075771
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.26967047747142
- type: f1
value: 55.74735330863165
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.46133154001345
- type: f1
value: 71.9644168952811
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.70880968392737
- type: f1
value: 73.61543141070884
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.0437121721587
- type: f1
value: 74.83359868879921
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.05110961667788
- type: f1
value: 66.25869819274315
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.52118359112306
- type: f1
value: 75.92098546052303
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.92938802958977
- type: f1
value: 79.79833572573796
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.86617350369872
- type: f1
value: 77.42645654909516
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 44.6
- type: map_at_10
value: 50.019000000000005
- type: map_at_100
value: 50.611
- type: map_at_1000
value: 50.67
- type: map_at_3
value: 48.699999999999996
- type: map_at_5
value: 49.455
- type: mrr_at_1
value: 44.800000000000004
- type: mrr_at_10
value: 50.119
- type: mrr_at_100
value: 50.711
- type: mrr_at_1000
value: 50.77
- type: mrr_at_3
value: 48.8
- type: mrr_at_5
value: 49.555
- type: ndcg_at_1
value: 44.6
- type: ndcg_at_10
value: 52.754
- type: ndcg_at_100
value: 55.935
- type: ndcg_at_1000
value: 57.607
- type: ndcg_at_3
value: 50.012
- type: ndcg_at_5
value: 51.393
- type: precision_at_1
value: 44.6
- type: precision_at_10
value: 6.140000000000001
- type: precision_at_100
value: 0.77
- type: precision_at_1000
value: 0.09
- type: precision_at_3
value: 17.933
- type: precision_at_5
value: 11.44
- type: recall_at_1
value: 44.6
- type: recall_at_10
value: 61.4
- type: recall_at_100
value: 77.0
- type: recall_at_1000
value: 90.4
- type: recall_at_3
value: 53.800000000000004
- type: recall_at_5
value: 57.199999999999996
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 38.192667527616315
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.44738902946689
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.59661273103955
- type: mrr
value: 33.82024242497473
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 73.31333333333335
- type: f1
value: 73.0873466527602
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.471
- type: map_at_10
value: 14.142
- type: map_at_100
value: 18.179000000000002
- type: map_at_1000
value: 19.772000000000002
- type: map_at_3
value: 9.716
- type: map_at_5
value: 11.763
- type: mrr_at_1
value: 51.393
- type: mrr_at_10
value: 58.814
- type: mrr_at_100
value: 59.330000000000005
- type: mrr_at_1000
value: 59.35
- type: mrr_at_3
value: 56.398
- type: mrr_at_5
value: 58.038999999999994
- type: ndcg_at_1
value: 49.69
- type: ndcg_at_10
value: 38.615
- type: ndcg_at_100
value: 35.268
- type: ndcg_at_1000
value: 43.745
- type: ndcg_at_3
value: 43.187
- type: ndcg_at_5
value: 41.528999999999996
- type: precision_at_1
value: 51.083999999999996
- type: precision_at_10
value: 29.474
- type: precision_at_100
value: 9.167
- type: precision_at_1000
value: 2.2089999999999996
- type: precision_at_3
value: 40.351
- type: precision_at_5
value: 36.285000000000004
- type: recall_at_1
value: 5.471
- type: recall_at_10
value: 19.242
- type: recall_at_100
value: 37.14
- type: recall_at_1000
value: 68.35900000000001
- type: recall_at_3
value: 10.896
- type: recall_at_5
value: 14.75
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.499
- type: map_at_10
value: 55.862
- type: map_at_100
value: 56.667
- type: map_at_1000
value: 56.684999999999995
- type: map_at_3
value: 51.534
- type: map_at_5
value: 54.2
- type: mrr_at_1
value: 44.351
- type: mrr_at_10
value: 58.567
- type: mrr_at_100
value: 59.099000000000004
- type: mrr_at_1000
value: 59.109
- type: mrr_at_3
value: 55.218999999999994
- type: mrr_at_5
value: 57.391999999999996
- type: ndcg_at_1
value: 44.322
- type: ndcg_at_10
value: 63.535
- type: ndcg_at_100
value: 66.654
- type: ndcg_at_1000
value: 66.991
- type: ndcg_at_3
value: 55.701
- type: ndcg_at_5
value: 60.06700000000001
- type: precision_at_1
value: 44.322
- type: precision_at_10
value: 10.026
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 24.865000000000002
- type: precision_at_5
value: 17.48
- type: recall_at_1
value: 39.499
- type: recall_at_10
value: 84.053
- type: recall_at_100
value: 97.11
- type: recall_at_1000
value: 99.493
- type: recall_at_3
value: 64.091
- type: recall_at_5
value: 74.063
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 61.18029236599891
- type: cos_sim_ap
value: 64.18398769398412
- type: cos_sim_f1
value: 67.96347757046446
- type: cos_sim_precision
value: 54.4529262086514
- type: cos_sim_recall
value: 90.3907074973601
- type: dot_accuracy
value: 61.18029236599891
- type: dot_ap
value: 64.18393484706077
- type: dot_f1
value: 67.96347757046446
- type: dot_precision
value: 54.4529262086514
- type: dot_recall
value: 90.3907074973601
- type: euclidean_accuracy
value: 61.18029236599891
- type: euclidean_ap
value: 64.18395024821486
- type: euclidean_f1
value: 67.96347757046446
- type: euclidean_precision
value: 54.4529262086514
- type: euclidean_recall
value: 90.3907074973601
- type: manhattan_accuracy
value: 61.451001624255554
- type: manhattan_ap
value: 64.38232708763513
- type: manhattan_f1
value: 68.05860805860804
- type: manhattan_precision
value: 52.10319685922602
- type: manhattan_recall
value: 98.09926082365365
- type: max_accuracy
value: 61.451001624255554
- type: max_ap
value: 64.38232708763513
- type: max_f1
value: 68.05860805860804
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 92.19000000000001
- type: ap
value: 89.73918431886767
- type: f1
value: 92.17175032574507
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 15.079320253752224
- type: cos_sim_spearman
value: 16.813772504404263
- type: euclidean_pearson
value: 19.476541162041762
- type: euclidean_spearman
value: 16.813772498098782
- type: manhattan_pearson
value: 19.497429832915277
- type: manhattan_spearman
value: 16.869600674180607
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 30.36139599797913
- type: cos_sim_spearman
value: 31.80296402851347
- type: euclidean_pearson
value: 30.10387888252793
- type: euclidean_spearman
value: 31.80297780103808
- type: manhattan_pearson
value: 30.86720382849436
- type: manhattan_spearman
value: 32.70491131366606
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.911
- type: map_at_10
value: 86.087
- type: map_at_100
value: 86.701
- type: map_at_1000
value: 86.715
- type: map_at_3
value: 83.231
- type: map_at_5
value: 85.051
- type: mrr_at_1
value: 82.75
- type: mrr_at_10
value: 88.759
- type: mrr_at_100
value: 88.844
- type: mrr_at_1000
value: 88.844
- type: mrr_at_3
value: 87.935
- type: mrr_at_5
value: 88.504
- type: ndcg_at_1
value: 82.75
- type: ndcg_at_10
value: 89.605
- type: ndcg_at_100
value: 90.664
- type: ndcg_at_1000
value: 90.733
- type: ndcg_at_3
value: 87.03
- type: ndcg_at_5
value: 88.473
- type: precision_at_1
value: 82.75
- type: precision_at_10
value: 13.575000000000001
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.153
- type: precision_at_5
value: 25.008000000000003
- type: recall_at_1
value: 71.911
- type: recall_at_10
value: 96.261
- type: recall_at_100
value: 99.72800000000001
- type: recall_at_1000
value: 99.993
- type: recall_at_3
value: 88.762
- type: recall_at_5
value: 92.949
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 57.711581165572376
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 66.48938885750297
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.7379999999999995
- type: map_at_10
value: 9.261
- type: map_at_100
value: 11.001
- type: map_at_1000
value: 11.262
- type: map_at_3
value: 6.816
- type: map_at_5
value: 8.0
- type: mrr_at_1
value: 18.4
- type: mrr_at_10
value: 28.755999999999997
- type: mrr_at_100
value: 29.892000000000003
- type: mrr_at_1000
value: 29.961
- type: mrr_at_3
value: 25.467000000000002
- type: mrr_at_5
value: 27.332
- type: ndcg_at_1
value: 18.4
- type: ndcg_at_10
value: 16.296
- type: ndcg_at_100
value: 23.52
- type: ndcg_at_1000
value: 28.504
- type: ndcg_at_3
value: 15.485
- type: ndcg_at_5
value: 13.471
- type: precision_at_1
value: 18.4
- type: precision_at_10
value: 8.469999999999999
- type: precision_at_100
value: 1.8950000000000002
- type: precision_at_1000
value: 0.309
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.84
- type: recall_at_1
value: 3.7379999999999995
- type: recall_at_10
value: 17.185
- type: recall_at_100
value: 38.397
- type: recall_at_1000
value: 62.798
- type: recall_at_3
value: 8.896999999999998
- type: recall_at_5
value: 12.021999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 86.43977757480083
- type: cos_sim_spearman
value: 82.64182475199533
- type: euclidean_pearson
value: 83.71756009999591
- type: euclidean_spearman
value: 82.64182331395057
- type: manhattan_pearson
value: 83.8028936913025
- type: manhattan_spearman
value: 82.71024597804252
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.85653060698912
- type: cos_sim_spearman
value: 79.65598885228324
- type: euclidean_pearson
value: 83.1205137628455
- type: euclidean_spearman
value: 79.65629387709038
- type: manhattan_pearson
value: 83.71108853545837
- type: manhattan_spearman
value: 80.25617619716708
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.22921688565664
- type: cos_sim_spearman
value: 88.42662103041957
- type: euclidean_pearson
value: 87.91679798473325
- type: euclidean_spearman
value: 88.42662103041957
- type: manhattan_pearson
value: 88.16927537961303
- type: manhattan_spearman
value: 88.81581680062541
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 86.77261424554293
- type: cos_sim_spearman
value: 84.53930146434155
- type: euclidean_pearson
value: 85.67420491389697
- type: euclidean_spearman
value: 84.53929771783851
- type: manhattan_pearson
value: 85.74306784515618
- type: manhattan_spearman
value: 84.7399304675314
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 89.86138395166455
- type: cos_sim_spearman
value: 90.42577823022054
- type: euclidean_pearson
value: 89.8787763797515
- type: euclidean_spearman
value: 90.42577823022054
- type: manhattan_pearson
value: 89.9592937492158
- type: manhattan_spearman
value: 90.63535505335524
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 86.5176674585941
- type: cos_sim_spearman
value: 87.6842917085397
- type: euclidean_pearson
value: 86.70213081520711
- type: euclidean_spearman
value: 87.6842917085397
- type: manhattan_pearson
value: 86.83702628983627
- type: manhattan_spearman
value: 87.87791000374443
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 83.86395454805867
- type: cos_sim_spearman
value: 83.69454595252267
- type: euclidean_pearson
value: 83.04743892608313
- type: euclidean_spearman
value: 83.69454026433006
- type: manhattan_pearson
value: 83.4032095553322
- type: manhattan_spearman
value: 84.11527379013802
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 81.80249894729546
- type: cos_sim_spearman
value: 81.87004960533409
- type: euclidean_pearson
value: 80.0392760044179
- type: euclidean_spearman
value: 81.87004960533409
- type: manhattan_pearson
value: 80.38096542355912
- type: manhattan_spearman
value: 82.40774679630341
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 77.6158201787172
- type: cos_sim_spearman
value: 77.934651044009
- type: euclidean_pearson
value: 77.7874683895269
- type: euclidean_spearman
value: 77.934651044009
- type: manhattan_pearson
value: 78.36151849193052
- type: manhattan_spearman
value: 78.52439586349938
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.04363311392207
- type: cos_sim_spearman
value: 87.30483659369973
- type: euclidean_pearson
value: 87.62634489502616
- type: euclidean_spearman
value: 87.30483659369973
- type: manhattan_pearson
value: 88.02340837141445
- type: manhattan_spearman
value: 87.55012003294
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 91.69172851958248
- type: cos_sim_spearman
value: 91.7546879482416
- type: euclidean_pearson
value: 91.84843039183963
- type: euclidean_spearman
value: 91.7546879482416
- type: manhattan_pearson
value: 91.72325753804357
- type: manhattan_spearman
value: 91.55330259513397
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 73.95572901084864
- type: cos_sim_spearman
value: 72.56217821552626
- type: euclidean_pearson
value: 74.24242980323574
- type: euclidean_spearman
value: 72.56217821552626
- type: manhattan_pearson
value: 74.57473362519922
- type: manhattan_spearman
value: 72.76048826648497
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.93329396008296
- type: cos_sim_spearman
value: 88.2406635486219
- type: euclidean_pearson
value: 87.49687343908533
- type: euclidean_spearman
value: 88.2406635486219
- type: manhattan_pearson
value: 88.14088309231084
- type: manhattan_spearman
value: 88.93314020908534
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.70124451546057
- type: cos_sim_spearman
value: 87.45988160052252
- type: euclidean_pearson
value: 88.44395505247728
- type: euclidean_spearman
value: 87.45988160052252
- type: manhattan_pearson
value: 88.69269783495425
- type: manhattan_spearman
value: 87.65383425621
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.64109149761346
- type: cos_sim_spearman
value: 88.06459637689733
- type: euclidean_pearson
value: 88.02313315797703
- type: euclidean_spearman
value: 88.06459637689733
- type: manhattan_pearson
value: 88.28328539133253
- type: manhattan_spearman
value: 88.06605708379142
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9040028177525
- type: cos_sim_spearman
value: 89.68152202933464
- type: euclidean_pearson
value: 89.23684469601253
- type: euclidean_spearman
value: 89.68152202933464
- type: manhattan_pearson
value: 89.59504307277454
- type: manhattan_spearman
value: 89.88060100313582
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.69891585325125
- type: cos_sim_spearman
value: 88.25252785071736
- type: euclidean_pearson
value: 87.99932873748662
- type: euclidean_spearman
value: 88.25252785071736
- type: manhattan_pearson
value: 88.26959683009446
- type: manhattan_spearman
value: 88.32583227300715
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.53235909794135
- type: cos_sim_spearman
value: 66.97521740529574
- type: euclidean_pearson
value: 68.19502223613912
- type: euclidean_spearman
value: 66.97521740529574
- type: manhattan_pearson
value: 68.39070714774539
- type: manhattan_spearman
value: 67.1072812364868
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 43.715742021204775
- type: cos_sim_spearman
value: 49.12255971271453
- type: euclidean_pearson
value: 40.76848562610837
- type: euclidean_spearman
value: 49.12255971271453
- type: manhattan_pearson
value: 40.92204625614112
- type: manhattan_spearman
value: 49.23333793661129
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.35268345563588
- type: cos_sim_spearman
value: 66.99661626042061
- type: euclidean_pearson
value: 65.85589122857066
- type: euclidean_spearman
value: 66.99661626042061
- type: manhattan_pearson
value: 66.78454301512294
- type: manhattan_spearman
value: 67.17570330149233
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 33.36599908204445
- type: cos_sim_spearman
value: 39.20768331939503
- type: euclidean_pearson
value: 22.16066769530468
- type: euclidean_spearman
value: 39.20768331939503
- type: manhattan_pearson
value: 22.386053195546022
- type: manhattan_spearman
value: 39.70172817465986
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.06813956986753
- type: cos_sim_spearman
value: 68.72065117995668
- type: euclidean_pearson
value: 66.97373456344194
- type: euclidean_spearman
value: 68.72065117995668
- type: manhattan_pearson
value: 67.34907265771595
- type: manhattan_spearman
value: 68.73705769957843
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.17664865207108
- type: cos_sim_spearman
value: 54.115568323148864
- type: euclidean_pearson
value: 48.56418162879182
- type: euclidean_spearman
value: 54.115568323148864
- type: manhattan_pearson
value: 48.85951643453165
- type: manhattan_spearman
value: 54.13599784169052
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.87514136275987
- type: cos_sim_spearman
value: 60.82923573674973
- type: euclidean_pearson
value: 53.724183308215615
- type: euclidean_spearman
value: 60.82923573674973
- type: manhattan_pearson
value: 53.954305573102445
- type: manhattan_spearman
value: 60.957483900644526
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.55001413648593
- type: cos_sim_spearman
value: 63.395777040381276
- type: euclidean_pearson
value: 59.869972550293305
- type: euclidean_spearman
value: 63.395777040381276
- type: manhattan_pearson
value: 61.16195496847885
- type: manhattan_spearman
value: 63.41968682525581
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 79.13334972675852
- type: cos_sim_spearman
value: 79.86263136371802
- type: euclidean_pearson
value: 78.2433603592541
- type: euclidean_spearman
value: 79.86263136371802
- type: manhattan_pearson
value: 78.87337106318412
- type: manhattan_spearman
value: 80.31230584758441
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.559700748242356
- type: cos_sim_spearman
value: 60.92342109509558
- type: euclidean_pearson
value: 66.07256437521119
- type: euclidean_spearman
value: 60.92342109509558
- type: manhattan_pearson
value: 67.72769744612663
- type: manhattan_spearman
value: 59.64714507774168
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.93491616145891
- type: cos_sim_spearman
value: 75.84242594400156
- type: euclidean_pearson
value: 74.87279745626121
- type: euclidean_spearman
value: 75.84242594400156
- type: manhattan_pearson
value: 76.47764144677505
- type: manhattan_spearman
value: 77.08411157845183
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.75624124540954
- type: cos_sim_spearman
value: 75.8667941654703
- type: euclidean_pearson
value: 73.74314588451925
- type: euclidean_spearman
value: 75.8667941654703
- type: manhattan_pearson
value: 73.99641425871518
- type: manhattan_spearman
value: 76.1982840205817
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 75.20898141298767
- type: cos_sim_spearman
value: 73.18060375331436
- type: euclidean_pearson
value: 75.44489280944619
- type: euclidean_spearman
value: 73.18060375331436
- type: manhattan_pearson
value: 75.65451039552286
- type: manhattan_spearman
value: 72.97744006123156
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.04278252247816
- type: cos_sim_spearman
value: 71.8846446821539
- type: euclidean_pearson
value: 73.16043307050612
- type: euclidean_spearman
value: 71.8846446821539
- type: manhattan_pearson
value: 74.76905116839777
- type: manhattan_spearman
value: 72.66237093518471
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.71033173838558
- type: cos_sim_spearman
value: 75.043122881885
- type: euclidean_pearson
value: 72.77579680345087
- type: euclidean_spearman
value: 75.043122881885
- type: manhattan_pearson
value: 72.99901534854922
- type: manhattan_spearman
value: 75.15418335015957
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.75733447190482
- type: cos_sim_spearman
value: 61.38968334176681
- type: euclidean_pearson
value: 55.479231520643744
- type: euclidean_spearman
value: 61.38968334176681
- type: manhattan_pearson
value: 56.05230571465244
- type: manhattan_spearman
value: 62.69383054007398
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 41.72244325050302
- type: cos_sim_spearman
value: 54.47476909084119
- type: euclidean_pearson
value: 43.94629756436873
- type: euclidean_spearman
value: 54.47476909084119
- type: manhattan_pearson
value: 46.36533046394657
- type: manhattan_spearman
value: 54.87509243633636
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.75183711835146
- type: cos_sim_spearman
value: 84.51542547285167
- type: euclidean_pearson
value: 71.84188960126669
- type: euclidean_spearman
value: 84.51542547285167
- type: manhattan_pearson
value: 73.94847166379994
- type: manhattan_spearman
value: 84.51542547285167
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 81.78690149086131
- type: cos_sim_spearman
value: 81.81202616916873
- type: euclidean_pearson
value: 80.98792254251062
- type: euclidean_spearman
value: 81.81202616916873
- type: manhattan_pearson
value: 81.46953021346732
- type: manhattan_spearman
value: 82.34259562492315
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.68273341294419
- type: cos_sim_spearman
value: 88.59927164210958
- type: euclidean_pearson
value: 88.10745681818025
- type: euclidean_spearman
value: 88.59927164210958
- type: manhattan_pearson
value: 88.25166703784649
- type: manhattan_spearman
value: 88.85343247873482
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.3340463345719
- type: mrr
value: 96.5182611506141
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.967000000000006
- type: map_at_10
value: 71.873
- type: map_at_100
value: 72.271
- type: map_at_1000
value: 72.292
- type: map_at_3
value: 69.006
- type: map_at_5
value: 70.856
- type: mrr_at_1
value: 63.666999999999994
- type: mrr_at_10
value: 72.929
- type: mrr_at_100
value: 73.26
- type: mrr_at_1000
value: 73.282
- type: mrr_at_3
value: 71.111
- type: mrr_at_5
value: 72.328
- type: ndcg_at_1
value: 63.666999999999994
- type: ndcg_at_10
value: 76.414
- type: ndcg_at_100
value: 78.152
- type: ndcg_at_1000
value: 78.604
- type: ndcg_at_3
value: 71.841
- type: ndcg_at_5
value: 74.435
- type: precision_at_1
value: 63.666999999999994
- type: precision_at_10
value: 10.067
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.667
- type: precision_at_5
value: 18.467
- type: recall_at_1
value: 60.967000000000006
- type: recall_at_10
value: 88.922
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 77.228
- type: recall_at_5
value: 83.428
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.82277227722773
- type: cos_sim_ap
value: 95.66279851444406
- type: cos_sim_f1
value: 90.9367088607595
- type: cos_sim_precision
value: 92.1025641025641
- type: cos_sim_recall
value: 89.8
- type: dot_accuracy
value: 99.82277227722773
- type: dot_ap
value: 95.66279851444406
- type: dot_f1
value: 90.9367088607595
- type: dot_precision
value: 92.1025641025641
- type: dot_recall
value: 89.8
- type: euclidean_accuracy
value: 99.82277227722773
- type: euclidean_ap
value: 95.66279851444406
- type: euclidean_f1
value: 90.9367088607595
- type: euclidean_precision
value: 92.1025641025641
- type: euclidean_recall
value: 89.8
- type: manhattan_accuracy
value: 99.82673267326733
- type: manhattan_ap
value: 95.86094873177069
- type: manhattan_f1
value: 91.26788357178096
- type: manhattan_precision
value: 90.06815968841285
- type: manhattan_recall
value: 92.5
- type: max_accuracy
value: 99.82673267326733
- type: max_ap
value: 95.86094873177069
- type: max_f1
value: 91.26788357178096
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 73.09533925852372
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 45.90745648090035
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.91147686504404
- type: mrr
value: 56.03900082760377
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.46908662038217
- type: cos_sim_spearman
value: 31.40325730367437
- type: dot_pearson
value: 31.469083969291894
- type: dot_spearman
value: 31.40325730367437
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.90300783402137
- type: mrr
value: 77.06451972574179
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.82
- type: map_at_10
value: 72.32300000000001
- type: map_at_100
value: 76.198
- type: map_at_1000
value: 76.281
- type: map_at_3
value: 50.719
- type: map_at_5
value: 62.326
- type: mrr_at_1
value: 86.599
- type: mrr_at_10
value: 89.751
- type: mrr_at_100
value: 89.876
- type: mrr_at_1000
value: 89.88000000000001
- type: mrr_at_3
value: 89.151
- type: mrr_at_5
value: 89.519
- type: ndcg_at_1
value: 86.599
- type: ndcg_at_10
value: 80.676
- type: ndcg_at_100
value: 85.03
- type: ndcg_at_1000
value: 85.854
- type: ndcg_at_3
value: 82.057
- type: ndcg_at_5
value: 80.537
- type: precision_at_1
value: 86.599
- type: precision_at_10
value: 40.373
- type: precision_at_100
value: 4.95
- type: precision_at_1000
value: 0.514
- type: precision_at_3
value: 71.918
- type: precision_at_5
value: 60.246
- type: recall_at_1
value: 25.82
- type: recall_at_10
value: 79.905
- type: recall_at_100
value: 93.88499999999999
- type: recall_at_1000
value: 98.073
- type: recall_at_3
value: 52.623
- type: recall_at_5
value: 66.233
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 47.050000000000004
- type: f1
value: 45.704071498353294
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.243
- type: map_at_10
value: 2.278
- type: map_at_100
value: 14.221
- type: map_at_1000
value: 33.474
- type: map_at_3
value: 0.7270000000000001
- type: map_at_5
value: 1.183
- type: mrr_at_1
value: 94.0
- type: mrr_at_10
value: 97.0
- type: mrr_at_100
value: 97.0
- type: mrr_at_1000
value: 97.0
- type: mrr_at_3
value: 97.0
- type: mrr_at_5
value: 97.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 87.249
- type: ndcg_at_100
value: 67.876
- type: ndcg_at_1000
value: 59.205
- type: ndcg_at_3
value: 90.12299999999999
- type: ndcg_at_5
value: 89.126
- type: precision_at_1
value: 94.0
- type: precision_at_10
value: 90.8
- type: precision_at_100
value: 69.28
- type: precision_at_1000
value: 25.85
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.243
- type: recall_at_10
value: 2.392
- type: recall_at_100
value: 16.982
- type: recall_at_1000
value: 55.214
- type: recall_at_3
value: 0.745
- type: recall_at_5
value: 1.2229999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.5
- type: f1
value: 67.05501804646966
- type: precision
value: 65.73261904761904
- type: recall
value: 70.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.14450867052022
- type: f1
value: 70.98265895953759
- type: precision
value: 69.26782273603082
- type: recall
value: 75.14450867052022
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 33.170731707317074
- type: f1
value: 29.92876500193573
- type: precision
value: 28.669145894755648
- type: recall
value: 33.170731707317074
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.13333333333333
- type: precision
value: 93.46666666666667
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.6
- type: f1
value: 99.46666666666665
- type: precision
value: 99.4
- type: recall
value: 99.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.39999999999999
- type: precision
value: 96.0
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.5
- type: f1
value: 92.99666666666667
- type: precision
value: 92.31666666666666
- type: recall
value: 94.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.82089552238806
- type: f1
value: 81.59203980099502
- type: precision
value: 79.60199004975124
- type: recall
value: 85.82089552238806
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.5
- type: f1
value: 75.11246031746032
- type: precision
value: 73.38734126984127
- type: recall
value: 79.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.390243902439025
- type: f1
value: 38.48896631823461
- type: precision
value: 36.57220286488579
- type: recall
value: 44.390243902439025
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.57333333333334
- type: precision
value: 86.34166666666665
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.82138517618469
- type: f1
value: 85.98651854423423
- type: precision
value: 84.79257073424753
- type: recall
value: 88.82138517618469
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.04347826086956
- type: f1
value: 72.32108147606868
- type: precision
value: 70.37207357859532
- type: recall
value: 77.04347826086956
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 53.04347826086957
- type: f1
value: 46.88868184955141
- type: precision
value: 44.71730105643149
- type: recall
value: 53.04347826086957
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.0
- type: f1
value: 62.891813186813195
- type: precision
value: 61.037906162464985
- type: recall
value: 68.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.3
- type: f1
value: 82.82000000000001
- type: precision
value: 81.25690476190475
- type: recall
value: 86.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.87816646562122
- type: f1
value: 63.53054933272062
- type: precision
value: 61.47807816331196
- type: recall
value: 68.87816646562122
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.4
- type: f1
value: 68.99388888888889
- type: precision
value: 66.81035714285713
- type: recall
value: 74.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5
- type: f1
value: 87.93666666666667
- type: precision
value: 86.825
- type: recall
value: 90.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.7
- type: f1
value: 88.09
- type: precision
value: 86.85833333333333
- type: recall
value: 90.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.61904761904762
- type: f1
value: 62.30239247214037
- type: precision
value: 60.340702947845806
- type: recall
value: 67.61904761904762
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.81285714285714
- type: precision
value: 72.21570818070818
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.8
- type: f1
value: 89.66666666666667
- type: precision
value: 88.66666666666666
- type: recall
value: 91.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.6
- type: f1
value: 96.85666666666665
- type: precision
value: 96.50833333333333
- type: recall
value: 97.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.39999999999999
- type: f1
value: 93.98333333333333
- type: precision
value: 93.30000000000001
- type: recall
value: 95.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.0
- type: f1
value: 81.31538461538462
- type: precision
value: 79.70666666666666
- type: recall
value: 85.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.60000000000001
- type: f1
value: 89.81888888888888
- type: precision
value: 89.08583333333333
- type: recall
value: 91.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.3
- type: f1
value: 38.8623088023088
- type: precision
value: 37.03755623461505
- type: recall
value: 44.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.75
- type: precision
value: 93.05
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.1
- type: f1
value: 98.8
- type: precision
value: 98.65
- type: recall
value: 99.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.6765498652291
- type: f1
value: 63.991785393402644
- type: precision
value: 61.7343729944808
- type: recall
value: 69.6765498652291
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.0
- type: f1
value: 42.79341029341029
- type: precision
value: 40.25098358431692
- type: recall
value: 50.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.7
- type: f1
value: 87.19023809523809
- type: precision
value: 86.12595238095237
- type: recall
value: 89.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.72727272727273
- type: f1
value: 37.78789518562245
- type: precision
value: 36.24208471267295
- type: recall
value: 42.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.26205450733752
- type: f1
value: 70.72842833849123
- type: precision
value: 68.93256464011182
- type: recall
value: 75.26205450733752
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.96666666666668
- type: precision
value: 93.42
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.26459143968872
- type: f1
value: 72.40190419178747
- type: precision
value: 70.84954604409856
- type: recall
value: 76.26459143968872
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.82905982905983
- type: f1
value: 52.2100122100122
- type: precision
value: 49.52516619183286
- type: recall
value: 59.82905982905983
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.69999999999999
- type: f1
value: 77.41714285714286
- type: precision
value: 75.64833333333334
- type: recall
value: 81.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.45
- type: precision
value: 93.93333333333334
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.41121495327103
- type: f1
value: 52.73495974430554
- type: precision
value: 50.717067200712066
- type: recall
value: 58.41121495327103
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.3
- type: f1
value: 69.20371794871795
- type: precision
value: 67.6597557997558
- type: recall
value: 73.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5
- type: f1
value: 95.51666666666667
- type: precision
value: 95.05
- type: recall
value: 96.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.4
- type: f1
value: 73.88856643356644
- type: precision
value: 72.01373015873016
- type: recall
value: 78.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.3
- type: f1
value: 94.09666666666668
- type: precision
value: 93.53333333333332
- type: recall
value: 95.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.94
- type: precision
value: 91.10833333333333
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.89999999999999
- type: precision
value: 95.46666666666668
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.5
- type: f1
value: 66.00635642135641
- type: precision
value: 64.36345238095238
- type: recall
value: 70.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.44388888888889
- type: precision
value: 89.5767857142857
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.0
- type: f1
value: 43.15372775372776
- type: precision
value: 41.53152510162313
- type: recall
value: 48.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 16.7
- type: f1
value: 14.198431372549017
- type: precision
value: 13.411765873015872
- type: recall
value: 16.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.7
- type: f1
value: 81.81666666666666
- type: precision
value: 80.10833333333332
- type: recall
value: 85.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.64285714285714
- type: f1
value: 64.745670995671
- type: precision
value: 62.916666666666664
- type: recall
value: 69.64285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 54.665203073545555
- type: f1
value: 48.55366630916923
- type: precision
value: 46.35683318998357
- type: recall
value: 54.665203073545555
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.8
- type: f1
value: 3.808587223587223
- type: precision
value: 3.5653174603174604
- type: recall
value: 4.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.77333333333333
- type: precision
value: 95.39166666666667
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.39999999999999
- type: f1
value: 94.44
- type: precision
value: 93.975
- type: recall
value: 95.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.0
- type: f1
value: 37.024908424908425
- type: precision
value: 35.365992063492065
- type: recall
value: 42.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.7
- type: f1
value: 62.20460835058661
- type: precision
value: 60.590134587634594
- type: recall
value: 66.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.3
- type: f1
value: 96.46666666666667
- type: precision
value: 96.06666666666668
- type: recall
value: 97.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.3
- type: f1
value: 41.96905408317173
- type: precision
value: 40.18741402116402
- type: recall
value: 47.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.2
- type: f1
value: 76.22690476190476
- type: precision
value: 74.63539682539682
- type: recall
value: 80.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.0
- type: f1
value: 94.83333333333333
- type: precision
value: 94.26666666666668
- type: recall
value: 96.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.7
- type: f1
value: 87.24333333333334
- type: precision
value: 86.17
- type: recall
value: 89.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.36496350364964
- type: f1
value: 44.795520780922246
- type: precision
value: 43.09002433090024
- type: recall
value: 50.36496350364964
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 18.8
- type: f1
value: 16.242864357864356
- type: precision
value: 15.466596638655464
- type: recall
value: 18.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.92333333333333
- type: precision
value: 93.30833333333332
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.4
- type: f1
value: 91.42333333333333
- type: precision
value: 90.50833333333334
- type: recall
value: 93.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 26.190476190476193
- type: f1
value: 22.05208151636723
- type: precision
value: 21.09292328042328
- type: recall
value: 26.190476190476193
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.2
- type: f1
value: 14.021009731460952
- type: precision
value: 13.1389886698243
- type: recall
value: 17.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.67494824016563
- type: f1
value: 74.24430641821947
- type: precision
value: 72.50747642051991
- type: recall
value: 78.67494824016563
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.19999999999999
- type: f1
value: 92.54
- type: precision
value: 91.75833333333334
- type: recall
value: 94.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.78666666666666
- type: precision
value: 86.69833333333334
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.7
- type: f1
value: 12.19206214842218
- type: precision
value: 11.526261904761904
- type: recall
value: 14.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.16017316017316
- type: f1
value: 67.44858316286889
- type: precision
value: 65.23809523809523
- type: recall
value: 73.16017316017316
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.19083969465649
- type: f1
value: 70.33078880407125
- type: precision
value: 68.3969465648855
- type: recall
value: 75.19083969465649
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 62.154294032023294
- type: f1
value: 55.86030821838681
- type: precision
value: 53.53509623160277
- type: recall
value: 62.154294032023294
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 83.9652380952381
- type: precision
value: 82.84242424242424
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.50282485875707
- type: f1
value: 91.54425612052731
- type: precision
value: 90.65442561205272
- type: recall
value: 93.50282485875707
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 11.4
- type: f1
value: 9.189775870222714
- type: precision
value: 8.66189886502811
- type: recall
value: 11.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.4
- type: f1
value: 91.88666666666666
- type: precision
value: 91.21444444444444
- type: recall
value: 93.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.0
- type: f1
value: 40.51069226095542
- type: precision
value: 38.57804926010808
- type: recall
value: 46.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.0
- type: f1
value: 89.11333333333333
- type: precision
value: 88.27000000000001
- type: recall
value: 91.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.39999999999999
- type: f1
value: 92.95
- type: precision
value: 92.27000000000001
- type: recall
value: 94.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.2
- type: f1
value: 11.73701698770113
- type: precision
value: 11.079207014736676
- type: recall
value: 14.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.14745308310992
- type: f1
value: 59.665707393589415
- type: precision
value: 57.560853653346946
- type: recall
value: 65.14745308310992
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.39999999999999
- type: f1
value: 94.0
- type: precision
value: 93.33333333333333
- type: recall
value: 95.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.56521739130434
- type: f1
value: 62.92490118577074
- type: precision
value: 60.27009222661397
- type: recall
value: 69.56521739130434
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 40.140845070422536
- type: f1
value: 35.96411804158283
- type: precision
value: 34.89075869357559
- type: recall
value: 40.140845070422536
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.86826347305389
- type: f1
value: 59.646248628284546
- type: precision
value: 57.22982606216139
- type: recall
value: 65.86826347305389
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.48333333333333
- type: precision
value: 92.83666666666667
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.783251231527096
- type: f1
value: 42.006447302013804
- type: precision
value: 40.12747105111637
- type: recall
value: 47.783251231527096
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.71830985915493
- type: f1
value: 64.80266212660578
- type: precision
value: 63.08098591549296
- type: recall
value: 69.71830985915493
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.94871794871796
- type: f1
value: 61.59912309912309
- type: precision
value: 59.17338217338218
- type: recall
value: 67.94871794871796
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333335
- type: precision
value: 94.75
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.14613778705638
- type: f1
value: 65.4349338900487
- type: precision
value: 63.57599255302805
- type: recall
value: 70.14613778705638
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.2
- type: f1
value: 7.622184434339607
- type: precision
value: 7.287048159682417
- type: recall
value: 9.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.85016286644951
- type: f1
value: 72.83387622149837
- type: precision
value: 70.58450959102424
- type: recall
value: 77.85016286644951
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.8
- type: f1
value: 88.84333333333333
- type: precision
value: 87.96666666666665
- type: recall
value: 90.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.14
- type: precision
value: 92.49833333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.25196850393701
- type: f1
value: 80.94488188976378
- type: precision
value: 79.65879265091863
- type: recall
value: 84.25196850393701
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.89666666666666
- type: precision
value: 85.7
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.797783933518005
- type: f1
value: 37.30617360155193
- type: precision
value: 35.34933825792552
- type: recall
value: 42.797783933518005
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 94.93333333333332
- type: precision
value: 94.38333333333333
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 54.807692307692314
- type: f1
value: 49.506903353057204
- type: precision
value: 47.54807692307693
- type: recall
value: 54.807692307692314
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.1
- type: f1
value: 83.61857142857143
- type: precision
value: 81.975
- type: recall
value: 87.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.10000000000001
- type: f1
value: 88.76333333333332
- type: precision
value: 87.67
- type: recall
value: 91.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.28999999999999
- type: precision
value: 90.44500000000001
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 39.97641509433962
- type: f1
value: 33.12271889998028
- type: precision
value: 30.95185381542554
- type: recall
value: 39.97641509433962
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.60000000000001
- type: f1
value: 90.69
- type: precision
value: 89.84500000000001
- type: recall
value: 92.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.07299270072993
- type: f1
value: 93.64355231143554
- type: precision
value: 92.94403892944038
- type: recall
value: 95.07299270072993
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.9
- type: f1
value: 89.61333333333333
- type: precision
value: 88.53333333333333
- type: recall
value: 91.9
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 64.68478289806511
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 57.53010296184097
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.519
- type: map_at_10
value: 10.31
- type: map_at_100
value: 16.027
- type: map_at_1000
value: 17.827
- type: map_at_3
value: 5.721
- type: map_at_5
value: 7.7829999999999995
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 52.642999999999994
- type: mrr_at_100
value: 53.366
- type: mrr_at_1000
value: 53.366
- type: mrr_at_3
value: 48.638999999999996
- type: mrr_at_5
value: 50.578
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 26.394000000000002
- type: ndcg_at_100
value: 36.41
- type: ndcg_at_1000
value: 49.206
- type: ndcg_at_3
value: 31.694
- type: ndcg_at_5
value: 29.529
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.286
- type: precision_at_1000
value: 1.5610000000000002
- type: precision_at_3
value: 34.014
- type: precision_at_5
value: 29.796
- type: recall_at_1
value: 2.519
- type: recall_at_10
value: 17.091
- type: recall_at_100
value: 45.429
- type: recall_at_1000
value: 84.621
- type: recall_at_3
value: 7.208
- type: recall_at_5
value: 10.523
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.58659999999999
- type: ap
value: 14.735696532619
- type: f1
value: 54.23517220069903
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 63.723825693265425
- type: f1
value: 64.02405729449103
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 54.310161547491006
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.77630088812064
- type: cos_sim_ap
value: 81.61725457333809
- type: cos_sim_f1
value: 74.91373801916932
- type: cos_sim_precision
value: 72.63940520446097
- type: cos_sim_recall
value: 77.33509234828496
- type: dot_accuracy
value: 88.77630088812064
- type: dot_ap
value: 81.61725317476251
- type: dot_f1
value: 74.91373801916932
- type: dot_precision
value: 72.63940520446097
- type: dot_recall
value: 77.33509234828496
- type: euclidean_accuracy
value: 88.77630088812064
- type: euclidean_ap
value: 81.61724596869566
- type: euclidean_f1
value: 74.91373801916932
- type: euclidean_precision
value: 72.63940520446097
- type: euclidean_recall
value: 77.33509234828496
- type: manhattan_accuracy
value: 88.67497168742922
- type: manhattan_ap
value: 81.430251048948
- type: manhattan_f1
value: 74.79593118171543
- type: manhattan_precision
value: 71.3635274382938
- type: manhattan_recall
value: 78.57519788918206
- type: max_accuracy
value: 88.77630088812064
- type: max_ap
value: 81.61725457333809
- type: max_f1
value: 74.91373801916932
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.85136026700819
- type: cos_sim_ap
value: 87.74656687446567
- type: cos_sim_f1
value: 80.3221673073403
- type: cos_sim_precision
value: 76.56871640957633
- type: cos_sim_recall
value: 84.46258084385587
- type: dot_accuracy
value: 89.85136026700819
- type: dot_ap
value: 87.74656471395072
- type: dot_f1
value: 80.3221673073403
- type: dot_precision
value: 76.56871640957633
- type: dot_recall
value: 84.46258084385587
- type: euclidean_accuracy
value: 89.85136026700819
- type: euclidean_ap
value: 87.74656885754466
- type: euclidean_f1
value: 80.3221673073403
- type: euclidean_precision
value: 76.56871640957633
- type: euclidean_recall
value: 84.46258084385587
- type: manhattan_accuracy
value: 89.86300306593705
- type: manhattan_ap
value: 87.78807479093082
- type: manhattan_f1
value: 80.31663429471911
- type: manhattan_precision
value: 76.63472970137772
- type: manhattan_recall
value: 84.3701878657222
- type: max_accuracy
value: 89.86300306593705
- type: max_ap
value: 87.78807479093082
- type: max_f1
value: 80.3221673073403
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 32.4
- type: map_at_10
value: 40.961999999999996
- type: map_at_100
value: 41.660000000000004
- type: map_at_1000
value: 41.721000000000004
- type: map_at_3
value: 38.550000000000004
- type: map_at_5
value: 40.06
- type: mrr_at_1
value: 32.4
- type: mrr_at_10
value: 40.961999999999996
- type: mrr_at_100
value: 41.660000000000004
- type: mrr_at_1000
value: 41.721000000000004
- type: mrr_at_3
value: 38.550000000000004
- type: mrr_at_5
value: 40.06
- type: ndcg_at_1
value: 32.4
- type: ndcg_at_10
value: 45.388
- type: ndcg_at_100
value: 49.012
- type: ndcg_at_1000
value: 50.659
- type: ndcg_at_3
value: 40.47
- type: ndcg_at_5
value: 43.232
- type: precision_at_1
value: 32.4
- type: precision_at_10
value: 5.94
- type: precision_at_100
value: 0.769
- type: precision_at_1000
value: 0.09
- type: precision_at_3
value: 15.333
- type: precision_at_5
value: 10.56
- type: recall_at_1
value: 32.4
- type: recall_at_10
value: 59.4
- type: recall_at_100
value: 76.9
- type: recall_at_1000
value: 90.0
- type: recall_at_3
value: 46.0
- type: recall_at_5
value: 52.800000000000004
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.94000000000001
- type: ap
value: 70.57373468481975
- type: f1
value: 85.26264784928323
---
## E5-mistral-7b-instruct
[Improving Text Embeddings with Large Language Models](https://arxiv.org/pdf/2401.00368.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 32 layers and the embedding size is 4096.
## Usage
Below is an example of how to encode queries and passages from the MS-MARCO passage ranking dataset.
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")
# In case you want to reduce the maximum sequence length:
model.max_seq_length = 4096
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
query_embeddings = model.encode(queries, prompt_name="web_search_query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
Have a look at [config_sentence_transformers.json](config_sentence_transformers.json) for the prompts that are pre-configured, such as `web_search_query`, `sts_query`, and `summarization_query`. Additionally, check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for prompts we used for evaluation. You can use these via e.g. `model.encode(queries, prompt="Instruct: Given a claim, find documents that refute the claim\nQuery: ")`.
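For example, a custom instruction can be passed directly through the `prompt` argument. The sketch below reuses the `model` object defined above and the instruction string quoted in the previous paragraph; the claim text itself is purely illustrative.

```python
# The instruction string is the one quoted above; the claim text is illustrative only.
refute_prompt = "Instruct: Given a claim, find documents that refute the claim\nQuery: "
claim_embeddings = model.encode(
    ["Coffee consumption shortens life expectancy."],
    prompt=refute_prompt,
)
print(claim_embeddings.shape)  # (1, 4096)
```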
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
                    attention_mask: Tensor) -> Tensor:
    # With left padding, every sequence's final token sits in the last position,
    # so the attention mask is 1 at position -1 for all rows.
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        # With right padding, index the hidden state of each sequence's last real token.
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-mistral-7b-instruct')
model = AutoModel.from_pretrained('intfloat/e5-mistral-7b-instruct')
max_length = 4096
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Supported Languages
This model is initialized from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
and fine-tuned on a mixture of multilingual datasets.
As a result, it has some multilingual capability.
However, since Mistral-7B-v0.1 is mainly trained on English data, we recommend using this model for English only.
For multilingual use cases, please refer to [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large).
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
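The official evaluation scripts live in that repository. As a rough, unofficial sketch of the same idea (assuming the `mteb` package is installed; the task choice and output folder are illustrative, and this skips the instruction-prompt handling used in the official scripts):

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")

# Run a single illustrative task; the full benchmark covers many more.
evaluation = MTEB(tasks=["STSBenchmark"])
results = evaluation.run(model, output_folder="results/e5-mistral-7b-instruct")
```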
## FAQ
**1. Do I need to add instructions to the query?**
Yes, this is how the model was trained; otherwise you will see a performance degradation.
The task definition should be a one-sentence instruction that describes the task.
This is a way to customize text embeddings for different scenarios through natural language instructions.
Please check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for instructions we used for evaluation.
On the other hand, there is no need to add instructions to the document side.
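As a minimal illustration of that asymmetry (reusing the query/document pair from the usage examples above), only the query is wrapped with the instruction:

```python
# Query side: one-sentence instruction + query.
task = "Given a web search query, retrieve relevant passages that answer the query"
query = f"Instruct: {task}\nQuery: how much protein should a female eat"

# Document side: plain passage text, no instruction.
document = "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day."
```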
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Where are the LoRA-only weights?**
You can find the LoRA-only weights at [https://huggingface.co/intfloat/e5-mistral-7b-instruct/tree/main/lora](https://huggingface.co/intfloat/e5-mistral-7b-instruct/tree/main/lora).
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```bibtex
@article{wang2023improving,
title={Improving Text Embeddings with Large Language Models},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2401.00368},
year={2023}
}
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
Using this model for inputs longer than 4096 tokens is not recommended.
This model's multilingual capability is still inferior to [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) for some cases.
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
nold/CroissantLLMChat-v0.1-GGUF | nold | text2text-generation | [
"gguf",
"legal",
"code",
"text-generation-inference",
"art",
"text2text-generation",
"fr",
"en",
"dataset:croissantllm/croissant_dataset",
"dataset:croissantllm/CroissantLLM-2201-sft",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"arxiv:2402.00786",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-02-14T13:45:16 | 2024-02-15T18:44:52 | 80 | 0 | ---
datasets:
- croissantllm/croissant_dataset
- croissantllm/CroissantLLM-2201-sft
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
license: mit
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLMChat (190k steps + Chat)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 190k steps (2.99T tokens), followed by a final Chat finetuning phase.
https://arxiv.org/abs/2402.00786
For best performance, it should be used with a temperature of 0.3 or more, and with the exact template described below:
```python
chat = [
{"role": "user", "content": "Que puis-je faire à Marseille en hiver?"},
]
chat_input = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
corresponding to:
```python
chat_input = """<|im_start|>user
{USER QUERY}<|im_end|>
<|im_start|>assistant\n"""
```
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Usage
This model is a Chat model, that is, it is finetuned for chat use and works best with the provided template.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/CroissantLLMChat-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
chat = [
{"role": "user", "content": "Que puis-je faire à Marseille en hiver?"},
]
chat_input = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(chat_input, return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_new_tokens=150, do_sample=True, top_p=0.95, top_k=60, temperature=0.3)
print(tokenizer.decode(tokens[0]))
```
## Model limitations
Evaluation results indicate the model is strong in its size category: it offers decent performance on writing-based tasks and internal knowledge, and very strong performance on translation tasks. The small size of the CroissantLLM model, however, hinders its capacity to perform more complex reasoning-based tasks, at least in a zero- or few-shot manner in its generalist base or chat-model versions. This is aligned with other models of this size and underlines the importance of scale for more abstract tasks.
#### Knowledge Cutoff
The model training dataset has a data cutoff date corresponding to the November 2023 Wikipedia dump. This is the de facto knowledge cutoff date for our base model, although a lot of information dates back further. Updated versions can be trained through continued pre-training or subsequent fine-tuning.
#### Multilingual performance.
CroissantLLM is mostly a French and English model. Code performance is relatively limited. Although some data from other languages is included in the SlimPajama training set, out-of-the-box performance in other languages should not be expected, even though some European languages do work quite well.
#### Hallucinations.
CroissantLLM can hallucinate and output factually incorrect data, especially regarding complex topics. This is to be expected given the small model size, and hallucination rates seem lower than those of most models of the same size category, although no quantitative assessments have been conducted outside of MT-Bench experiments.
***
Quantization of Model [croissantllm/CroissantLLMChat-v0.1](https://huggingface.co/croissantllm/CroissantLLMChat-v0.1).
Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline
| [
"TRANSLATION"
] | [
"CRAFT"
] |
RichardErkhov/BSC-LT_-_salamandra-7b-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2403.14009",
"arxiv:2403.20266",
"arxiv:2101.00027",
"arxiv:2207.00220",
"arxiv:1810.06694",
"arxiv:1911.05507",
"arxiv:1906.03741",
"arxiv:2406.17557",
"arxiv:2402.06619",
"arxiv:1803.09010",
"endpoints_compatible",
"region:us"
] | 2024-10-11T14:56:32 | 2024-10-11T17:45:36 | 80 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
salamandra-7b - GGUF
- Model creator: https://huggingface.co/BSC-LT/
- Original model: https://huggingface.co/BSC-LT/salamandra-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [salamandra-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q2_K.gguf) | Q2_K | 3.08GB |
| [salamandra-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.IQ3_XS.gguf) | IQ3_XS | 3.39GB |
| [salamandra-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.IQ3_S.gguf) | IQ3_S | 3.51GB |
| [salamandra-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q3_K_S.gguf) | Q3_K_S | 3.5GB |
| [salamandra-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.IQ3_M.gguf) | IQ3_M | 3.6GB |
| [salamandra-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q3_K.gguf) | Q3_K | 3.77GB |
| [salamandra-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q3_K_M.gguf) | Q3_K_M | 3.77GB |
| [salamandra-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q3_K_L.gguf) | Q3_K_L | 4.0GB |
| [salamandra-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [salamandra-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q4_0.gguf) | Q4_0 | 4.33GB |
| [salamandra-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.IQ4_NL.gguf) | IQ4_NL | 4.36GB |
| [salamandra-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q4_K_S.gguf) | Q4_K_S | 4.35GB |
| [salamandra-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q4_K.gguf) | Q4_K | 4.52GB |
| [salamandra-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q4_K_M.gguf) | Q4_K_M | 4.52GB |
| [salamandra-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q4_1.gguf) | Q4_1 | 4.72GB |
| [salamandra-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q5_0.gguf) | Q5_0 | 5.11GB |
| [salamandra-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q5_K_S.gguf) | Q5_K_S | 5.11GB |
| [salamandra-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q5_K.gguf) | Q5_K | 5.21GB |
| [salamandra-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q5_K_M.gguf) | Q5_K_M | 5.21GB |
| [salamandra-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q5_1.gguf) | Q5_1 | 5.5GB |
| [salamandra-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q6_K.gguf) | Q6_K | 5.94GB |
| [salamandra-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/BSC-LT_-_salamandra-7b-gguf/blob/main/salamandra-7b.Q8_0.gguf) | Q8_0 | 7.69GB |
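As a rough, unofficial sketch of how one of these files could be run locally (assuming `llama-cpp-python` is installed and the chosen GGUF has already been downloaded; the file name and sampling settings are illustrative):

```python
from llama_cpp import Llama

# Path to a downloaded quant from the table above (illustrative file name).
llm = Llama(model_path="salamandra-7b.Q4_K_M.gguf", n_ctx=4096)

output = llm(
    "El mercat del barri és",
    max_tokens=25,
    temperature=0.1,
    repeat_penalty=1.2,
)
print(output["choices"][0]["text"])
```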
Original model description:
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
language:
- bg
- ca
- code
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nn
- \no
- oc
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- uk
---

# Salamandra Model Card
Salamandra is a highly multilingual model pre-trained from scratch that comes in three different
sizes — 2B, 7B and 40B parameters — with their respective base and instruction-tuned variants.
This model card corresponds to the 7B instructed version.
To visit the model cards of other Salamandra versions, please refer to the [Model Index](#model-index).
The entire Salamandra family is released under a permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
Along with the open weights, all training scripts and configuration files are made publicly available in [this GitHub repository](https://github.com/langtech-bsc/salamandra).
---
## Model Details
### Description
Transformer-based decoder-only language model that has been pre-trained from scratch on 7.8 trillion tokens of highly curated data.
The pre-training corpus contains text in 35 European languages and code.
### Hyperparameters
The full list of hyperparameters for each model can be found [here](https://github.com/langtech-bsc/salamandra/tree/main/configs).
### Architecture
| | |
|-------------------------|:--------------|
| Total Parameters | 7,768,117,248 |
| Embedding Parameters | 1,048,576,000 |
| Layers | 32 |
| Hidden size | 4,096 |
| Attention heads | 32 |
| Context length | 8,192 |
| Vocabulary size | 256,000 |
| Precision | bfloat16 |
| Embedding type | RoPE |
| Activation Function | SwiGLU |
| Layer normalization | RMS Norm |
| Flash attention | ✅ |
| Grouped Query Attention | ✅ |
| Num. query groups | 8 |
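As a quick cross-check (not part of the original card), most of these values can be read from the published configuration; the attribute names below assume the standard Llama-style config used by `transformers`:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("BSC-LT/salamandra-7b")
print(config.num_hidden_layers)        # layers (expected: 32)
print(config.hidden_size)              # hidden size (expected: 4096)
print(config.num_attention_heads)      # attention heads (expected: 32)
print(config.num_key_value_heads)      # GQA query groups (expected: 8)
print(config.max_position_embeddings)  # context length (expected: 8192)
print(config.vocab_size)               # vocabulary size (expected: 256000)
```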
---
## Intended Use
### Direct Use
The models are intended for both research and commercial use in any of the languages included in the training data.
The base models are intended either for language generation or to be further fine-tuned for specific use-cases.
The instruction-tuned variants can be used as general-purpose assistants, as long as the user is fully aware of the model’s limitations.
### Out-of-scope Use
The model is not intended for malicious activities, such as harming others or violating human rights.
Any downstream application must comply with current laws and regulations.
Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.
---
## Hardware and Software
### Training Framework
Pre-training was conducted using NVIDIA’s [NeMo Framework](https://docs.nvidia.com/nemo-framework/index.html),
which leverages PyTorch Lightning for efficient model training in highly distributed settings.
The instruction-tuned versions were produced with [FastChat](https://github.com/lm-sys/FastChat).
### Compute Infrastructure
All models were trained on [MareNostrum 5](https://www.bsc.es/ca/marenostrum/marenostrum-5), a pre-exascale EuroHPC supercomputer hosted and
operated by Barcelona Supercomputing Center.
The accelerated partition is composed of 1,120 nodes with the following specifications:
- 4x Nvidia Hopper GPUs with 64GB of HBM2 memory
- 2x Intel Sapphire Rapids 8460Y+ at 2.3Ghz and 32c each (64 cores)
- 4x NDR200 (BW per node 800Gb/s)
- 512 GB of Main memory (DDR5)
- 460GB on NVMe storage
|Model|Nodes|GPUs|
|:---:|:---:|:---:|
|2B|64|256|
|7B|128|512|
|40B|256 / 512|1,024 / 2,048|
---
## How to use
This section offers examples of how to perform inference using various methods.
### Inference
You'll find different techniques for running inference, including Huggingface's Text Generation Pipeline, multi-GPU configurations, and vLLM for scalable and efficient generation.
#### Inference with Huggingface's Text Generation Pipeline
The Huggingface Text Generation Pipeline provides a straightforward way to run inference using the Salamandra-7b model.
```bash
pip install transformers torch accelerate sentencepiece protobuf
```
<details>
<summary>Show code</summary>
```python
from transformers import pipeline, set_seed
model_id = "BSC-LT/salamandra-7b"
# Sample prompts
prompts = [
"Las fiestas de San Isidro Labrador de Yecla son",
"El punt més alt del Parc Natural del Montseny és",
"Sentence in English: The typical chance of such a storm is around 10%. Sentence in Catalan:",
"Si le monde était clair",
"The future of AI is",
]
# Create the pipeline
generator = pipeline("text-generation", model_id, device_map="auto")
generation_args = {
"temperature": 0.1,
"top_p": 0.95,
"max_new_tokens": 25,
"repetition_penalty": 1.2,
"do_sample": True
}
# Fix the seed
set_seed(1)
# Generate texts
outputs = generator(prompts, **generation_args)
# Print outputs
for output in outputs:
print(output[0]["generated_text"])
```
</details>
#### Inference with single / multi GPU
This section provides a simple example of how to run inference using Huggingface's AutoModel class.
```bash
pip install transformers torch accelerate sentencepiece protobuf
```
<details>
<summary>Show code</summary>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "BSC-LT/salamandra-7b"
# Input text
text = "El mercat del barri és"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the model
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16
)
generation_args = {
"temperature": 0.1,
"top_p": 0.95,
"max_new_tokens": 25,
"repetition_penalty": 1.2,
"do_sample": True
}
inputs = tokenizer(text, return_tensors="pt")
# Generate texts
output = model.generate(input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"], **generation_args)
# Print outputs
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
</details>
#### Inference with vLLM
vLLM is an efficient library for inference that enables faster and more scalable text generation.
```bash
pip install vllm
```
<details>
<summary>Show code</summary>
```python
from vllm import LLM, SamplingParams
model_id = "BSC-LT/salamandra-7b"
# Sample prompts
prompts = [
"Las fiestas de San Isidro Labrador de Yecla son",
"El punt més alt del Parc Natural del Montseny és",
"Sentence in English: The typical chance of such a storm is around 10%. Sentence in Catalan:",
"Si le monde était clair",
"The future of AI is",
]
# Create a sampling params object
sampling_params = SamplingParams(
temperature=0.1,
top_p=0.95,
seed=1,
max_tokens=25,
repetition_penalty=1.2)
# Create an LLM
llm = LLM(model=model_id)
# Generate texts
outputs = llm.generate(prompts, sampling_params)
# Print outputs
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
</details>
---
## Data
### Pretraining Data
The training corpus consists of 2.4 trillion tokens, including 35 European languages and 92 programming languages. It amounts to a total of 33TB of pre-processed text.
Languages were sampled manually: Spain's co-official languages (Spanish, Catalan, Galician and Basque) were oversampled by a factor of two, code was undersampled by half,
and the rest of the languages were kept in their natural proportions, resulting in the distribution described below.
This highly multilingual corpus is predominantly composed of data from Colossal OSCAR,
which contributes a significant 66.06% of the total tokens.
Following this, Starcoder provides 11.91%, and Spanish Crawling adds 3.34%.
The next largest sources are French FR at 3.12% and Proof Pile at 1.98%.
Other notable contributions include Macocu, Pile of Law, and Eurlex, each contributing around 1.5% to 1.3%.
These major sources collectively form the bulk of the corpus, ensuring a rich and diverse dataset for training the language model.
The remaining 10% comes from smaller sources in various languages.
Feel free to click the expand button below to see the full list of sources.
<details>
<summary>Data Sources</summary>
| Dataset | Language | Source |
|-----------------------------------------------|---------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| Parlamint corpus | at, bg, cz, dk, ee, es, es-ga, fi, fr, gb, gr, hr, hu, it, lv, nl, no, pl, pt, rs, se, si | Erjavec et al., 2021 |
| Bulgarian National Corpus | bg | [Link](http://old.dcl.bas.bg/dataset/BulNC.7z) |
| Crawl of Bulgarian news websites | bg | [Link](http://old.dcl.bas.bg/dataset/Bulgarian_news.7z) |
| Colossal OSCAR 1.0 | bg, ca, cs, cy, da, de, el, en, es, et, eu, fi, fr, ga, gl, hr, hu, it, lt, lv, mt, nl, nn, no, oc, pl, pt, ro, ru, sh, sk, sl, sr, sv, uk | Brack et al., 2024 |
| Wikimedia dumps | bg, ca, cs, da, de, el, en, es, et, eu, fi, fr, ga, gl, hr, hu, it, lt, lv, mt, nl, nn, no, pl, pt, ro, sh, sk, sl, sr, uk | [Link](https://dumps.wikimedia.org/) |
| OpenSubtitlesv2016 | bg, ca, cs, da, de, el, en, es, et, eu, fi, fr, gl, hr, it, lt, lv, nl, no, pl, pt, ro, sk, sl, sr, sv, uk | Lison & Tiedemann, 2016 |
| MaCoCu web corpus | bg, ca, el, hr, mt, sl, sr, uk | Bañón et al., 2022 |
| EurLEX-Resources | bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv | [Link](https://huggingface.co/datasets/joelniklaus/eurlex_resources) |
| MC4-Legal | bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv | [Link](https://huggingface.co/datasets/joelito/legal-mc4) |
| CURLICAT Corpus | bg, hr, hu, pl, ro, sk, sl | Váradi et al., 2022 |
| CATalog | ca | Palomar-Giner et al., 2024 |
| Spanish Crawling | ca, es, eu, gl | Relevant Spanish websites crawling |
| Starcoder | code | Li et al., 2023 |
| SYN v9: large corpus of written Czech | cs | Křen et al., 2021 |
| Welsh-GOV | cy | Crawling from [Link](https://www.llyw.cymru) |
| DaNewsroom | da | Varab & Schluter, 2020 |
| Danish GigaWord | da | Strømberg-Derczynski et al., 2021 |
| DK-CLARIN Reference Corpus of General Danish | da | [Link](https://korpus.dsl.dk/clarin/) |
| The Danish Parliament Corpus 2009 - 2017, v1 | da | Hansen, 2018 |
| DeWaC | de | [Link](https://docs.sslmit.unibo.it/doku.php?id=corpora:dewac) |
| Open Legal Data - German court decisions and laws | de | Ostendorff et al., 2020 |
| Greek Legal Code | el | Papaloukas et al., 2021 |
| Greek Web Corpus | el | Outsios et al., 2018 |
| Auxiliary Mathematics Problems and Solutions (AMPS) dataset | en | Hendrycks et al., 2021 |
| BIGPATENT | en | Sharma et al., 2019 |
| FineWeb-Edu (350BT subset) | en | Penedo et al., 2024 |
| peS2o | en | Soldaini & Lo, 2023 |
| PG-19 | en | Rae et al., 2019 |
| Pile of Law (selected subsets) | en | Henderson* et al., 2022 |
| proof-pile | en | [Link](https://huggingface.co/datasets/hoskinson-center/proof-pile) |
| RedPajama-Data T1 (StackExchange subset) | en | Computer, 2023 |
| The Pile (PhilPapers subset) | en | Gao et al., 2021 |
| Biomedical | es | Internally generated scientific dataset: Dialnet, Scielo, CSIC, TDX, BSC, UCM |
| HPLTDatasets v1 - Spanish | es | de Gibert et al., 2024 |
| Legal | es | Internally generated legal dataset: BOE, BORME, Senado, Congreso, Spanish court orders, DOGC |
| Scientific | es | Internally generated scientific dataset: Wikipedia LS, Pubmed, MeSpEn, patents, clinical cases, medical crawler |
| Spanish Legal Domain Corpora | es | Gutiérrez-Fandiño et al., 2021 |
| Estonian National Corpus 2021 | et | Koppel & Kallas, 2022 |
| Estonian Reference Corpus | et | [Link](https://www.cl.ut.ee/korpused/segakorpus/) |
| EusCrawl (w/o Wikipedia or NC-licenses) | eu | Artetxe et al., 2022 |
| Latxa Corpus v1.1 | eu | Etxaniz et al., 2024 [Link](https://huggingface.co/datasets/HiTZ/latxa-corpus-v1.1) |
| Aya Dataset (w/o Evaluation Suite) | eu, hr, nl, fi, ka, hu, lt, nn, ro, sk, lv, cy, bg, cs, en, fr, de, ga, mt, pl, ru, sl, sv, ca, da, et, gl, el, it, no, pt, sr, es, uk | Singh et al., 2024 |
| Yle Finnish News Archive | fi | [Link](http://urn.fi/urn:nbn:fi:lb-2021050401) |
| CaBeRnet: a New French Balanced Reference Corpus | fr | Popa-Fabre et al., 2020 |
| French Public Domain Books | fr | [Link](https://huggingface.co/datasets/PleIAs/French-PD-Books) |
| French Public Domain Newspapers | fr | [Link](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers) |
| Irish Universal Dependencies | ga | [Link](https://universaldependencies.org/ga/index.html) |
| The Gaois bilingual corpus of English-Irish legislation (Irish legislation) | ga | [Link](https://portulanclarin.net/repository/browse/the-gaois-bilingual-corpus-of-english-irish-legislation-processed/daeac17c9e3511ea9b7f02420a000407b83de243dc0b469aab41084386c5b80f/) |
| CorpusNÓS | gl | de-Dios-Flores et al., 2024 |
| Croatian web corpus hrWaC 2.1 | hr | Ljubešić & Klubička, 2014 |
| ITWaC | it | [Link](https://docs.sslmit.unibo.it/doku.php?id=corpora:itwac) |
| Corpus of State-related content from the Latvian Web (Processed) | lv | [Link](https://catalog.elra.info/en-us/repository/browse/ELRA-W0169/) |
| Korpus Malti | mt | Micallef et al., 2022 |
| SoNaR Corpus NC 1.2 | nl | [Link](https://taalmaterialen.ivdnt.org/download/tstc-sonar-corpus/) |
| Norwegian Colossal Corpus | nn, no | Kummervold et al., 2021 |
| Occitan Corpus | oc | Provided by [IEA](https://www.institutestudisaranesi.cat/) |
| NKJP-PodkorpusMilionowy-1.2 (National Corpus of Polish) | pl | Lewandowska-Tomaszczyk et al., 2013 |
| Polish Parliamentary Corpus / Korpus Dyskursu Parlamentarnego | pl | Ogrodniczuk, 2018 |
| Brazilian Portuguese Web as Corpus | pt | Wagner Filho et al., 2018 |
| ParlamentoPT | pt | Rodrigues et al., 2023 |
| MARCELL Romanian legislative subcorpus v2 | ro | [Link](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/) |
| Korpus slovenských právnych predpisov v1.9 | sk | [Link](https://www.juls.savba.sk/data/marcell/legal-sk-20220322-1.9.ver.xz) |
| od-justice 2.0 | sk | [Link](https://www.juls.savba.sk/data/od-justice/od-justice-2.0.ver.xz) |
| Corpus of academic Slovene KAS 2.0 | sl | Žagar et al., 2022 |
| slWaC web corpus | sl | Erjavec et al., 2015 |
| SrpKorSubset (news, legal, academic, conversation, literary) | sr | [Link](http://www.korpus.matf.bg.ac.rs/) |
| The Swedish Culturomics Gigaword Corpus | sv | Rødven-Eide, 2016 |
| Corpus of laws and legal acts of Ukraine | uk | [Link](https://lang.org.ua/en/corpora/#anchor7) |
<details>
<summary>References</summary>
- Abadji, J., Suárez, P. J. O., Romary, L., & Sagot, B. (2021). Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus (H. Lüngen, M. Kupietz, P. Bański, A. Barbaresi, S. Clematide, & I. Pisetta, Eds.; pp. 1–9). Leibniz-Institut für Deutsche Sprache. [Link](https://doi.org/10.14618/ids-pub-10468)
- Artetxe, M., Aldabe, I., Agerri, R., Perez-de-Viñaspre, O., & Soroa, A. (2022). Does Corpus Quality Really Matter for Low-Resource Languages?
- Bañón, M., Esplà-Gomis, M., Forcada, M. L., García-Romero, C., Kuzman, T., Ljubešić, N., van Noord, R., Sempere, L. P., Ramírez-Sánchez, G., Rupnik, P., Suchomel, V., Toral, A., van der Werff, T., & Zaragoza, J. (2022). MaCoCu: Massive collection and curation of monolingual and bilingual data: Focus on under-resourced languages. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, 303–304. [Link](https://aclanthology.org/2022.eamt-1.41)
- Brack, M., Ostendorff, M., Suarez, P. O., Saiz, J. J., Castilla, I. L., Palomar-Giner, J., Shvets, A., Schramowski, P., Rehm, G., Villegas, M., & Kersting, K. (2024). Community OSCAR: A Community Effort for Multilingual Web Data. [Link](https://occiglot.eu/papers/Community_Oscar.pdf)
- Computer, T. (2023). RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset [Computer software]. [Link](https://github.com/togethercomputer/RedPajama-Data)
- de Gibert, O., Nail, G., Arefyev, N., Bañón, M., van der Linde, J., Ji, S., Zaragoza-Bernabeu, J., Aulamo, M., Ramírez-Sánchez, G., Kutuzov, A., Pyysalo, S., Oepen, S., & Tiedemann, J. (2024). A New Massive Multilingual Dataset for High-Performance Language Technologies (arXiv:2403.14009). arXiv. [Link](http://arxiv.org/abs/2403.14009)
- Dodge, J., Sap, M., Marasović, A., Agnew, W., Ilharco, G., Groeneveld, D., Mitchell, M., & Gardner, M. (2021). Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus. In M.-F. Moens, X. Huang, L. Specia, & S. W. Yih (Eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 1286–1305). Association for Computational Linguistics. [Link](https://doi.org/10.18653/v1/2021.emnlp-main.98)
- Erjavec, T., Ljubešić, N., & Logar, N. (2015). The slWaC corpus of the Slovene web. Informatica (Slovenia), 39, 35–42.
- Erjavec, T., Ogrodniczuk, M., Osenova, P., Ljubešić, N., Simov, K., Grigorova, V., Rudolf, M., Pančur, A., Kopp, M., Barkarson, S., Steingrímsson, S. hór, van der Pol, H., Depoorter, G., de Does, J., Jongejan, B., Haltrup Hansen, D., Navarretta, C., Calzada Pérez, M., de Macedo, L. D., … Rayson, P. (2021). Linguistically annotated multilingual comparable corpora of parliamentary debates ParlaMint.ana 2.1. [Link](http://hdl.handle.net/11356/1431)
- Etxaniz, J., Sainz, O., Perez, N., Aldabe, I., Rigau, G., Agirre, E., Ormazabal, A., Artetxe, M., & Soroa, A. (2024). Latxa: An Open Language Model and Evaluation Suite for Basque. [Link](https://arxiv.org/abs/2403.20266)
- Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., Presser, S., & Leahy, C. (2021). The Pile: An 800GB Dataset of Diverse Text for Language Modeling. CoRR, abs/2101.00027. [Link](https://arxiv.org/abs/2101.00027)
- Gutiérrez-Fandiño, A., Armengol-Estapé, J., Gonzalez-Agirre, A., & Villegas, M. (2021). Spanish Legalese Language Model and Corpora.
- Hansen, D. H. (2018). The Danish Parliament Corpus 2009—2017, v1. [Link](http://hdl.handle.net/20.500.12115/8)
- Henderson*, P., Krass*, M. S., Zheng, L., Guha, N., Manning, C. D., Jurafsky, D., & Ho, D. E. (2022). Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset. arXiv. [Link](https://arxiv.org/abs/2207.00220)
- Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., & Steinhardt, J. (2021). Measuring Mathematical Problem Solving With the MATH Dataset. NeurIPS.
- Jansen, T., Tong, Y., Zevallos, V., & Suarez, P. O. (2022). Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data.
- Koppel, K., & Kallas, J. (2022). Eesti keele ühendkorpuste sari 2013–2021: Mahukaim eestikeelsete digitekstide kogu. Eesti Rakenduslingvistika Ühingu Aastaraamat Estonian Papers in Applied Linguistics, 18, 207–228. [Link](https://doi.org/10.5128/erya18.12)
- Křen, M., Cvrček, V., Henyš, J., Hnátková, M., Jelínek, T., Kocek, J., Kováříková, D., Křivan, J., Milička, J., Petkevič, V., Procházka, P., Skoumalová, H., Šindlerová, J., & Škrabal, M. (2021). SYN v9: Large corpus of written Czech. [Link](http://hdl.handle.net/11234/1-4635)
- Kreutzer, J., Caswell, I., Wang, L., Wahab, A., van Esch, D., Ulzii-Orshikh, N., Tapo, A., Subramani, N., Sokolov, A., Sikasote, C., Setyawan, M., Sarin, S., Samb, S., Sagot, B., Rivera, C., Rios, A., Papadimitriou, I., Osei, S., Suarez, P. O., … Adeyemi, M. (2022). Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics, 10, 50–72. [Link](https://doi.org/10.1162/tacl_a_00447)
- Kummervold, P. E., De la Rosa, J., Wetjen, F., & Brygfjeld, S. A. (2021). Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model. In S. Dobnik & L. Øvrelid (Eds.), Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa) (pp. 20–29). Linköping University Electronic Press, Sweden. [Link](https://aclanthology.org/2021.nodalida-main.3)
- Lewandowska-Tomaszczyk, B., Górski, R., Łaziński, M., & Przepiórkowski, A. (2013). The National Corpus of Polish (NKJP). Language use and data analysis. 309–319.
- Li, R., Allal, L. B., Zi, Y., Muennighoff, N., Kocetkov, D., Mou, C., Marone, M., Akiki, C., Li, J., Chim, J., Liu, Q., Zheltonozhskii, E., Zhuo, T. Y., Wang, T., Dehaene, O., Davaadorj, M., Lamy-Poirier, J., Monteiro, J., Shliazhko, O., … Vries, H. de. (2023). StarCoder: May the source be with you!
- Lison, P., & Tiedemann, J. (2016). OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16) (pp. 923–929). European Language Resources Association (ELRA). [Link](https://aclanthology.org/L16-1147)
- Ljubešić, N., & Klubička, F. (2014). Bs,hr,srWaC - Web Corpora of Bosnian, Croatian and Serbian. In F. Bildhauer & R. Schäfer (Eds.), Proceedings of the 9th Web as Corpus Workshop (WaC-9) (pp. 29–35). Association for Computational Linguistics. [Link](https://doi.org/10.3115/v1/W14-0405)
- Micallef, K., Gatt, A., Tanti, M., van der Plas, L., & Borg, C. (2022). Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese. Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, 90–101. [Link](https://doi.org/10.18653/v1/2022.deeplo-1.10)
- Ogrodniczuk, M. (2018). Polish Parliamentary Corpus. [Link](https://api.semanticscholar.org/CorpusID:235134113)
- Ostendorff, M., Blume, T., & Ostendorff, S. (2020). Towards an Open Platform for Legal Information. Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020, 385–388. [Link](https://doi.org/10.1145/3383583.3398616)
- Ostendorff, M., Suarez, P. O., Lage, L. F., & Rehm, G. (2024). LLM-Datasets: An Open Framework for Pretraining Datasets of Large Language Models. First Conference on Language Modeling. [Link](https://openreview.net/forum?id=5RdIMlGLXL)
- Outsios, S., Skianis, K., Meladianos, P., Xypolopoulos, C., & Vazirgiannis, M. (2018). Word Embeddings from Large-Scale Greek Web content. arXiv Preprint arXiv:1810.06694.
- Palomar-Giner, J., Saiz, J. J., Espuña, F., Mina, M., Da Dalt, S., Llop, J., Ostendorff, M., Ortiz Suarez, P., Rehm, G., Gonzalez-Agirre, A., & Villegas, M. (2024). A CURATEd CATalog: Rethinking the Extraction of Pretraining Corpora for Mid-Resourced Languages. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 335–349). ELRA and ICCL. [Link](https://aclanthology.org/2024.lrec-main.31)
- Papaloukas, C., Chalkidis, I., Athinaios, K., Pantazi, D.-A., & Koubarakis, M. (2021). Multi-granular Legal Topic Classification on Greek Legislation. Proceedings of the Natural Legal Language Processing Workshop 2021, 63–75. [Link](https://doi.org/10.48550/arXiv.2109.15298)
- Popa-Fabre, M., Ortiz Suárez, P. J., Sagot, B., & de la Clergerie, É. (2020). French Contextualized Word-Embeddings with a sip of CaBeRnet: A New French Balanced Reference Corpus. Proceedings of the 8th Workshop on Challenges in the Management of Large Corpora, 15–23. [Link](https://aclanthology.org/2020.cmlc-1.3)
- Rae, J. W., Potapenko, A., Jayakumar, S. M., Hillier, C., & Lillicrap, T. P. (2019). Compressive Transformers for Long-Range Sequence Modelling. arXiv Preprint. [Link](https://arxiv.org/abs/1911.05507)
- Rodrigues, J., Gomes, L., Silva, J., Branco, A., Santos, R., Cardoso, H. L., & Osório, T. (2023). Advancing Neural Encoding of Portuguese with Transformer Albertina PT-\*.
- Rødven-Eide, S. (2016). The Swedish Culturomics Gigaword Corpus [Dataset]. Språkbanken Text. [Link](https://doi.org/10.23695/3WMV-1Z09)
- Sharma, E., Li, C., & Wang, L. (2019). BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization. CoRR, abs/1906.03741. [Link](http://arxiv.org/abs/1906.03741)
- Soldaini, L., & Lo, K. (2023). peS2o (Pretraining Efficiently on S2ORC) Dataset. Allen Institute for AI.
- Strømberg-Derczynski, L., Ciosici, M., Baglini, R., Christiansen, M. H., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Madsen, J., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2021). The Danish Gigaword Corpus. Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), 413–421. [Link](https://aclanthology.org/2021.nodalida-main.46)
- Subramani, N., Luccioni, S., Dodge, J., & Mitchell, M. (2023). Detecting Personal Information in Training Corpora: An Analysis. 208–220. [Link](https://doi.org/10.18653/v1/2023.trustnlp-1.18)
- Varab, D., & Schluter, N. (2020). DaNewsroom: A Large-scale Danish Summarisation Dataset. Proceedings of The 12th Language Resources and Evaluation Conference, 6731–6739. [Link](https://www.aclweb.org/anthology/2020.lrec-1.831)
- Váradi, T., Nyéki, B., Koeva, S., Tadić, M., Štefanec, V., Ogrodniczuk, M., Nitoń, B., Pezik, P., Barbu Mititelu, V., Irimia, E., Mitrofan, M., Tufiș, D., Garabík, R., Krek, S., & Repar, A. (2022). Introducing the CURLICAT Corpora: Seven-language Domain Specific Annotated Corpora from Curated Sources. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Thirteenth Language Resources and Evaluation Conference (pp. 100–108). European Language Resources Association. [Link](https://aclanthology.org/2022.lrec-1.11)
- Wagner Filho, J. A., Wilkens, R., Idiart, M., & Villavicencio, A. (2018). The brwac corpus: A new open resource for brazilian portuguese. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
- Žagar, A., Kavaš, M., Robnik-Šikonja, M., Erjavec, T., Fišer, D., Ljubešić, N., Ferme, M., Borovič, M., Boškovič, B., Ojsteršek, M., & Hrovat, G. (2022). Corpus of academic Slovene KAS 2.0. [Link](http://hdl.handle.net/11356/1448)
- Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2086–2105, Dublin, Ireland. Association for Computational Linguistics.
- Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The Woman Worked as a Babysitter: On Biases in Language Generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407–3412, Hong Kong, China. Association for Computational Linguistics.
- Clark, P., Cowhey, I., Etzioni, O., Khot, T., Sabharwal, A., Schoenick, C., & Tafjord, O. (2018). Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge. arXiv:1803.05457v1.
- Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
- Penedo, G., Kydlíček, H., allal, L. B., Lozhkov, A., Mitchell, M., Raffel, C., Von Werra, L., & Wolf, T. (2024). The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale (arXiv:2406.17557). arXiv. http://arxiv.org/abs/2406.17557
- Singh, S., Vargus, F., Dsouza, D., Karlsson, B. F., Mahendiran, A., Ko, W.-Y., Shandilya, H., Patel, J., Mataciunas, D., OMahony, L., Zhang, M., Hettiarachchi, R., Wilson, J., Machado, M., Moura, L. S., Krzemiński, D., Fadaei, H., Ergün, I., Okoh, I., … Hooker, S. (2024). Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning (arXiv:2402.06619). arXiv. http://arxiv.org/abs/2402.06619
</details>
</details>
The model was trained for 3 epochs, with two final rounds of 0.3B higher-quality tokens each,
meaning that the total number of tokens seen during pre-training amounts to roughly 7.8 trillion.
We provide an extensive Datasheet section following the best practices defined by [(Gebru et al., 2021)](https://arxiv.org/pdf/1803.09010).
<details>
<summary>Datasheet</summary>
#### Motivation
**For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.**
The purpose of creating this dataset is to pre-train the Salamandra family of multilingual models with high performance in a large number of
European languages (35) and code (including 92 different programming languages). In addition, we aim to represent especially the co-official
languages of Spain: Spanish, Catalan, Galician, and Basque. This is the reason why we carry out an oversampling of these languages.
We detected that there is a great lack of massive multilingual data, especially in minority languages (Ostendorff & Rehm, 2023), so part of
our efforts in the creation of this pre-training dataset have resulted in the contribution to large projects such as the Community OSCAR
(Brack et al., 2024), which includes 151 languages and 40T words, or CATalog (Palomar-Giner et al., 2024), the largest open dataset in
Catalan in the world.
**Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?**
The dataset has been created by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center - Centro Nacional de
Supercomputación (BSC-CNS), which aims to advance the field of natural language processing through cutting-edge research and development
and the use of HPC. In particular, it was created by the unit's data team, the main contributors being Javier Saiz, Ferran Espuña, and
Jorge Palomar.
However, the creation of the dataset would not have been possible without the collaboration of a large number of collaborators, partners,
and public institutions, which can be found in detail in the acknowledgements.
**Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.**
This work/research has been promoted and financed by the Government of Catalonia through the [Aina project](https://projecteaina.cat/).
#### Composition
**What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.**
The dataset consists entirely of text documents in various languages. Specifically, data was mainly sourced from the following databases and
repositories:
- **Common Crawl:** Repository that holds website data and is run by the Common Crawl non-profit organization. It is updated monthly and is
distributed under the CC0 1.0 public domain license.
- **GitHub:** Community platform that allows developers to create, store, manage, and share their code. Repositories are crawled and then
distributed with their original licenses, which may vary from permissive to non-commercial licenses.
- **Wikimedia:** Database that holds the collection databases managed by the Wikimedia Foundation, including Wikipedia, Wikibooks, Wikinews,
Wikiquote, Wikisource, and Wikivoyage. It is updated monthly and is distributed under Creative Commons Attribution-ShareAlike License 4.0.
- **EurLex:** Repository that holds the collection of legal documents from the European Union, available in all of the EU’s 24 official
languages and run by the Publications Office of the European Union. It is updated daily and is distributed under the Creative Commons
Attribution 4.0 International license.
- **Other repositories:** Specific repositories were crawled under permission for domain-specific corpora, which include academic, legal,
and newspaper repositories.
We provide a complete list of dataset sources at the end of this section.
**How many instances are there in total (of each type, if appropriate)?**
The dataset contains a diverse range of instances across multiple languages, with notable adjustments for certain languages. English
represents the largest portion, accounting for 39.08% of the total data. Spanish was upsampled by a factor of 2, bringing its share to 16.59%,
while Catalan (1.84%), Basque (0.26%), and Galician (0.36%) were also upsampled by 2. On the other hand, code-related data was downsampled
by half, making up 6.42% of the total. Other prominent languages include French (6.59%), Russian (5.39%), German (4.25%), and Hungarian
(3.93%), with several additional languages contributing between 1% and 2%, and smaller portions represented by a variety of others.
**Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).**
The dataset is a sample from multiple sources, with different weights based on the primary language of the content: Spanish, Catalan,
Basque, and Galician content was upsampled by a factor of two, while programming languages were downsampled to half their natural
proportion. Other sources were sampled in proportion to their occurrence.
**What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description.**
Each instance consists of a text document processed for deduplication, language identification, and source-specific filtering. Some
documents required optical character recognition (OCR) to extract text from non-text formats such as PDFs.
**Is there a label or target associated with each instance? If so, please provide a description.**
Each instance is labeled with a unique identifier, the primary language of the content, and the URL for web-sourced instances. Additional
labels were automatically assigned to detect specific types of content (harmful or toxic content) and to assign preliminary indicators of
undesired qualities (very short documents, high density of symbols, etc.), which were used to filter instances.
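For illustration only, a web-sourced instance with its automatically assigned labels might look like the sketch below; the field names and values are hypothetical and do not reflect the exact internal schema.

```python
# Hypothetical example of a web-sourced instance with automatic labels.
# Field names and values are illustrative; the actual internal schema may differ.
instance = {
    "id": "commoncrawl-2024-abc123",        # unique identifier (hypothetical format)
    "language": "es",                        # primary language of the document
    "url": "https://example.org/article",    # only present for web-sourced instances
    "text": "Texto del documento ...",
    "labels": {
        "harmful_content": False,            # automatically detected harmful/toxic content
        "num_lines": 42,                     # preliminary quality indicators
        "symbol_density": 0.03,
    },
}
```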
**Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.**
No significant information is missing from the instances.
**Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit.**
Instances are related through shared metadata, such as source and language identifiers.
**Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.**
The dataset is split randomly into training, validation, and test sets.
**Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.**
Despite removing duplicated instances within each source, redundancy remains at the paragraph and sentence levels, particularly in
web-sourced instances where SEO techniques and templates contribute to repeated textual patterns. Some instances may also be duplicated
across sources due to format variations.
**Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.**
The dataset is self-contained and does not rely on external resources.
**Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor–patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description.**
The dataset does not contain confidential data.
**Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. If the dataset does not relate to people, you may skip the remaining questions in this section.**
The dataset includes web-crawled content, which may overrepresent pornographic material across languages (Kreutzer et al., 2022). Although
pre-processing techniques were applied to mitigate offensive content, the heterogeneity and scale of web-sourced data make exhaustive
filtering challenging: it is next to impossible to identify all adult content without resorting to excessive filtering, which may in turn
negatively affect certain demographic groups (Dodge et al., 2021).
**Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.**
The dataset does not explicitly identify any subpopulations.
**Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how.**
Web-sourced instances in the dataset may contain personally identifiable information (PII) that is publicly available on the Web, such as
names, IP addresses, email addresses, and phone numbers. While it would be possible to indirectly identify individuals by combining
multiple data points, the nature and scale of web data make it difficult to parse such information. In any case, efforts are
made to filter or anonymize sensitive data during pre-processing, but some identifiable information may remain in the dataset.
**Does the dataset contain data that might be considered sensitive in any way? If so, please provide a description.**
Given that the dataset includes web-sourced content and other publicly available documents, instances may inadvertently reveal financial
information, health-related details, or forms of government identification, such as social security numbers (Subramani et al., 2023),
especially if the content originates from less-regulated sources or user-generated platforms.
#### Collection Process
**How was the data collected?**
This dataset was compiled by combining several sources, whose acquisition methods can be classified into three groups:
- Web-sourced datasets with some preprocessing available under a permissive license (e.g., Common Crawl).
- Domain-specific or language-specific raw crawls (e.g., Spanish Crawling).
- Manually curated data obtained through collaborators, data providers (by means of legal assignment agreements) or open source projects
 (e.g., CATalog).
**What mechanisms or procedures were used to collect the data? How were these mechanisms or procedures validated?**
According to the three groups previously defined, these are the mechanisms used in each of them:
- Open direct download. Validation: data integrity tests.
- Ad-hoc scrapers or crawlers. Validation: software unit and data integrity tests.
- Direct download via FTP, SFTP, API or S3. Validation: data integrity tests.
**If the dataset is a sample from a larger set, what was the sampling strategy?**
The sampling strategy was to use the whole dataset resulting from the filtering explained in the ‘preprocessing/cleaning/labelling’ section,
with the particularity that an upsampling of 2 (i.e. twice the probability of sampling a document) was performed for the co-official
languages of Spain (Spanish, Catalan, Galician, Basque), and a downsampling of 1/2 was applied for code (half the probability of sampling a
code document, evenly distributed among all programming languages).
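As a rough illustration of this sampling scheme (not the actual pipeline code), per-document sampling weights could be assigned as in the sketch below, with the language codes and weight values taken from the description above.

```python
import random

# Minimal sketch of the described sampling scheme; not the actual pipeline code.
# Co-official languages of Spain are sampled with twice the probability,
# code documents with half, and everything else in natural proportion.
UPSAMPLED_LANGUAGES = {"es", "ca", "gl", "eu"}

def sampling_weight(doc: dict) -> float:
    if doc.get("is_code", False):
        return 0.5
    if doc.get("language") in UPSAMPLED_LANGUAGES:
        return 2.0
    return 1.0

def weighted_sample(documents: list, k: int, seed: int = 42) -> list:
    # Draw k documents with probability proportional to their weight.
    rng = random.Random(seed)
    weights = [sampling_weight(doc) for doc in documents]
    return rng.choices(documents, weights=weights, k=k)
```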
**Who was involved in the data collection process and how were they compensated?**
This data is generally extracted, filtered and sampled by automated processes. The code required to run these processes has been developed
entirely by members of the LangTech data team, or otherwise obtained from open-source software. Furthermore, there has been no monetary
consideration for acquiring data from suppliers.
**Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances? If not, please describe the timeframe in which the data associated with the instances was created.**
Data were acquired and processed from April 2023 to April 2024. However, as mentioned, much of the data was obtained from open projects such
as Common Crawl, which contains data dating back to 2014, so the relevant reference is the end date (04/2024) rather than the start date.
**Were any ethical review processes conducted? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.**
No particular ethical review process has been carried out as the data is mostly open and not particularly sensitive. However, we have an
internal evaluation team and a bias team to monitor ethical issues. In addition, we work closely with ‘Observatori d'Ètica en Intel·ligència
Artificial’ (OEIAC) and ‘Agencia Española de Supervisión de la Inteligencia Artificial’ (AESIA) to audit the processes we carry out from an
ethical and legal point of view, respectively.
#### Preprocessing
**Was any preprocessing/cleaning/labeling of the data done? If so, please provide a description. If not, you may skip the remaining questions in this section.**
Instances of text documents were not altered, but web-sourced documents were filtered based on specific criteria along two dimensions (a minimal sketch of this filtering logic follows the list):
- Quality: documents with a quality score lower than 0.8 were filtered out. The score, obtained with CURATE (Palomar-Giner et al., 2024),
 reflects undesired qualities such as a low number of lines, very short sentences, long footers and headers, and a high percentage of punctuation.
- Harmful or adult content: documents originating from Colossal OSCAR were filtered using LLM-Datasets (Ostendorff et al., 2024) based on
 the perplexity from a language model (‘harmful_pp’ field) provided by the Ungoliant pipeline (Abadji et al., 2021).
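The sketch below illustrates this two-dimensional filtering; the field names (`curate_score`, `harmful_pp`, `source`) as well as the harmful-perplexity threshold and its direction are assumptions for illustration, since the exact values used by CURATE and LLM-Datasets are not reproduced here.

```python
# Minimal sketch of the document filtering described above; field names and
# the harmful-perplexity threshold are illustrative assumptions, not the
# exact values used by CURATE or LLM-Datasets.
QUALITY_THRESHOLD = 0.8          # documents below this CURATE quality score are dropped
HARMFUL_PP_THRESHOLD = 1000.0    # hypothetical cut-off on the 'harmful_pp' field

def keep_document(doc: dict) -> bool:
    # Quality filter: applied to web-sourced documents.
    if doc.get("curate_score", 1.0) < QUALITY_THRESHOLD:
        return False
    # Harmful/adult content filter: applied to Colossal OSCAR documents, using
    # the perplexity-based 'harmful_pp' field from the Ungoliant pipeline
    # (here, low perplexity is assumed to signal adult content).
    if doc.get("source") == "colossal_oscar":
        if doc.get("harmful_pp", float("inf")) < HARMFUL_PP_THRESHOLD:
            return False
    return True
```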
**Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data? If so, please provide a link or other access point to the “raw” data.**
The original raw data was not kept.
**Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point.**
Yes, the preprocessing and filtering software is open-sourced. The [CURATE](https://github.com/langtech-bsc/CURATE) pipeline was used for Spanish Crawling and CATalog,
and the [Ungoliant](https://github.com/oscar-project/ungoliant) pipeline was used for the OSCAR project.
#### Uses
**Has the dataset been used for any tasks already? If so, please provide a description.**
The dataset has been used to pre-train the Salamandra model family.
**What (other) tasks could the dataset be used for?**
The data can be used primarily to pre-train other language models, which can then be used for a wide range of use cases. The dataset could
also be used for other tasks such as fine-tuning language models, cross-lingual NLP tasks, machine translation, domain-specific text
generation, and language-specific data analysis.
**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? Is there anything a dataset consumer could do to mitigate these risks or harms?**
Web-crawled content over-represents standard language varieties, which impacts language model performance for minority languages.
Language diversity in the data is crucial to avoid bias, especially in encoding non-standard dialects, and to prevent the exclusion of
demographic groups. Moreover, despite legal uncertainties around web-scraped data, we prioritize permissive licenses and privacy protection
measures, acknowledging the challenges posed by personally identifiable information (PII) within large-scale datasets. Our ongoing efforts
aim to address privacy concerns and contribute to a more inclusive linguistic dataset.
**Are there tasks for which the dataset should not be used?**
-
#### Distribution
**Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset was created? If so, please provide a description.**
The dataset will not be released or distributed to third parties. Any question related to distribution is omitted in this section.
#### Maintenance
**Who will be supporting/hosting/maintaining the dataset?**
The dataset will be hosted by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center (BSC). The team will ensure
regular updates and monitor the dataset for any issues related to content integrity, legal compliance, and bias for the sources they are
responsible for.
**How can the owner/curator/manager of the dataset be contacted?**
The data owner may be contacted at the email address [email protected].
**Will the dataset be updated?**
The dataset will not be updated.
**If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances? If so, please describe these limits and explain how they will be enforced.**
The dataset does not keep sensitive data that could allow direct identification of individuals, apart from the data that is publicly
available in web-sourced content. Due to the sheer volume and diversity of web data, it is not feasible to notify individuals or manage data
retention on an individual basis. However, efforts are made to mitigate the risks associated with sensitive information through
pre-processing and filtering to remove identifiable or harmful content. Despite these measures, vigilance is maintained to address potential
privacy and ethical issues.
**Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers.**
Since the dataset will not be updated, only the final version will be kept.
**If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?**
The dataset does not allow for external contributions.
</details>
---
## Evaluation
### Gold-standard benchmarks
Evaluation is done using the Language Model Evaluation Harness (Gao et al., 2024). We evaluate on a set of tasks taken from [SpanishBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/spanish_bench), [CatalanBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/catalan_bench), [BasqueBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/basque_bench) and [GalicianBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/galician_bench). We also use English tasks already available on the LM Evaluation Harness. These benchmarks include both new and existing tasks and datasets. In the tables below, we include results on a selection of evaluation datasets that represent the model's performance across a variety of tasks within these benchmarks.
We only use tasks that are either human generated, human translated, or with a strong human-in-the-loop (i.e., machine translation followed by professional revision or machine generation followed by human revision and annotation). This is the reason behind the variety in number of tasks reported across languages. As more tasks that fulfill these requirements are published, we will update the presented results. We also intend to expand the evaluation to other languages, as long as the datasets meet our quality standards.
During the implementation of the evaluation we observed a series of issues worth considering when replicating and interpreting the results presented. These issues include ≈1.5% variances in performance in some tasks depending on the version of the `transformers` library used, and depending on the use (or lack of use) of tensor parallelism when loading a model. When implementing existing tasks, we carry out a comprehensive quality evaluation of the dataset, the Harness task itself, and what kind of input models see during evaluation. Our implementation (see links above) addresses multiple existing problems such as errors in datasets and prompts, and lack of pre-processing. All this means that results will vary if using other Harness implementations, and may slightly vary depending on the replication setup.
It should be noted that these results are subject to all the drawbacks of every current gold-standard evaluation, and that the figures do not fully represent the model's capabilities and potential. We thus advise caution when reading and interpreting the results.
A full list of results compared to other baselines, a discussion of the model's performance across tasks and its implications, and details regarding problem-solving with task implementation will soon be available in the technical report.
All results reported below are on a 5-shot setting.
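For readers who want to replicate a comparable setup, the following is a minimal sketch using the Python API of the LM Evaluation Harness; the task selection, model identifier, and harness version (≥ 0.4) are assumptions for illustration, and figures may differ slightly for the reasons discussed above.

```python
# Minimal sketch of a 5-shot evaluation with the LM Evaluation Harness
# (assumes lm-evaluation-harness >= 0.4 with the SpanishBench tasks available).
# Task names and the model identifier are illustrative, not a full replication.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=BSC-LT/salamandra-7b",
    tasks=["xstorycloze_es", "paws_es", "xnli_es"],
    num_fewshot=5,
)
print(results["results"])  # per-task metrics, e.g. accuracy for xstorycloze_es
```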
#### Spanish
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td>Commonsense Reasoning</td>
<td>xstorycloze_es</td>
<td>acc</td>
<td>74.06</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_es</td>
<td>acc</td>
<td>46.48</td>
</tr>
<tr>
<td>xnli_es</td>
<td>acc</td>
<td>46.47</td>
</tr>
<tr>
<td>Paraphrasing</td>
<td>paws_es</td>
<td>acc</td>
<td>57.65</td>
</tr>
<tr>
<td>QA</td>
<td>xquad_es</td>
<td>acc</td>
<td>71.48</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_es</td>
<td>bleu</td>
<td>23.56</td>
</tr>
</tbody>
</table>
#### Catalan
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>copa_ca</td>
<td>acc</td>
<td>80.8</td>
</tr>
<tr>
<td>xstorycloze_ca</td>
<td>acc</td>
<td>73.73</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_ca</td>
<td>acc</td>
<td>56.34</td>
</tr>
<tr>
<td>xnli_ca</td>
<td>acc</td>
<td>49.4</td>
</tr>
<tr>
<td rowspan="2">Paraphrasing</td>
<td>parafraseja</td>
<td>acc</td>
<td>64.88</td>
</tr>
<tr>
<td>paws_ca</td>
<td>acc</td>
<td>61.5</td>
</tr>
<tr>
<td rowspan="5">QA</td>
<td>arc_ca_easy</td>
<td>acc</td>
<td>69.23</td>
</tr>
<tr>
<td>arc_ca_challenge</td>
<td>acc</td>
<td>44.54</td>
</tr>
<tr>
<td>openbookqa_ca</td>
<td>acc</td>
<td>36.8</td>
</tr>
<tr>
<td>piqa_ca</td>
<td>acc</td>
<td>70.35</td>
</tr>
<tr>
<td>siqa_ca</td>
<td>acc</td>
<td>48.26</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_ca</td>
<td>bleu</td>
<td>30.34</td>
</tr>
</tbody></table>
#### Basque
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>xcopa_eu</td>
<td>acc</td>
<td>68</td>
</tr>
<tr>
<td>xstorycloze_eu</td>
<td>acc</td>
<td>64.79</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_eu</td>
<td>acc</td>
<td>38.03</td>
</tr>
<tr>
<td>xnli_eu</td>
<td>acc</td>
<td>42.85</td>
</tr>
<tr>
<td rowspan="3">QA</td>
<td>eus_exams</td>
<td>acc</td>
<td>38.41</td>
</tr>
<tr>
<td>eus_proficiency</td>
<td>acc</td>
<td>31.13</td>
</tr>
<tr>
<td>eus_trivia</td>
<td>acc</td>
<td>45.36</td>
</tr>
<tr>
<td>Reading Comprehension</td>
<td>eus_reading</td>
<td>acc</td>
<td>33.24</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_eu</td>
<td>bleu</td>
<td>16.29</td>
</tr>
</tbody></table>
#### Galician
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Paraphrasing</td>
<td>parafrases_gl</td>
<td>acc</td>
<td>58.84</td>
</tr>
<tr>
<td>paws_gl</td>
<td>acc</td>
<td>60.85</td>
</tr>
<tr>
<td>QA</td>
<td>openbookqa_gl</td>
<td>acc</td>
<td>34.6</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_gl</td>
<td>bleu</td>
<td>27.98</td>
</tr>
</tbody>
</table>
#### English
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>copa</td>
<td>acc</td>
<td>90</td>
</tr>
<tr>
<td>xstorycloze_en</td>
<td>acc</td>
<td>79.22</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli</td>
<td>acc</td>
<td>52.11</td>
</tr>
<tr>
<td>xnli_en</td>
<td>acc</td>
<td>47.27</td>
</tr>
<tr>
<td>Paraphrasing</td>
<td>paws *</td>
<td>acc</td>
<td>59.6</td>
</tr>
<tr>
<td rowspan="6">QA</td>
<td>arc_easy</td>
<td>acc</td>
<td>81.36</td>
</tr>
<tr>
<td>arc_challenge</td>
<td>acc</td>
<td>50.6</td>
</tr>
<tr>
<td>openbookqa</td>
<td>acc</td>
<td>34.4</td>
</tr>
<tr>
<td>piqa</td>
<td>acc</td>
<td>78.78</td>
</tr>
<tr>
<td>social_iqa</td>
<td>acc</td>
<td>50.15</td>
</tr>
<tr>
<td>squad_en **</td>
<td>acc</td>
<td>78.06</td>
</tr>
</tbody></table>
\* The current LM Evaluation Harness implementation lacks correct pre-processing. These results are obtained with adequate pre-processing.
\*\* This task is not yet available in the official Harness; we hope to add it soon.
## Ethical Considerations and Limitations
We examine the presence of undesired societal and cognitive biases present in this model using different benchmarks. For societal biases,
we test performance using the BBQ dataset (Parrish et al., 2022) in the original English and the Regard dataset (Sheng et al., 2019).
We report that, while performance is high (accuracies between 0.69 and 0.87 depending on the social category) in disambiguated settings,
the model performs very poorly in ambiguous settings, which is indicative of societal biases that need to be addressed in post-training phases.
We additionally analyse model generations with the Regard dataset and classifier in Catalan, Spanish, and English, using backtranslation and manual revision of the
translations. We find no statistically significant difference in regard between majority and minority groups for any regard types,
with the exception of negative regard in Catalan where model generations are actually slightly worse for social majorities.
Our analyses on societal biases show that while these biases are capable of interfering with model performance as expressed in the results on the BBQ dataset,
their tendency for representational harm is limited given the results of the Regard dataset. We highlight that our analyses of these biases are by no means exhaustive
and are limited by the relative scarcity of adequate resources in all languages present in the training data. We aim to gradually extend and expand our analyses
in future work.
Our cognitive bias analysis focuses on positional effects in 0-shot settings, and majority class bias in few-shot settings.
For positional effects, we leverage the ARC Multiple Choice Question dataset (Clark et al., 2018).
We observe moderate to strong primacy effects, whereby the model shows a preference for answers towards the beginning of the list of provided answers.
We measure majority class effects in few-shot settings using SST-2 (Socher et al., 2013). We detect moderate effects,
implying that outputs can be influenced by the prompts.
We highlight that these results can be expected from a pretrained model that has not yet been instruction-tuned or aligned.
These tests are performed in order to show the biases the model may contain.
We urge developers to take them into account and perform safety testing and tuning tailored to their specific applications of the model.
---
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <[email protected]>.
### Copyright
Copyright(c) 2024 by Language Technologies Unit, Barcelona Supercomputing Center.
### Funding
This work has been promoted and financed by the Government of Catalonia through the [Aina Project](https://projecteaina.cat/).
This work is funded by the _Ministerio para la Transformación Digital y de la Función Pública_ - Funded by EU – NextGenerationEU
within the framework of [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337.
### Acknowledgements
This project has benefited from the contributions of numerous teams and institutions, mainly through data contributions, knowledge transfer or technical support.
In Catalonia, many institutions have been involved in the project. Our thanks to Òmnium Cultural, Parlament de Catalunya, Institut d'Estudis Aranesos, Racó Català, Vilaweb, ACN, Nació Digital, El món and Aquí Berguedà.
At national level, we are especially grateful to our ILENIA project partners: CENID, HiTZ and CiTIUS for their participation. We also extend our genuine gratitude to the Spanish Senate and Congress, Fundación Dialnet, Fundación Elcano and the ‘Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)’ of the University of Las Palmas de Gran Canaria.
At the international level, we thank the Welsh government, DFKI, the Occiglot project, especially Malte Ostendorff, and The Common Crawl Foundation, especially Pedro Ortiz, for their collaboration. We would also like to give special thanks to the NVIDIA team, with whom we have met regularly, especially Ignacio Sarasua, Adam Henryk Grzywaczewski, Oleg Sudakov, Sergio Perez, Miguel Martinez, Felipes Soares and Meriem Bendris. Their constant support has been especially appreciated throughout the entire process.
Their valuable efforts have been instrumental in the development of this work.
### Disclaimer
Be aware that the model may contain biases or other unintended distortions.
When third parties deploy systems or provide services based on this model, or use the model themselves,
they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations,
including those governing the use of Artificial Intelligence.
The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use.
### Citation
Technical report and paper coming soon.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Index
|Model|Base|Instruct|
|:---:|:---:|:---:|
|2B| [Link](https://huggingface.co/BSC-LT/salamandra-2b) | [Link](https://huggingface.co/BSC-LT/salamandra-2b-instruct) |
|7B| [Link](https://huggingface.co/BSC-LT/salamandra-7b) | [Link](https://huggingface.co/BSC-LT/salamandra-7b-instruct) |
|40B| WiP | WiP |
| [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION",
"PARAPHRASING"
] | [
"BEAR",
"SCIELO"
] |
Leo1212/longformer-base-4096-sentence-transformers-all-nli-stsb-quora-nq | Leo1212 | sentence-similarity | [
"sentence-transformers",
"safetensors",
"longformer",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:65749",
"loss:MultipleNegativesRankingLoss",
"loss:SoftmaxLoss",
"loss:CoSENTLoss",
"en",
"dataset:sentence-transformers/all-nli",
"dataset:sentence-transformers/stsb",
"dataset:sentence-transformers/quora-duplicates",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-11T15:22:15 | 2024-11-20T16:31:40 | 80 | 0 | ---
base_model: allenai/longformer-base-4096
datasets:
- sentence-transformers/all-nli
- sentence-transformers/stsb
- sentence-transformers/quora-duplicates
- sentence-transformers/natural-questions
language:
- en
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:65749
- loss:MultipleNegativesRankingLoss
- loss:SoftmaxLoss
- loss:CoSENTLoss
widget:
- source_sentence: A construction worker is standing on a crane placing a large arm
on top of a stature in progress.
sentences:
- The man is wearing black.
- A person standing
- Nobody is standing
- source_sentence: A boy in red slides down an inflatable ride.
sentences:
- A man holding a drill stands next to a girl holding a vacuum hose.
- A boy is playing on an inflatable ride.
- A boy pierces a knife through an inflatable ride.
- source_sentence: An animal is chewing on something.
sentences:
- A dog with a red leash still attached chases over the grass toward a tennis ball.
- A man is eating something.
- An animal is chewing on a key chain.
- source_sentence: What are some good books or references to get started with machine
learning?
sentences:
- What caused the British Empire to fall?
- How should I go about learning Machine Learning?
- Can an infinite amount of dark or vacuum or gravitational energy be created with
expansion?
- source_sentence: How do I attract a girl?
sentences:
- How can I attract girls?
- Why isn't my iPhone 5 charging?
- What would the world be like now in 2016 if Hitler's Germany won the war?
---
# SentenceTransformer based on allenai/longformer-base-4096
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the [all-nli-pair](https://huggingface.co/datasets/sentence-transformers/all-nli), [all-nli-pair-class](https://huggingface.co/datasets/sentence-transformers/all-nli), [all-nli-pair-score](https://huggingface.co/datasets/sentence-transformers/all-nli), [all-nli-triplet](https://huggingface.co/datasets/sentence-transformers/all-nli), [stsb](https://huggingface.co/datasets/sentence-transformers/stsb), [quora](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) and [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) <!-- at revision 301e6a42cb0d9976a6d6a26a079fef81c18aa895 -->
- **Maximum Sequence Length:** 4098 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [all-nli-pair](https://huggingface.co/datasets/sentence-transformers/all-nli)
- [all-nli-pair-class](https://huggingface.co/datasets/sentence-transformers/all-nli)
- [all-nli-pair-score](https://huggingface.co/datasets/sentence-transformers/all-nli)
- [all-nli-triplet](https://huggingface.co/datasets/sentence-transformers/all-nli)
- [stsb](https://huggingface.co/datasets/sentence-transformers/stsb)
- [quora](https://huggingface.co/datasets/sentence-transformers/quora-duplicates)
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4098, 'do_lower_case': False}) with Transformer model: LongformerModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Leo1212/longformer-base-4096-sentence-transformers-all-nli-stsb-quora-nq")
# Run inference
sentences = [
'How do I attract a girl?',
'How can I attract girls?',
"Why isn't my iPhone 5 charging?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### all-nli-pair
* Dataset: [all-nli-pair](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 17.06 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.64 tokens</li><li>max: 31 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------|:-------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### all-nli-pair-class
* Dataset: [all-nli-pair-class](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.4 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.69 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>0: ~33.40%</li><li>1: ~33.30%</li><li>2: ~33.30%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:---------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>1</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>2</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>0</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
#### all-nli-pair-score
* Dataset: [all-nli-pair-score](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.4 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.69 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:-----------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>0.5</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>0.0</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### all-nli-triplet
* Dataset: [all-nli-triplet](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 10,000 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.38 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.8 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.02 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.96 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### quora
* Dataset: [quora](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 10,000 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.74 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.91 tokens</li><li>max: 44 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------|
| <code>Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?</code> | <code>I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me?</code> |
| <code>How can I be a good geologist?</code> | <code>What should I do to be a great geologist?</code> |
| <code>How do I read and find my YouTube comments?</code> | <code>How can I see all my Youtube comments?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 10,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 12.43 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 136.19 tokens</li><li>max: 543 tokens</li></ul> |
* Samples:
| query | answer |
|:----------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>when did richmond last play in a preliminary final</code> | <code>Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tigers took over the game as it progressed and scored seven straight goals at one point. They eventually would win by 48 points – 16.12 (108) to Adelaide's 8.12 (60) – to end their 37-year flag drought.[22] Dustin Martin also became the first player to win a Premiership medal, the Brownlow Medal and the Norm Smith Medal in the same season, while Damien Hardwick was named AFL Coaches Association Coach of the Year. Richmond's jump from 13th to premiers also marked the biggest jump from one AFL season to the next.</code> |
| <code>who sang what in the world's come over you</code> | <code>Jack Scott (singer) At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records.[1] He then recorded four Billboard Hot 100 hits – "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38).[1] "What in the World's Come Over You" was Scott's second gold disc winner.[6] Scott continued to record and perform during the 1960s and 1970s.[1] His song "You're Just Gettin' Better" reached the country charts in 1974.[1] In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey, John Peel.</code> |
| <code>who produces the most wool in the world</code> | <code>Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
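For reference, the following is a hedged sketch of how several of the datasets above can be combined with their respective losses using the Sentence Transformers v3 trainer, where dictionaries keyed by dataset name map each training dataset to its loss. The subset sizes and dataset selection below are illustrative and do not reproduce the exact training script.

```python
# Sketch of multi-dataset, multi-loss training with Sentence Transformers v3.
# Dataset keys map to the corresponding loss; subset names and sizes are
# illustrative and do not reproduce the exact training script.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss, CoSENTLoss

model = SentenceTransformer("allenai/longformer-base-4096")

train_datasets = {
    "all-nli-triplet": load_dataset("sentence-transformers/all-nli", "triplet", split="train[:10000]"),
    "stsb": load_dataset("sentence-transformers/stsb", split="train"),
}
losses = {
    "all-nli-triplet": MultipleNegativesRankingLoss(model),  # anchor/positive/negative triplets
    "stsb": CoSENTLoss(model),                               # sentence pairs with similarity scores
}

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_datasets,
    loss=losses,
)
trainer.train()
```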
### Evaluation Datasets
#### all-nli-triplet
* Dataset: [all-nli-triplet](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.02 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.81 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.37 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.0 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.99 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------|:------------------------------------------------------|:------------------|
| <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
| <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
| <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### quora
* Dataset: [quora](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 1,000 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.86 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.9 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| <code>What is your New Year resolution?</code> | <code>What can be my new year resolution for 2017?</code> |
| <code>Should I buy the IPhone 6s or Samsung Galaxy s7?</code> | <code>Which is better: the iPhone 6S Plus or the Samsung Galaxy S7 Edge?</code> |
| <code>What are the differences between transgression and regression?</code> | <code>What is the difference between transgression and regression?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 12.47 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 139.05 tokens</li><li>max: 572 tokens</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where does the waikato river begin and end</code> | <code>Waikato River The Waikato River is the longest river in New Zealand, running for 425 kilometres (264Â mi) through the North Island. It rises in the eastern slopes of Mount Ruapehu, joining the Tongariro River system and flowing through Lake Taupo, New Zealand's largest lake. It then drains Taupo at the lake's northeastern edge, creates the Huka Falls, and flows northwest through the Waikato Plains. It empties into the Tasman Sea south of Auckland, at Port Waikato. It gives its name to the Waikato Region that surrounds the Waikato Plains. The present course of the river was largely formed about 17,000 years ago. Contributing factors were climate warming, forest being reestablished in the river headwaters and the deepening, rather than widening, of the existing river channel. The channel was gradually eroded as far up river as Piarere, leaving the old Hinuera channel high and dry.[2] The remains of the old river path can be clearly seen at Hinuera where the cliffs mark the ancient river edges. The river's main tributary is the Waipa River, which has its confluence with the Waikato at Ngaruawahia.</code> |
| <code>what type of gas is produced during fermentation</code> | <code>Fermentation Fermentation reacts NADH with an endogenous, organic electron acceptor.[1] Usually this is pyruvate formed from sugar through glycolysis. The reaction produces NAD+ and an organic product, typical examples being ethanol, lactic acid, carbon dioxide, and hydrogen gas (H2). However, more exotic compounds can be produced by fermentation, such as butyric acid and acetone. Fermentation products contain chemical energy (they are not fully oxidized), but are considered waste products, since they cannot be metabolized further without the use of oxygen.</code> |
| <code>why was star wars episode iv released first</code> | <code>Star Wars (film) Star Wars (later retitled Star Wars: Episode IV – A New Hope) is a 1977 American epic space opera film written and directed by George Lucas. It is the first film in the original Star Wars trilogy and the beginning of the Star Wars franchise. Starring Mark Hamill, Harrison Ford, Carrie Fisher, Peter Cushing, Alec Guinness, David Prowse, James Earl Jones, Anthony Daniels, Kenny Baker, and Peter Mayhew, the film's plot focuses on the Rebel Alliance, led by Princess Leia (Fisher), and its attempt to destroy the Galactic Empire's space station, the Death Star. This conflict disrupts the isolated life of farmhand Luke Skywalker (Hamill), who inadvertently acquires two droids that possess stolen architectural plans for the Death Star. When the Empire begins a destructive search for the missing droids, Skywalker accompanies Jedi Master Obi-Wan Kenobi (Guinness) on a mission to return the plans to the Rebel Alliance and rescue Leia from her imprisonment by the Empire.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
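As a rough illustration (not the original training script), the loss configuration reported above corresponds to the following sentence-transformers instantiation; the base model name below is a placeholder, not taken from this card:

```python
from sentence_transformers import SentenceTransformer, losses, util

# Placeholder base model; the card does not restate the checkpoint here.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# MultipleNegativesRankingLoss with the parameters reported above:
# scale=20.0 and cosine similarity as the scoring function.
loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```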
### Training Hyperparameters
#### Non-Default Hyperparameters
- `overwrite_output_dir`: True
- `eval_strategy`: steps
- `num_train_epochs`: 5
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: True
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
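As a minimal sketch, assuming the Sentence Transformers v3 `SentenceTransformerTrainingArguments` API rather than the author's actual script, the non-default hyperparameters listed above could be expressed as follows; the output directory is a placeholder:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Only the non-default values from the list above are set explicitly;
# every other argument keeps its default. The output path is a placeholder.
args = SentenceTransformerTrainingArguments(
    output_dir="output/finetuned-model",
    overwrite_output_dir=True,
    eval_strategy="steps",
    num_train_epochs=5,
    load_best_model_at_end=True,
)
```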
### Training Logs
| Epoch | Step | Training Loss | all-nli-triplet loss | stsb loss | natural-questions loss | quora loss |
|:----------:|:--------:|:-------------:|:--------------------:|:----------:|:----------------------:|:----------:|
| 0.0487 | 200 | 2.0928 | - | - | - | - |
| 0.0973 | 400 | 2.2013 | - | - | - | - |
| 0.1460 | 600 | 1.7404 | - | - | - | - |
| 0.1946 | 800 | 1.9134 | - | - | - | - |
| **0.2433** | **1000** | **2.043** | **0.5161** | **6.2815** | **0.1172** | **0.0192** |
| 0.2920 | 1200 | 1.8817 | - | - | - | - |
| 0.3406 | 1400 | 1.7734 | - | - | - | - |
| 0.3893 | 1600 | 1.5935 | - | - | - | - |
| 0.4380 | 1800 | 1.6762 | - | - | - | - |
| 0.4866 | 2000 | 1.7031 | 0.4555 | 6.3907 | 0.0726 | 0.0198 |
| 0.5353 | 2200 | 1.8561 | - | - | - | - |
| 0.5839 | 2400 | 1.6742 | - | - | - | - |
| 0.6326 | 2600 | 1.456 | - | - | - | - |
| 0.6813 | 2800 | 1.6122 | - | - | - | - |
| 0.7299 | 3000 | 1.8851 | 0.4975 | 6.1758 | 0.0841 | 0.0208 |
| 0.7786 | 3200 | 1.5684 | - | - | - | - |
| 0.8273 | 3400 | 1.6535 | - | - | - | - |
| 0.8759 | 3600 | 1.5043 | - | - | - | - |
| 0.9246 | 3800 | 1.4768 | - | - | - | - |
| 0.9732 | 4000 | 1.686 | 0.4912 | 6.1600 | 0.0795 | 0.0170 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.3.1+cu121
- Accelerate: 1.0.0
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"MEDAL"
] |
Marqo/multilingual-e5-small | Marqo | sentence-similarity | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2402.05672",
"arxiv:2108.08787",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-04T01:08:08 | 2024-09-05T04:04:18 | 79 | 2 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: intfloat/multilingual-e5-small
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.79104477611939
- type: ap
value: 36.9996434842022
- type: f1
value: 67.95453679103099
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.64882226980728
- type: ap
value: 82.11942130026586
- type: f1
value: 69.87963421606715
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.8095952023988
- type: ap
value: 24.46869495579561
- type: f1
value: 63.00108480037597
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 64.186295503212
- type: ap
value: 15.496804690197042
- type: f1
value: 52.07153895475031
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 88.699325
- type: ap
value: 85.27039559917269
- type: f1
value: 88.65556295032513
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.69799999999999
- type: f1
value: 43.73187348654165
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.245999999999995
- type: f1
value: 39.3863530637684
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.394
- type: f1
value: 39.301223469483446
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.864
- type: f1
value: 37.97974261868003
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.682
- type: f1
value: 37.07399369768313
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.504
- type: f1
value: 36.62317273874278
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.061
- type: map_at_10
value: 31.703
- type: map_at_100
value: 32.967
- type: map_at_1000
value: 33.001000000000005
- type: map_at_3
value: 27.466
- type: map_at_5
value: 29.564
- type: mrr_at_1
value: 19.559
- type: mrr_at_10
value: 31.874999999999996
- type: mrr_at_100
value: 33.146
- type: mrr_at_1000
value: 33.18
- type: mrr_at_3
value: 27.667
- type: mrr_at_5
value: 29.74
- type: ndcg_at_1
value: 19.061
- type: ndcg_at_10
value: 39.062999999999995
- type: ndcg_at_100
value: 45.184000000000005
- type: ndcg_at_1000
value: 46.115
- type: ndcg_at_3
value: 30.203000000000003
- type: ndcg_at_5
value: 33.953
- type: precision_at_1
value: 19.061
- type: precision_at_10
value: 6.279999999999999
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 12.706999999999999
- type: precision_at_5
value: 9.431000000000001
- type: recall_at_1
value: 19.061
- type: recall_at_10
value: 62.802
- type: recall_at_100
value: 91.323
- type: recall_at_1000
value: 98.72
- type: recall_at_3
value: 38.122
- type: recall_at_5
value: 47.155
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 39.22266660528253
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 30.79980849482483
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 57.8790068352054
- type: mrr
value: 71.78791276436706
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.36328364043163
- type: cos_sim_spearman
value: 82.26211536195868
- type: euclidean_pearson
value: 80.3183865039173
- type: euclidean_spearman
value: 79.88495276296132
- type: manhattan_pearson
value: 80.14484480692127
- type: manhattan_spearman
value: 80.39279565980743
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.0375782881002
- type: f1
value: 97.86012526096033
- type: precision
value: 97.77139874739039
- type: recall
value: 98.0375782881002
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 93.35241030156286
- type: f1
value: 92.66050333846944
- type: precision
value: 92.3306919069631
- type: recall
value: 93.35241030156286
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 94.0699688257707
- type: f1
value: 93.50236693222492
- type: precision
value: 93.22791825424315
- type: recall
value: 94.0699688257707
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 89.25750394944708
- type: f1
value: 88.79234684921889
- type: precision
value: 88.57293312269616
- type: recall
value: 89.25750394944708
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 79.41558441558442
- type: f1
value: 79.25886487487219
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.747820820329736
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 27.045143830596146
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.252999999999997
- type: map_at_10
value: 31.655916666666666
- type: map_at_100
value: 32.680749999999996
- type: map_at_1000
value: 32.79483333333334
- type: map_at_3
value: 29.43691666666666
- type: map_at_5
value: 30.717416666666665
- type: mrr_at_1
value: 28.602750000000004
- type: mrr_at_10
value: 35.56875
- type: mrr_at_100
value: 36.3595
- type: mrr_at_1000
value: 36.427749999999996
- type: mrr_at_3
value: 33.586166666666664
- type: mrr_at_5
value: 34.73641666666666
- type: ndcg_at_1
value: 28.602750000000004
- type: ndcg_at_10
value: 36.06933333333334
- type: ndcg_at_100
value: 40.70141666666667
- type: ndcg_at_1000
value: 43.24341666666667
- type: ndcg_at_3
value: 32.307916666666664
- type: ndcg_at_5
value: 34.129999999999995
- type: precision_at_1
value: 28.602750000000004
- type: precision_at_10
value: 6.097666666666667
- type: precision_at_100
value: 0.9809166666666668
- type: precision_at_1000
value: 0.13766666666666663
- type: precision_at_3
value: 14.628166666666667
- type: precision_at_5
value: 10.266916666666667
- type: recall_at_1
value: 24.252999999999997
- type: recall_at_10
value: 45.31916666666667
- type: recall_at_100
value: 66.03575000000001
- type: recall_at_1000
value: 83.94708333333334
- type: recall_at_3
value: 34.71941666666666
- type: recall_at_5
value: 39.46358333333333
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.024000000000001
- type: map_at_10
value: 15.644
- type: map_at_100
value: 17.154
- type: map_at_1000
value: 17.345
- type: map_at_3
value: 13.028
- type: map_at_5
value: 14.251
- type: mrr_at_1
value: 19.674
- type: mrr_at_10
value: 29.826999999999998
- type: mrr_at_100
value: 30.935000000000002
- type: mrr_at_1000
value: 30.987
- type: mrr_at_3
value: 26.645000000000003
- type: mrr_at_5
value: 28.29
- type: ndcg_at_1
value: 19.674
- type: ndcg_at_10
value: 22.545
- type: ndcg_at_100
value: 29.207
- type: ndcg_at_1000
value: 32.912
- type: ndcg_at_3
value: 17.952
- type: ndcg_at_5
value: 19.363
- type: precision_at_1
value: 19.674
- type: precision_at_10
value: 7.212000000000001
- type: precision_at_100
value: 1.435
- type: precision_at_1000
value: 0.212
- type: precision_at_3
value: 13.507
- type: precision_at_5
value: 10.397
- type: recall_at_1
value: 9.024000000000001
- type: recall_at_10
value: 28.077999999999996
- type: recall_at_100
value: 51.403
- type: recall_at_1000
value: 72.406
- type: recall_at_3
value: 16.768
- type: recall_at_5
value: 20.737
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.012
- type: map_at_10
value: 17.138
- type: map_at_100
value: 24.146
- type: map_at_1000
value: 25.622
- type: map_at_3
value: 12.552
- type: map_at_5
value: 14.435
- type: mrr_at_1
value: 62.25000000000001
- type: mrr_at_10
value: 71.186
- type: mrr_at_100
value: 71.504
- type: mrr_at_1000
value: 71.514
- type: mrr_at_3
value: 69.333
- type: mrr_at_5
value: 70.408
- type: ndcg_at_1
value: 49.75
- type: ndcg_at_10
value: 37.76
- type: ndcg_at_100
value: 42.071
- type: ndcg_at_1000
value: 49.309
- type: ndcg_at_3
value: 41.644
- type: ndcg_at_5
value: 39.812999999999995
- type: precision_at_1
value: 62.25000000000001
- type: precision_at_10
value: 30.15
- type: precision_at_100
value: 9.753
- type: precision_at_1000
value: 1.9189999999999998
- type: precision_at_3
value: 45.667
- type: precision_at_5
value: 39.15
- type: recall_at_1
value: 8.012
- type: recall_at_10
value: 22.599
- type: recall_at_100
value: 48.068
- type: recall_at_1000
value: 71.328
- type: recall_at_3
value: 14.043
- type: recall_at_5
value: 17.124
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 42.455
- type: f1
value: 37.59462649781862
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.092
- type: map_at_10
value: 69.586
- type: map_at_100
value: 69.968
- type: map_at_1000
value: 69.982
- type: map_at_3
value: 67.48100000000001
- type: map_at_5
value: 68.915
- type: mrr_at_1
value: 62.166
- type: mrr_at_10
value: 73.588
- type: mrr_at_100
value: 73.86399999999999
- type: mrr_at_1000
value: 73.868
- type: mrr_at_3
value: 71.6
- type: mrr_at_5
value: 72.99
- type: ndcg_at_1
value: 62.166
- type: ndcg_at_10
value: 75.27199999999999
- type: ndcg_at_100
value: 76.816
- type: ndcg_at_1000
value: 77.09700000000001
- type: ndcg_at_3
value: 71.36
- type: ndcg_at_5
value: 73.785
- type: precision_at_1
value: 62.166
- type: precision_at_10
value: 9.716
- type: precision_at_100
value: 1.065
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 28.278
- type: precision_at_5
value: 18.343999999999998
- type: recall_at_1
value: 58.092
- type: recall_at_10
value: 88.73400000000001
- type: recall_at_100
value: 95.195
- type: recall_at_1000
value: 97.04599999999999
- type: recall_at_3
value: 78.45
- type: recall_at_5
value: 84.316
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.649
- type: map_at_10
value: 26.457000000000004
- type: map_at_100
value: 28.169
- type: map_at_1000
value: 28.352
- type: map_at_3
value: 23.305
- type: map_at_5
value: 25.169000000000004
- type: mrr_at_1
value: 32.407000000000004
- type: mrr_at_10
value: 40.922
- type: mrr_at_100
value: 41.931000000000004
- type: mrr_at_1000
value: 41.983
- type: mrr_at_3
value: 38.786
- type: mrr_at_5
value: 40.205999999999996
- type: ndcg_at_1
value: 32.407000000000004
- type: ndcg_at_10
value: 33.314
- type: ndcg_at_100
value: 40.312
- type: ndcg_at_1000
value: 43.685
- type: ndcg_at_3
value: 30.391000000000002
- type: ndcg_at_5
value: 31.525
- type: precision_at_1
value: 32.407000000000004
- type: precision_at_10
value: 8.966000000000001
- type: precision_at_100
value: 1.6019999999999999
- type: precision_at_1000
value: 0.22200000000000003
- type: precision_at_3
value: 20.165
- type: precision_at_5
value: 14.722
- type: recall_at_1
value: 16.649
- type: recall_at_10
value: 39.117000000000004
- type: recall_at_100
value: 65.726
- type: recall_at_1000
value: 85.784
- type: recall_at_3
value: 27.914
- type: recall_at_5
value: 33.289
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.253
- type: map_at_10
value: 56.16799999999999
- type: map_at_100
value: 57.06099999999999
- type: map_at_1000
value: 57.126
- type: map_at_3
value: 52.644999999999996
- type: map_at_5
value: 54.909
- type: mrr_at_1
value: 72.505
- type: mrr_at_10
value: 79.66
- type: mrr_at_100
value: 79.869
- type: mrr_at_1000
value: 79.88
- type: mrr_at_3
value: 78.411
- type: mrr_at_5
value: 79.19800000000001
- type: ndcg_at_1
value: 72.505
- type: ndcg_at_10
value: 65.094
- type: ndcg_at_100
value: 68.219
- type: ndcg_at_1000
value: 69.515
- type: ndcg_at_3
value: 59.99
- type: ndcg_at_5
value: 62.909000000000006
- type: precision_at_1
value: 72.505
- type: precision_at_10
value: 13.749
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 38.357
- type: precision_at_5
value: 25.313000000000002
- type: recall_at_1
value: 36.253
- type: recall_at_10
value: 68.744
- type: recall_at_100
value: 80.925
- type: recall_at_1000
value: 89.534
- type: recall_at_3
value: 57.535000000000004
- type: recall_at_5
value: 63.282000000000004
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 80.82239999999999
- type: ap
value: 75.65895781725314
- type: f1
value: 80.75880969095746
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.624
- type: map_at_10
value: 34.075
- type: map_at_100
value: 35.229
- type: map_at_1000
value: 35.276999999999994
- type: map_at_3
value: 30.245
- type: map_at_5
value: 32.42
- type: mrr_at_1
value: 22.264
- type: mrr_at_10
value: 34.638000000000005
- type: mrr_at_100
value: 35.744
- type: mrr_at_1000
value: 35.787
- type: mrr_at_3
value: 30.891000000000002
- type: mrr_at_5
value: 33.042
- type: ndcg_at_1
value: 22.264
- type: ndcg_at_10
value: 40.991
- type: ndcg_at_100
value: 46.563
- type: ndcg_at_1000
value: 47.743
- type: ndcg_at_3
value: 33.198
- type: ndcg_at_5
value: 37.069
- type: precision_at_1
value: 22.264
- type: precision_at_10
value: 6.5089999999999995
- type: precision_at_100
value: 0.9299999999999999
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 14.216999999999999
- type: precision_at_5
value: 10.487
- type: recall_at_1
value: 21.624
- type: recall_at_10
value: 62.303
- type: recall_at_100
value: 88.124
- type: recall_at_1000
value: 97.08
- type: recall_at_3
value: 41.099999999999994
- type: recall_at_5
value: 50.381
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.06703146374831
- type: f1
value: 90.86867815863172
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.46970977740209
- type: f1
value: 86.36832872036588
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.26951300867245
- type: f1
value: 88.93561193959502
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 84.22799874725963
- type: f1
value: 84.30490069236556
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.02007888131948
- type: f1
value: 85.39376041027991
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 85.34900542495481
- type: f1
value: 85.39859673336713
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.078431372549
- type: f1
value: 53.45071102002276
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 65.85798816568047
- type: f1
value: 46.53112748993529
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.96864576384256
- type: f1
value: 45.966703022829506
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 61.31537738803633
- type: f1
value: 45.52601712835461
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 66.29616349946218
- type: f1
value: 47.24166485726613
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.51537070524412
- type: f1
value: 49.463476319014276
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.06792199058508
- type: f1
value: 54.094921857502285
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.960322797579025
- type: f1
value: 48.547371223370945
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.425016812373904
- type: f1
value: 50.47069202054312
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.798251513113655
- type: f1
value: 57.05013069086648
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.37794216543376
- type: f1
value: 56.3607992649805
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 46.56018829858777
- type: f1
value: 43.87319715715134
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.9724277067922
- type: f1
value: 59.36480066245562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.72696704774715
- type: f1
value: 59.143595966615855
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.5971755211836
- type: f1
value: 59.169445724946726
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.29589778076665
- type: f1
value: 67.7577001808977
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.31136516476126
- type: f1
value: 64.52032955983242
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.54472091459314
- type: f1
value: 61.47903120066317
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.45595158036314
- type: f1
value: 58.0891846024637
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.47074646940149
- type: f1
value: 62.84830858877575
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.046402151983855
- type: f1
value: 55.269074430533195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.06523201075991
- type: f1
value: 61.35339643021369
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.954942837928726
- type: f1
value: 57.07035922704846
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.404169468728995
- type: f1
value: 53.94259011839138
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.16610625420309
- type: f1
value: 61.337103431499365
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 52.262945527908535
- type: f1
value: 49.7610691598921
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.54472091459314
- type: f1
value: 63.469099018440154
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.22797579018157
- type: f1
value: 64.89098471083001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.847343644922674
- type: f1
value: 47.8536963168393
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.45326160053799
- type: f1
value: 46.370078045805556
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 42.83120376597175
- type: f1
value: 39.68948521599982
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.5084061869536
- type: f1
value: 53.961876160401545
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.7895090786819
- type: f1
value: 61.134223684676
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.98991257565569
- type: f1
value: 52.579862862826296
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.90316072629456
- type: f1
value: 58.203024538290336
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.09818426361802
- type: f1
value: 54.22718458445455
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.991257565568255
- type: f1
value: 55.84892781767421
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.901143241425686
- type: f1
value: 52.25264332199797
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.96368527236047
- type: f1
value: 58.927243876153454
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.64223268325489
- type: f1
value: 62.340453718379706
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.52589105581708
- type: f1
value: 61.661113187022174
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.84599865501009
- type: f1
value: 64.59342572873005
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.81035642232684
- type: f1
value: 57.5169089806797
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.652238071815056
- type: f1
value: 53.22732406426353
- type: f1_weighted
value: 57.585586737209546
- type: main_score
value: 58.652238071815056
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.51647612642906
- type: f1
value: 54.33154780100043
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.985877605917956
- type: f1
value: 54.46187524463802
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.03026227303296
- type: f1
value: 62.34377392877748
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.567585743106925
- type: f1
value: 50.73770655983206
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.2595830531271
- type: f1
value: 53.657327291708626
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.82784129119032
- type: f1
value: 54.82518072665301
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.06859448554137
- type: f1
value: 63.00185280500495
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.91055817081371
- type: f1
value: 55.54116301224262
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.54404841963686
- type: f1
value: 59.57650946030184
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.27706792199059
- type: f1
value: 56.50010066083435
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.0719569603228
- type: f1
value: 61.817075925647956
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.23806321452591
- type: f1
value: 65.24917026029749
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.53530598520511
- type: f1
value: 61.71131132295768
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.04303967720243
- type: f1
value: 60.3950085685985
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.83591123066578
- type: f1
value: 54.95059828830849
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.62340282447881
- type: f1
value: 59.525159996498225
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.85406859448555
- type: f1
value: 59.129299095681276
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.76731674512441
- type: f1
value: 61.159560612627715
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.181573638197705
- type: f1
value: 46.98422176289957
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.92737054472092
- type: f1
value: 67.69135611952979
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.18964357767318
- type: f1
value: 68.46106138186214
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.0712844653665
- type: f1
value: 66.75545422473901
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4754539340955
- type: f1
value: 74.38427146553252
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.82515131136518
- type: f1
value: 69.63516462173847
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.70880968392737
- type: f1
value: 67.45420662567926
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.95494283792871
- type: f1
value: 65.06191009049222
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.75924680564896
- type: f1
value: 68.30833379585945
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.806321452589096
- type: f1
value: 63.273048243765054
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.68997982515133
- type: f1
value: 66.54703855381324
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.46940147948891
- type: f1
value: 65.91017343463396
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.49899125756556
- type: f1
value: 57.90333469917769
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.9219905850706
- type: f1
value: 67.23169403762938
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.486213853396094
- type: f1
value: 54.85282355583758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.04169468728985
- type: f1
value: 68.83833333320462
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.88702084734365
- type: f1
value: 74.04474735232299
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.63416274377943
- type: f1
value: 55.11332211687954
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.23604572965702
- type: f1
value: 50.86529813991055
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.62407531943511
- type: f1
value: 43.63485467164535
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.15601882985878
- type: f1
value: 57.522837510959924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.84532616005382
- type: f1
value: 69.60021127179697
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.65770006724949
- type: f1
value: 55.84219135523227
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.53665097511768
- type: f1
value: 65.09087787792639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.31405514458642
- type: f1
value: 58.06135303831491
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.88231338264964
- type: f1
value: 62.751099407787926
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.86012104909213
- type: f1
value: 56.29118323058282
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.37390719569602
- type: f1
value: 66.27922244885102
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.8675184936113
- type: f1
value: 70.22146529932019
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.2212508406187
- type: f1
value: 67.77454802056282
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.18090114324143
- type: f1
value: 68.03737625431621
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.65030262273034
- type: f1
value: 63.792945486912856
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.772749631087066
- type: f1
value: 63.4539101720024
- type: f1_weighted
value: 62.778603897469566
- type: main_score
value: 63.772749631087066
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.17821116341627
- type: f1
value: 59.3935969827171
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.86146603900471
- type: f1
value: 60.133692735032376
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.89441829186282
- type: f1
value: 70.03064076194089
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.15063887020847
- type: f1
value: 56.23326278499678
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.43846671149966
- type: f1
value: 57.70440450281974
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.8507061197041
- type: f1
value: 59.22916396061171
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.65568258238063
- type: f1
value: 69.90736239440633
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.8843308675185
- type: f1
value: 59.30332663713599
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.05312710154674
- type: f1
value: 67.44024062594775
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.111634162743776
- type: f1
value: 60.89083013084519
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.44115669132482
- type: f1
value: 67.92227541674552
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4687289845326
- type: f1
value: 74.16376793486025
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.31876260928043
- type: f1
value: 68.5246745215607
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.90431696479766
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.259158476693774
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.28445330838555
- type: mrr
value: 31.15758529581164
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.353
- type: map_at_10
value: 11.565
- type: map_at_100
value: 14.097000000000001
- type: map_at_1000
value: 15.354999999999999
- type: map_at_3
value: 8.749
- type: map_at_5
value: 9.974
- type: mrr_at_1
value: 42.105
- type: mrr_at_10
value: 50.589
- type: mrr_at_100
value: 51.187000000000005
- type: mrr_at_1000
value: 51.233
- type: mrr_at_3
value: 48.246
- type: mrr_at_5
value: 49.546
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 31.009999999999998
- type: ndcg_at_100
value: 28.026
- type: ndcg_at_1000
value: 36.905
- type: ndcg_at_3
value: 35.983
- type: ndcg_at_5
value: 33.764
- type: precision_at_1
value: 42.105
- type: precision_at_10
value: 22.786
- type: precision_at_100
value: 6.916
- type: precision_at_1000
value: 1.981
- type: precision_at_3
value: 33.333
- type: precision_at_5
value: 28.731
- type: recall_at_1
value: 5.353
- type: recall_at_10
value: 15.039
- type: recall_at_100
value: 27.348
- type: recall_at_1000
value: 59.453
- type: recall_at_3
value: 9.792
- type: recall_at_5
value: 11.882
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.852
- type: map_at_10
value: 48.924
- type: map_at_100
value: 49.854
- type: map_at_1000
value: 49.886
- type: map_at_3
value: 44.9
- type: map_at_5
value: 47.387
- type: mrr_at_1
value: 38.035999999999994
- type: mrr_at_10
value: 51.644
- type: mrr_at_100
value: 52.339
- type: mrr_at_1000
value: 52.35999999999999
- type: mrr_at_3
value: 48.421
- type: mrr_at_5
value: 50.468999999999994
- type: ndcg_at_1
value: 38.007000000000005
- type: ndcg_at_10
value: 56.293000000000006
- type: ndcg_at_100
value: 60.167
- type: ndcg_at_1000
value: 60.916000000000004
- type: ndcg_at_3
value: 48.903999999999996
- type: ndcg_at_5
value: 52.978
- type: precision_at_1
value: 38.007000000000005
- type: precision_at_10
value: 9.041
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 22.084
- type: precision_at_5
value: 15.608
- type: recall_at_1
value: 33.852
- type: recall_at_10
value: 75.893
- type: recall_at_100
value: 92.589
- type: recall_at_1000
value: 98.153
- type: recall_at_3
value: 56.969
- type: recall_at_5
value: 66.283
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.174
- type: map_at_10
value: 82.891
- type: map_at_100
value: 83.545
- type: map_at_1000
value: 83.56700000000001
- type: map_at_3
value: 79.944
- type: map_at_5
value: 81.812
- type: mrr_at_1
value: 79.67999999999999
- type: mrr_at_10
value: 86.279
- type: mrr_at_100
value: 86.39
- type: mrr_at_1000
value: 86.392
- type: mrr_at_3
value: 85.21
- type: mrr_at_5
value: 85.92999999999999
- type: ndcg_at_1
value: 79.69000000000001
- type: ndcg_at_10
value: 86.929
- type: ndcg_at_100
value: 88.266
- type: ndcg_at_1000
value: 88.428
- type: ndcg_at_3
value: 83.899
- type: ndcg_at_5
value: 85.56700000000001
- type: precision_at_1
value: 79.69000000000001
- type: precision_at_10
value: 13.161000000000001
- type: precision_at_100
value: 1.513
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.603
- type: precision_at_5
value: 24.138
- type: recall_at_1
value: 69.174
- type: recall_at_10
value: 94.529
- type: recall_at_100
value: 99.15
- type: recall_at_1000
value: 99.925
- type: recall_at_3
value: 85.86200000000001
- type: recall_at_5
value: 90.501
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 39.13064340585255
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 58.97884249325877
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.4680000000000004
- type: map_at_10
value: 7.865
- type: map_at_100
value: 9.332
- type: map_at_1000
value: 9.587
- type: map_at_3
value: 5.800000000000001
- type: map_at_5
value: 6.8790000000000004
- type: mrr_at_1
value: 17.0
- type: mrr_at_10
value: 25.629
- type: mrr_at_100
value: 26.806
- type: mrr_at_1000
value: 26.889000000000003
- type: mrr_at_3
value: 22.8
- type: mrr_at_5
value: 24.26
- type: ndcg_at_1
value: 17.0
- type: ndcg_at_10
value: 13.895
- type: ndcg_at_100
value: 20.491999999999997
- type: ndcg_at_1000
value: 25.759999999999998
- type: ndcg_at_3
value: 13.347999999999999
- type: ndcg_at_5
value: 11.61
- type: precision_at_1
value: 17.0
- type: precision_at_10
value: 7.090000000000001
- type: precision_at_100
value: 1.669
- type: precision_at_1000
value: 0.294
- type: precision_at_3
value: 12.3
- type: precision_at_5
value: 10.02
- type: recall_at_1
value: 3.4680000000000004
- type: recall_at_10
value: 14.363000000000001
- type: recall_at_100
value: 33.875
- type: recall_at_1000
value: 59.711999999999996
- type: recall_at_3
value: 7.483
- type: recall_at_5
value: 10.173
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.04084311714061
- type: cos_sim_spearman
value: 77.51342467443078
- type: euclidean_pearson
value: 80.0321166028479
- type: euclidean_spearman
value: 77.29249114733226
- type: manhattan_pearson
value: 80.03105964262431
- type: manhattan_spearman
value: 77.22373689514794
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.1680158034387
- type: cos_sim_spearman
value: 76.55983344071117
- type: euclidean_pearson
value: 79.75266678300143
- type: euclidean_spearman
value: 75.34516823467025
- type: manhattan_pearson
value: 79.75959151517357
- type: manhattan_spearman
value: 75.42330344141912
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 76.48898993209346
- type: cos_sim_spearman
value: 76.96954120323366
- type: euclidean_pearson
value: 76.94139109279668
- type: euclidean_spearman
value: 76.85860283201711
- type: manhattan_pearson
value: 76.6944095091912
- type: manhattan_spearman
value: 76.61096912972553
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 77.85082366246944
- type: cos_sim_spearman
value: 75.52053350101731
- type: euclidean_pearson
value: 77.1165845070926
- type: euclidean_spearman
value: 75.31216065884388
- type: manhattan_pearson
value: 77.06193941833494
- type: manhattan_spearman
value: 75.31003701700112
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.36305246526497
- type: cos_sim_spearman
value: 87.11704613927415
- type: euclidean_pearson
value: 86.04199125810939
- type: euclidean_spearman
value: 86.51117572414263
- type: manhattan_pearson
value: 86.0805106816633
- type: manhattan_spearman
value: 86.52798366512229
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.18536255599724
- type: cos_sim_spearman
value: 83.63377151025418
- type: euclidean_pearson
value: 83.24657467993141
- type: euclidean_spearman
value: 84.02751481993825
- type: manhattan_pearson
value: 83.11941806582371
- type: manhattan_spearman
value: 83.84251281019304
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 78.95816528475514
- type: cos_sim_spearman
value: 78.86607380120462
- type: euclidean_pearson
value: 78.51268699230545
- type: euclidean_spearman
value: 79.11649316502229
- type: manhattan_pearson
value: 78.32367302808157
- type: manhattan_spearman
value: 78.90277699624637
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.89126914997624
- type: cos_sim_spearman
value: 73.0296921832678
- type: euclidean_pearson
value: 71.50385903677738
- type: euclidean_spearman
value: 73.13368899716289
- type: manhattan_pearson
value: 71.47421463379519
- type: manhattan_spearman
value: 73.03383242946575
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 59.22923684492637
- type: cos_sim_spearman
value: 57.41013211368396
- type: euclidean_pearson
value: 61.21107388080905
- type: euclidean_spearman
value: 60.07620768697254
- type: manhattan_pearson
value: 59.60157142786555
- type: manhattan_spearman
value: 59.14069604103739
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.24345978774299
- type: cos_sim_spearman
value: 77.24225743830719
- type: euclidean_pearson
value: 76.66226095469165
- type: euclidean_spearman
value: 77.60708820493146
- type: manhattan_pearson
value: 76.05303324760429
- type: manhattan_spearman
value: 76.96353149912348
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.50879160160852
- type: cos_sim_spearman
value: 86.43594662965224
- type: euclidean_pearson
value: 86.06846012826577
- type: euclidean_spearman
value: 86.02041395794136
- type: manhattan_pearson
value: 86.10916255616904
- type: manhattan_spearman
value: 86.07346068198953
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 58.39803698977196
- type: cos_sim_spearman
value: 55.96910950423142
- type: euclidean_pearson
value: 58.17941175613059
- type: euclidean_spearman
value: 55.03019330522745
- type: manhattan_pearson
value: 57.333358138183286
- type: manhattan_spearman
value: 54.04614023149965
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 70.98304089637197
- type: cos_sim_spearman
value: 72.44071656215888
- type: euclidean_pearson
value: 72.19224359033983
- type: euclidean_spearman
value: 73.89871188913025
- type: manhattan_pearson
value: 71.21098311547406
- type: manhattan_spearman
value: 72.93405764824821
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.99792397466308
- type: cos_sim_spearman
value: 84.83824377879495
- type: euclidean_pearson
value: 85.70043288694438
- type: euclidean_spearman
value: 84.70627558703686
- type: manhattan_pearson
value: 85.89570850150801
- type: manhattan_spearman
value: 84.95806105313007
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.21850322994712
- type: cos_sim_spearman
value: 72.28669398117248
- type: euclidean_pearson
value: 73.40082510412948
- type: euclidean_spearman
value: 73.0326539281865
- type: manhattan_pearson
value: 71.8659633964841
- type: manhattan_spearman
value: 71.57817425823303
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.80921368595645
- type: cos_sim_spearman
value: 77.33209091229315
- type: euclidean_pearson
value: 76.53159540154829
- type: euclidean_spearman
value: 78.17960842810093
- type: manhattan_pearson
value: 76.13530186637601
- type: manhattan_spearman
value: 78.00701437666875
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.74980608267349
- type: cos_sim_spearman
value: 75.37597374318821
- type: euclidean_pearson
value: 74.90506081911661
- type: euclidean_spearman
value: 75.30151613124521
- type: manhattan_pearson
value: 74.62642745918002
- type: manhattan_spearman
value: 75.18619716592303
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.632662289205584
- type: cos_sim_spearman
value: 60.938543391610914
- type: euclidean_pearson
value: 62.113200529767056
- type: euclidean_spearman
value: 61.410312633261164
- type: manhattan_pearson
value: 61.75494698945686
- type: manhattan_spearman
value: 60.92726195322362
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 45.283470551557244
- type: cos_sim_spearman
value: 53.44833015864201
- type: euclidean_pearson
value: 41.17892011120893
- type: euclidean_spearman
value: 53.81441383126767
- type: manhattan_pearson
value: 41.17482200420659
- type: manhattan_spearman
value: 53.82180269276363
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.5069165306236
- type: cos_sim_spearman
value: 66.87803259033826
- type: euclidean_pearson
value: 63.5428979418236
- type: euclidean_spearman
value: 66.9293576586897
- type: manhattan_pearson
value: 63.59789526178922
- type: manhattan_spearman
value: 66.86555009875066
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 28.23026196280264
- type: cos_sim_spearman
value: 35.79397812652861
- type: euclidean_pearson
value: 17.828102102767353
- type: euclidean_spearman
value: 35.721501145568894
- type: manhattan_pearson
value: 17.77134274219677
- type: manhattan_spearman
value: 35.98107902846267
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 56.51946541393812
- type: cos_sim_spearman
value: 63.714686006214485
- type: euclidean_pearson
value: 58.32104651305898
- type: euclidean_spearman
value: 62.237110895702216
- type: manhattan_pearson
value: 58.579416468759185
- type: manhattan_spearman
value: 62.459738981727
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.76009839569795
- type: cos_sim_spearman
value: 56.65188431953149
- type: euclidean_pearson
value: 50.997682160915595
- type: euclidean_spearman
value: 55.99910008818135
- type: manhattan_pearson
value: 50.76220659606342
- type: manhattan_spearman
value: 55.517347595391456
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cosine_pearson
value: 50.724322379215934
- type: cosine_spearman
value: 59.90449732164651
- type: euclidean_pearson
value: 50.227545226784024
- type: euclidean_spearman
value: 59.898906527601085
- type: main_score
value: 59.90449732164651
- type: manhattan_pearson
value: 50.21762139819405
- type: manhattan_spearman
value: 59.761039813759
- type: pearson
value: 50.724322379215934
- type: spearman
value: 59.90449732164651
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.717524559088005
- type: cos_sim_spearman
value: 66.83570886252286
- type: euclidean_pearson
value: 58.41338625505467
- type: euclidean_spearman
value: 66.68991427704938
- type: manhattan_pearson
value: 58.78638572916807
- type: manhattan_spearman
value: 66.58684161046335
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.2962042954962
- type: cos_sim_spearman
value: 76.58255504852025
- type: euclidean_pearson
value: 75.70983192778257
- type: euclidean_spearman
value: 77.4547684870542
- type: manhattan_pearson
value: 75.75565853870485
- type: manhattan_spearman
value: 76.90208974949428
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.47396266924846
- type: cos_sim_spearman
value: 56.492267162048606
- type: euclidean_pearson
value: 55.998505203070195
- type: euclidean_spearman
value: 56.46447012960222
- type: manhattan_pearson
value: 54.873172394430995
- type: manhattan_spearman
value: 56.58111534551218
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 69.87177267688686
- type: cos_sim_spearman
value: 74.57160943395763
- type: euclidean_pearson
value: 70.88330406826788
- type: euclidean_spearman
value: 74.29767636038422
- type: manhattan_pearson
value: 71.38245248369536
- type: manhattan_spearman
value: 74.53102232732175
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.80225656959544
- type: cos_sim_spearman
value: 76.52646173725735
- type: euclidean_pearson
value: 73.95710720200799
- type: euclidean_spearman
value: 76.54040031984111
- type: manhattan_pearson
value: 73.89679971946774
- type: manhattan_spearman
value: 76.60886958161574
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.70844249898789
- type: cos_sim_spearman
value: 72.68571783670241
- type: euclidean_pearson
value: 72.38800772441031
- type: euclidean_spearman
value: 72.86804422703312
- type: manhattan_pearson
value: 71.29840508203515
- type: manhattan_spearman
value: 71.86264441749513
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.647478923935694
- type: cos_sim_spearman
value: 63.74453623540931
- type: euclidean_pearson
value: 59.60138032437505
- type: euclidean_spearman
value: 63.947930832166065
- type: manhattan_pearson
value: 58.59735509491861
- type: manhattan_spearman
value: 62.082503844627404
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.8722516867162
- type: cos_sim_spearman
value: 71.81208592523012
- type: euclidean_pearson
value: 67.95315252165956
- type: euclidean_spearman
value: 73.00749822046009
- type: manhattan_pearson
value: 68.07884688638924
- type: manhattan_spearman
value: 72.34210325803069
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.5405814240949
- type: cos_sim_spearman
value: 60.56838649023775
- type: euclidean_pearson
value: 53.011731611314104
- type: euclidean_spearman
value: 58.533194841668426
- type: manhattan_pearson
value: 53.623067729338494
- type: manhattan_spearman
value: 58.018756154446926
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 13.611046866216112
- type: cos_sim_spearman
value: 28.238192909158492
- type: euclidean_pearson
value: 22.16189199885129
- type: euclidean_spearman
value: 35.012895679076564
- type: manhattan_pearson
value: 21.969771178698387
- type: manhattan_spearman
value: 32.456985088607475
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 74.58077407011655
- type: cos_sim_spearman
value: 84.51542547285167
- type: euclidean_pearson
value: 74.64613843596234
- type: euclidean_spearman
value: 84.51542547285167
- type: manhattan_pearson
value: 75.15335973101396
- type: manhattan_spearman
value: 84.51542547285167
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.0739825531578
- type: cos_sim_spearman
value: 84.01057479311115
- type: euclidean_pearson
value: 83.85453227433344
- type: euclidean_spearman
value: 84.01630226898655
- type: manhattan_pearson
value: 83.75323603028978
- type: manhattan_spearman
value: 83.89677983727685
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.12945623123957
- type: mrr
value: 93.87738713719106
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.983000000000004
- type: map_at_10
value: 62.946000000000005
- type: map_at_100
value: 63.514
- type: map_at_1000
value: 63.554
- type: map_at_3
value: 60.183
- type: map_at_5
value: 61.672000000000004
- type: mrr_at_1
value: 55.667
- type: mrr_at_10
value: 64.522
- type: mrr_at_100
value: 64.957
- type: mrr_at_1000
value: 64.995
- type: mrr_at_3
value: 62.388999999999996
- type: mrr_at_5
value: 63.639
- type: ndcg_at_1
value: 55.667
- type: ndcg_at_10
value: 67.704
- type: ndcg_at_100
value: 70.299
- type: ndcg_at_1000
value: 71.241
- type: ndcg_at_3
value: 62.866
- type: ndcg_at_5
value: 65.16999999999999
- type: precision_at_1
value: 55.667
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 24.444
- type: precision_at_5
value: 16.133
- type: recall_at_1
value: 52.983000000000004
- type: recall_at_10
value: 80.656
- type: recall_at_100
value: 92.5
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 67.744
- type: recall_at_5
value: 73.433
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72772277227723
- type: cos_sim_ap
value: 92.17845897992215
- type: cos_sim_f1
value: 85.9746835443038
- type: cos_sim_precision
value: 87.07692307692308
- type: cos_sim_recall
value: 84.89999999999999
- type: dot_accuracy
value: 99.3039603960396
- type: dot_ap
value: 60.70244020124878
- type: dot_f1
value: 59.92742353551063
- type: dot_precision
value: 62.21743810548978
- type: dot_recall
value: 57.8
- type: euclidean_accuracy
value: 99.71683168316832
- type: euclidean_ap
value: 91.53997039964659
- type: euclidean_f1
value: 84.88372093023257
- type: euclidean_precision
value: 90.02242152466367
- type: euclidean_recall
value: 80.30000000000001
- type: manhattan_accuracy
value: 99.72376237623763
- type: manhattan_ap
value: 91.80756777790289
- type: manhattan_f1
value: 85.48468106479157
- type: manhattan_precision
value: 85.8728557013118
- type: manhattan_recall
value: 85.1
- type: max_accuracy
value: 99.72772277227723
- type: max_ap
value: 92.17845897992215
- type: max_f1
value: 85.9746835443038
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 53.52464042600003
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.071631948736
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.19552407604654
- type: mrr
value: 49.95269130379425
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.345293033095427
- type: cos_sim_spearman
value: 29.976931423258403
- type: dot_pearson
value: 27.047078008958408
- type: dot_spearman
value: 27.75894368380218
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.706
- type: map_at_100
value: 9.634
- type: map_at_1000
value: 23.665
- type: map_at_3
value: 0.5950000000000001
- type: map_at_5
value: 0.95
- type: mrr_at_1
value: 86.0
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91.0
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 80.0
- type: ndcg_at_10
value: 72.573
- type: ndcg_at_100
value: 53.954
- type: ndcg_at_1000
value: 47.760999999999996
- type: ndcg_at_3
value: 76.173
- type: ndcg_at_5
value: 75.264
- type: precision_at_1
value: 86.0
- type: precision_at_10
value: 76.4
- type: precision_at_100
value: 55.50000000000001
- type: precision_at_1000
value: 21.802
- type: precision_at_3
value: 81.333
- type: precision_at_5
value: 80.4
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 1.925
- type: recall_at_100
value: 12.762
- type: recall_at_1000
value: 44.946000000000005
- type: recall_at_3
value: 0.634
- type: recall_at_5
value: 1.051
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.0
- type: f1
value: 88.55666666666666
- type: precision
value: 87.46166666666667
- type: recall
value: 91.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.22543352601156
- type: f1
value: 51.03220478943021
- type: precision
value: 48.8150289017341
- type: recall
value: 57.22543352601156
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.58536585365854
- type: f1
value: 39.66870798578116
- type: precision
value: 37.416085946573745
- type: recall
value: 46.58536585365854
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.7
- type: f1
value: 86.77999999999999
- type: precision
value: 85.45333333333332
- type: recall
value: 89.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.58333333333331
- type: precision
value: 96.2
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.3
- type: precision
value: 89.31666666666668
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.9
- type: f1
value: 83.67190476190476
- type: precision
value: 82.23333333333332
- type: recall
value: 86.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.0
- type: f1
value: 42.23229092632078
- type: precision
value: 39.851634683724235
- type: recall
value: 50.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.3
- type: f1
value: 70.86190476190477
- type: precision
value: 68.68777777777777
- type: recall
value: 76.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.073170731707314
- type: f1
value: 50.658958927251604
- type: precision
value: 48.26480836236933
- type: recall
value: 57.073170731707314
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.2
- type: f1
value: 62.156507936507936
- type: precision
value: 59.84964285714286
- type: recall
value: 68.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.52126366950182
- type: f1
value: 72.8496210148701
- type: precision
value: 70.92171498003819
- type: recall
value: 77.52126366950182
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.78260869565217
- type: f1
value: 65.32422360248447
- type: precision
value: 63.063067367415194
- type: recall
value: 70.78260869565217
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.43478260869566
- type: f1
value: 73.02608695652172
- type: precision
value: 70.63768115942028
- type: recall
value: 78.43478260869566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.9
- type: f1
value: 55.309753694581275
- type: precision
value: 53.130476190476195
- type: recall
value: 60.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.89999999999999
- type: f1
value: 67.92023809523809
- type: precision
value: 65.82595238095237
- type: recall
value: 72.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.80337756332931
- type: f1
value: 39.42174900558496
- type: precision
value: 36.97101116280851
- type: recall
value: 46.80337756332931
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.8
- type: f1
value: 86.79
- type: precision
value: 85.375
- type: recall
value: 89.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.199999999999996
- type: f1
value: 39.95484348984349
- type: precision
value: 37.561071428571424
- type: recall
value: 47.199999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.8
- type: f1
value: 84.68190476190475
- type: precision
value: 83.275
- type: recall
value: 87.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.76190476190476
- type: f1
value: 42.14965986394558
- type: precision
value: 39.96743626743626
- type: recall
value: 48.76190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.10000000000001
- type: f1
value: 59.58580086580086
- type: precision
value: 57.150238095238095
- type: recall
value: 66.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.3
- type: f1
value: 84.0
- type: precision
value: 82.48666666666666
- type: recall
value: 87.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 87.79523809523809
- type: precision
value: 86.6
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.0
- type: f1
value: 83.81
- type: precision
value: 82.36666666666666
- type: recall
value: 87.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.9
- type: f1
value: 57.76533189033189
- type: precision
value: 55.50595238095239
- type: recall
value: 63.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.1
- type: f1
value: 71.83690476190478
- type: precision
value: 70.04928571428573
- type: recall
value: 76.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.3
- type: f1
value: 59.32626984126984
- type: precision
value: 56.62535714285713
- type: recall
value: 66.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 89.76666666666667
- type: main_score
value: 89.76666666666667
- type: precision
value: 88.64999999999999
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.10000000000001
- type: precision
value: 90.16666666666666
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.71428571428571
- type: f1
value: 82.29142600436403
- type: precision
value: 80.8076626877166
- type: recall
value: 85.71428571428571
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.88888888888889
- type: f1
value: 85.7834757834758
- type: precision
value: 84.43732193732193
- type: recall
value: 88.88888888888889
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 85.67190476190476
- type: precision
value: 84.43333333333332
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.72727272727273
- type: f1
value: 78.21969696969695
- type: precision
value: 76.18181818181819
- type: recall
value: 82.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 61.0062893081761
- type: f1
value: 55.13976240391334
- type: precision
value: 52.92112499659669
- type: recall
value: 61.0062893081761
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.86666666666666
- type: precision
value: 85.69166666666668
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.54085603112841
- type: f1
value: 68.56031128404669
- type: precision
value: 66.53047989623866
- type: recall
value: 73.54085603112841
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 43.58974358974359
- type: f1
value: 36.45299145299145
- type: precision
value: 33.81155881155882
- type: recall
value: 43.58974358974359
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.599999999999994
- type: f1
value: 53.264689754689755
- type: precision
value: 50.869166666666665
- type: recall
value: 59.599999999999994
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.2
- type: f1
value: 81.61666666666665
- type: precision
value: 80.02833333333335
- type: recall
value: 85.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.78504672897196
- type: f1
value: 58.00029669188548
- type: precision
value: 55.815809968847354
- type: recall
value: 63.78504672897196
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.5
- type: f1
value: 61.518333333333345
- type: precision
value: 59.622363699102834
- type: recall
value: 66.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.6
- type: f1
value: 85.60222222222221
- type: precision
value: 84.27916666666665
- type: recall
value: 88.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.699999999999996
- type: f1
value: 52.732375957375965
- type: precision
value: 50.63214035964035
- type: recall
value: 58.699999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 89.99666666666667
- type: precision
value: 89.03333333333333
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.10000000000001
- type: f1
value: 87.55666666666667
- type: precision
value: 86.36166666666668
- type: recall
value: 90.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 88.89000000000001
- type: precision
value: 87.71166666666666
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.7
- type: f1
value: 60.67427750410509
- type: precision
value: 58.71785714285714
- type: recall
value: 65.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.39999999999999
- type: f1
value: 81.93190476190475
- type: precision
value: 80.37833333333333
- type: recall
value: 85.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.833333333333336
- type: f1
value: 42.006625781625786
- type: precision
value: 40.077380952380956
- type: recall
value: 47.833333333333336
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 10.4
- type: f1
value: 8.24465007215007
- type: precision
value: 7.664597069597071
- type: recall
value: 10.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.6
- type: f1
value: 77.76333333333334
- type: precision
value: 75.57833333333332
- type: recall
value: 82.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 52.67857142857143
- type: f1
value: 44.302721088435376
- type: precision
value: 41.49801587301587
- type: recall
value: 52.67857142857143
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 28.3205268935236
- type: f1
value: 22.426666605171157
- type: precision
value: 20.685900116470915
- type: recall
value: 28.3205268935236
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 22.7
- type: f1
value: 17.833970473970474
- type: precision
value: 16.407335164835164
- type: recall
value: 22.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 89.92999999999999
- type: precision
value: 88.87
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 89.25
- type: precision
value: 88.21666666666667
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.19999999999999
- type: f1
value: 63.38269841269841
- type: precision
value: 61.14773809523809
- type: recall
value: 69.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.8
- type: f1
value: 42.839915639915645
- type: precision
value: 40.770287114845935
- type: recall
value: 48.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.8
- type: f1
value: 85.90666666666668
- type: precision
value: 84.54166666666666
- type: recall
value: 88.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.6
- type: f1
value: 40.85892920804686
- type: precision
value: 38.838223114604695
- type: recall
value: 46.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.0
- type: f1
value: 80.14190476190475
- type: precision
value: 78.45333333333333
- type: recall
value: 84.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5
- type: f1
value: 87.78333333333333
- type: precision
value: 86.5
- type: recall
value: 90.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.5
- type: f1
value: 69.48397546897547
- type: precision
value: 67.51869047619049
- type: recall
value: 74.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 32.846715328467155
- type: f1
value: 27.828177499710343
- type: precision
value: 26.63451511991658
- type: recall
value: 32.846715328467155
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.0
- type: f1
value: 6.07664116764988
- type: precision
value: 5.544177607179943
- type: recall
value: 8.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.38555555555554
- type: precision
value: 82.91583333333334
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.5
- type: f1
value: 84.08333333333331
- type: precision
value: 82.47333333333333
- type: recall
value: 87.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.95238095238095
- type: f1
value: 76.13095238095238
- type: precision
value: 74.05753968253967
- type: recall
value: 80.95238095238095
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.799999999999999
- type: f1
value: 6.971422975172975
- type: precision
value: 6.557814916172301
- type: recall
value: 8.799999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.099378881987576
- type: f1
value: 37.01649742022413
- type: precision
value: 34.69420618488942
- type: recall
value: 44.099378881987576
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.3
- type: f1
value: 80.32666666666667
- type: precision
value: 78.60666666666665
- type: recall
value: 84.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.5
- type: f1
value: 90.49666666666666
- type: precision
value: 89.56666666666668
- type: recall
value: 92.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 10.0
- type: f1
value: 8.268423529875141
- type: precision
value: 7.878118605532398
- type: recall
value: 10.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.22077922077922
- type: f1
value: 74.27128427128426
- type: precision
value: 72.28715728715729
- type: recall
value: 79.22077922077922
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.64885496183206
- type: f1
value: 58.87495456197747
- type: precision
value: 55.992366412213734
- type: recall
value: 65.64885496183206
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.06986899563319
- type: f1
value: 94.78408539543909
- type: precision
value: 94.15332362930616
- type: recall
value: 96.06986899563319
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.2
- type: f1
value: 71.72571428571428
- type: precision
value: 69.41000000000001
- type: recall
value: 77.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.4406779661017
- type: f1
value: 83.2391713747646
- type: precision
value: 81.74199623352166
- type: recall
value: 86.4406779661017
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.4
- type: f1
value: 6.017828743398003
- type: precision
value: 5.4829865484756795
- type: recall
value: 8.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.5
- type: f1
value: 79.74833333333333
- type: precision
value: 78.04837662337664
- type: recall
value: 83.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.4
- type: f1
value: 54.467301587301584
- type: precision
value: 52.23242424242424
- type: recall
value: 60.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.9
- type: f1
value: 69.68699134199134
- type: precision
value: 67.59873015873016
- type: recall
value: 74.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.0
- type: f1
value: 84.9652380952381
- type: precision
value: 83.66166666666666
- type: recall
value: 88.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.1
- type: f1
value: 7.681244588744588
- type: precision
value: 7.370043290043291
- type: recall
value: 9.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.9651474530831
- type: f1
value: 76.84220605132133
- type: precision
value: 75.19606398962966
- type: recall
value: 80.9651474530831
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.9
- type: f1
value: 83.705
- type: precision
value: 82.3120634920635
- type: recall
value: 86.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 29.64426877470356
- type: f1
value: 23.98763072676116
- type: precision
value: 22.506399397703746
- type: recall
value: 29.64426877470356
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.4225352112676
- type: f1
value: 62.84037558685445
- type: precision
value: 59.56572769953053
- type: recall
value: 70.4225352112676
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 19.64071856287425
- type: f1
value: 15.125271011207756
- type: precision
value: 13.865019261197494
- type: recall
value: 19.64071856287425
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.80666666666666
- type: precision
value: 86.70833333333331
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.15270935960591
- type: f1
value: 18.407224958949097
- type: precision
value: 16.982385430661292
- type: recall
value: 23.15270935960591
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.98591549295775
- type: f1
value: 49.94718309859154
- type: precision
value: 47.77864154624717
- type: recall
value: 55.98591549295775
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.07692307692307
- type: f1
value: 66.74358974358974
- type: precision
value: 64.06837606837607
- type: recall
value: 73.07692307692307
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.25
- type: precision
value: 92.43333333333332
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.78705636743215
- type: f1
value: 31.63899658680452
- type: precision
value: 29.72264397629742
- type: recall
value: 37.78705636743215
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.6
- type: f1
value: 16.91697302697303
- type: precision
value: 15.71225147075147
- type: recall
value: 21.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.01628664495115
- type: f1
value: 81.38514037536838
- type: precision
value: 79.83170466883823
- type: recall
value: 85.01628664495115
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.39999999999999
- type: f1
value: 79.96380952380952
- type: precision
value: 78.48333333333333
- type: recall
value: 83.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.2
- type: f1
value: 79.26190476190476
- type: precision
value: 77.58833333333334
- type: recall
value: 83.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.59055118110236
- type: f1
value: 71.66854143232096
- type: precision
value: 70.30183727034121
- type: recall
value: 75.59055118110236
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.5
- type: f1
value: 59.26095238095238
- type: precision
value: 56.81909090909092
- type: recall
value: 65.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.26315789473685
- type: f1
value: 47.986523325858506
- type: precision
value: 45.33950006595436
- type: recall
value: 55.26315789473685
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.89999999999999
- type: f1
value: 78.835
- type: precision
value: 77.04761904761905
- type: recall
value: 82.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 43.269230769230774
- type: f1
value: 36.20421245421245
- type: precision
value: 33.57371794871795
- type: recall
value: 43.269230769230774
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.0
- type: f1
value: 84.70666666666666
- type: precision
value: 83.23166666666665
- type: recall
value: 88.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.4
- type: f1
value: 72.54666666666667
- type: precision
value: 70.54318181818181
- type: recall
value: 77.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.60000000000001
- type: f1
value: 74.1588888888889
- type: precision
value: 72.30250000000001
- type: recall
value: 78.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.40566037735849
- type: f1
value: 66.82587328813744
- type: precision
value: 64.75039308176099
- type: recall
value: 72.40566037735849
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.8
- type: f1
value: 68.56357142857144
- type: precision
value: 66.3178822055138
- type: recall
value: 73.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.78832116788321
- type: f1
value: 89.3552311435523
- type: precision
value: 88.20559610705597
- type: recall
value: 91.78832116788321
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.3
- type: f1
value: 69.05085581085581
- type: precision
value: 66.955
- type: recall
value: 74.3
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.896
- type: map_at_10
value: 8.993
- type: map_at_100
value: 14.133999999999999
- type: map_at_1000
value: 15.668000000000001
- type: map_at_3
value: 5.862
- type: map_at_5
value: 7.17
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 42.931000000000004
- type: mrr_at_100
value: 44.81
- type: mrr_at_1000
value: 44.81
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 41.701
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 21.163
- type: ndcg_at_100
value: 33.306000000000004
- type: ndcg_at_1000
value: 45.275999999999996
- type: ndcg_at_3
value: 25.685999999999996
- type: ndcg_at_5
value: 23.732
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 17.755000000000003
- type: precision_at_100
value: 6.938999999999999
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 25.85
- type: precision_at_5
value: 23.265
- type: recall_at_1
value: 2.896
- type: recall_at_10
value: 13.333999999999998
- type: recall_at_100
value: 43.517
- type: recall_at_1000
value: 79.836
- type: recall_at_3
value: 6.306000000000001
- type: recall_at_5
value: 8.825
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.3874
- type: ap
value: 13.829909072469423
- type: f1
value: 53.54534203543492
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.62026032823995
- type: f1
value: 62.85251350485221
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 33.21527881409797
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.97943613280086
- type: cos_sim_ap
value: 70.75454316885921
- type: cos_sim_f1
value: 65.38274012676743
- type: cos_sim_precision
value: 60.761214318078835
- type: cos_sim_recall
value: 70.76517150395777
- type: dot_accuracy
value: 79.0546581629612
- type: dot_ap
value: 47.3197121792147
- type: dot_f1
value: 49.20106524633821
- type: dot_precision
value: 42.45499808502489
- type: dot_recall
value: 58.49604221635884
- type: euclidean_accuracy
value: 85.08076533349228
- type: euclidean_ap
value: 70.95016106374474
- type: euclidean_f1
value: 65.43987900176455
- type: euclidean_precision
value: 62.64478764478765
- type: euclidean_recall
value: 68.49604221635884
- type: manhattan_accuracy
value: 84.93771234428085
- type: manhattan_ap
value: 70.63668388755362
- type: manhattan_f1
value: 65.23895401262398
- type: manhattan_precision
value: 56.946084218811485
- type: manhattan_recall
value: 76.35883905013192
- type: max_accuracy
value: 85.08076533349228
- type: max_ap
value: 70.95016106374474
- type: max_f1
value: 65.43987900176455
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.69096130709822
- type: cos_sim_ap
value: 84.82526278228542
- type: cos_sim_f1
value: 77.65485060585536
- type: cos_sim_precision
value: 75.94582658619167
- type: cos_sim_recall
value: 79.44256236526024
- type: dot_accuracy
value: 80.97954748321496
- type: dot_ap
value: 64.81642914145866
- type: dot_f1
value: 60.631996987229975
- type: dot_precision
value: 54.5897293631712
- type: dot_recall
value: 68.17831844779796
- type: euclidean_accuracy
value: 88.6987231730508
- type: euclidean_ap
value: 84.80003825477253
- type: euclidean_f1
value: 77.67194179854496
- type: euclidean_precision
value: 75.7128235122094
- type: euclidean_recall
value: 79.73514012935017
- type: manhattan_accuracy
value: 88.62692591298949
- type: manhattan_ap
value: 84.80451408255276
- type: manhattan_f1
value: 77.69888949572183
- type: manhattan_precision
value: 73.70311528631622
- type: manhattan_recall
value: 82.15275639051433
- type: max_accuracy
value: 88.6987231730508
- type: max_ap
value: 84.82526278228542
- type: max_f1
value: 77.69888949572183
- task:
type: BitextMining
dataset:
name: MTEB BUCC.v2 (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: 1739dc11ffe9b7bfccd7f3d585aeb4c544fc6677
metrics:
- type: accuracy
value: 95.72566678212678
- type: f1
value: 94.42443135896548
- type: main_score
value: 94.42443135896548
- type: precision
value: 93.80868260016165
- type: recall
value: 95.72566678212678
- task:
type: Retrieval
dataset:
name: MTEB BelebeleRetrieval (rus_Cyrl-rus_Cyrl)
type: facebook/belebele
config: rus_Cyrl-rus_Cyrl
split: test
revision: 75b399394a9803252cfec289d103de462763db7c
metrics:
- type: main_score
value: 92.23599999999999
- type: map_at_1
value: 87.111
- type: map_at_10
value: 90.717
- type: map_at_100
value: 90.879
- type: map_at_1000
value: 90.881
- type: map_at_20
value: 90.849
- type: map_at_3
value: 90.074
- type: map_at_5
value: 90.535
- type: mrr_at_1
value: 87.1111111111111
- type: mrr_at_10
value: 90.7173721340388
- type: mrr_at_100
value: 90.87859682638407
- type: mrr_at_1000
value: 90.88093553612326
- type: mrr_at_20
value: 90.84863516113515
- type: mrr_at_3
value: 90.07407407407409
- type: mrr_at_5
value: 90.53518518518521
- type: nauc_map_at_1000_diff1
value: 92.37373187280554
- type: nauc_map_at_1000_max
value: 79.90465445423249
- type: nauc_map_at_1000_std
value: -0.6220290556185463
- type: nauc_map_at_100_diff1
value: 92.37386697345335
- type: nauc_map_at_100_max
value: 79.90991577223959
- type: nauc_map_at_100_std
value: -0.602247514642845
- type: nauc_map_at_10_diff1
value: 92.30907447072467
- type: nauc_map_at_10_max
value: 79.86831935337598
- type: nauc_map_at_10_std
value: -0.7455191860719699
- type: nauc_map_at_1_diff1
value: 93.29828518358822
- type: nauc_map_at_1_max
value: 78.69539619887887
- type: nauc_map_at_1_std
value: -4.097150817605763
- type: nauc_map_at_20_diff1
value: 92.38414149703077
- type: nauc_map_at_20_max
value: 79.94789814504661
- type: nauc_map_at_20_std
value: -0.3928031130400773
- type: nauc_map_at_3_diff1
value: 92.21688899306734
- type: nauc_map_at_3_max
value: 80.34586671780885
- type: nauc_map_at_3_std
value: 0.24088319695435909
- type: nauc_map_at_5_diff1
value: 92.27931726042982
- type: nauc_map_at_5_max
value: 79.99198834003367
- type: nauc_map_at_5_std
value: -0.6296366922840796
- type: nauc_mrr_at_1000_diff1
value: 92.37373187280554
- type: nauc_mrr_at_1000_max
value: 79.90465445423249
- type: nauc_mrr_at_1000_std
value: -0.6220290556185463
- type: nauc_mrr_at_100_diff1
value: 92.37386697345335
- type: nauc_mrr_at_100_max
value: 79.90991577223959
- type: nauc_mrr_at_100_std
value: -0.602247514642845
- type: nauc_mrr_at_10_diff1
value: 92.30907447072467
- type: nauc_mrr_at_10_max
value: 79.86831935337598
- type: nauc_mrr_at_10_std
value: -0.7455191860719699
- type: nauc_mrr_at_1_diff1
value: 93.29828518358822
- type: nauc_mrr_at_1_max
value: 78.69539619887887
- type: nauc_mrr_at_1_std
value: -4.097150817605763
- type: nauc_mrr_at_20_diff1
value: 92.38414149703077
- type: nauc_mrr_at_20_max
value: 79.94789814504661
- type: nauc_mrr_at_20_std
value: -0.3928031130400773
- type: nauc_mrr_at_3_diff1
value: 92.21688899306734
- type: nauc_mrr_at_3_max
value: 80.34586671780885
- type: nauc_mrr_at_3_std
value: 0.24088319695435909
- type: nauc_mrr_at_5_diff1
value: 92.27931726042982
- type: nauc_mrr_at_5_max
value: 79.99198834003367
- type: nauc_mrr_at_5_std
value: -0.6296366922840796
- type: nauc_ndcg_at_1000_diff1
value: 92.30526497646306
- type: nauc_ndcg_at_1000_max
value: 80.12734537480418
- type: nauc_ndcg_at_1000_std
value: 0.22849408935578744
- type: nauc_ndcg_at_100_diff1
value: 92.31347123202318
- type: nauc_ndcg_at_100_max
value: 80.29207038703142
- type: nauc_ndcg_at_100_std
value: 0.816825944406239
- type: nauc_ndcg_at_10_diff1
value: 92.05430189845808
- type: nauc_ndcg_at_10_max
value: 80.16515667442968
- type: nauc_ndcg_at_10_std
value: 0.7486447532544893
- type: nauc_ndcg_at_1_diff1
value: 93.29828518358822
- type: nauc_ndcg_at_1_max
value: 78.69539619887887
- type: nauc_ndcg_at_1_std
value: -4.097150817605763
- type: nauc_ndcg_at_20_diff1
value: 92.40147868825079
- type: nauc_ndcg_at_20_max
value: 80.5117307181802
- type: nauc_ndcg_at_20_std
value: 2.0431351539517033
- type: nauc_ndcg_at_3_diff1
value: 91.88894444422789
- type: nauc_ndcg_at_3_max
value: 81.09256084196045
- type: nauc_ndcg_at_3_std
value: 2.422705909643621
- type: nauc_ndcg_at_5_diff1
value: 91.99711052955728
- type: nauc_ndcg_at_5_max
value: 80.46996334573979
- type: nauc_ndcg_at_5_std
value: 0.9086986899040708
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_100_diff1
value: 93.46405228758012
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 70.71661998132774
- type: nauc_precision_at_10_diff1
value: 90.13938908896874
- type: nauc_precision_at_10_max
value: 82.21121782046167
- type: nauc_precision_at_10_std
value: 13.075230092036083
- type: nauc_precision_at_1_diff1
value: 93.29828518358822
- type: nauc_precision_at_1_max
value: 78.69539619887887
- type: nauc_precision_at_1_std
value: -4.097150817605763
- type: nauc_precision_at_20_diff1
value: 94.9723479135242
- type: nauc_precision_at_20_max
value: 91.04000574588684
- type: nauc_precision_at_20_std
value: 48.764634058749586
- type: nauc_precision_at_3_diff1
value: 90.52690041533852
- type: nauc_precision_at_3_max
value: 84.35075179497126
- type: nauc_precision_at_3_std
value: 12.036768730480507
- type: nauc_precision_at_5_diff1
value: 90.44234360410769
- type: nauc_precision_at_5_max
value: 83.21895424836558
- type: nauc_precision_at_5_std
value: 9.974323062558037
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 93.46405228758294
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: 70.71661998132666
- type: nauc_recall_at_10_diff1
value: 90.13938908896864
- type: nauc_recall_at_10_max
value: 82.21121782046124
- type: nauc_recall_at_10_std
value: 13.075230092036506
- type: nauc_recall_at_1_diff1
value: 93.29828518358822
- type: nauc_recall_at_1_max
value: 78.69539619887887
- type: nauc_recall_at_1_std
value: -4.097150817605763
- type: nauc_recall_at_20_diff1
value: 94.97234791352489
- type: nauc_recall_at_20_max
value: 91.04000574588774
- type: nauc_recall_at_20_std
value: 48.764634058752065
- type: nauc_recall_at_3_diff1
value: 90.52690041533845
- type: nauc_recall_at_3_max
value: 84.35075179497079
- type: nauc_recall_at_3_std
value: 12.036768730480583
- type: nauc_recall_at_5_diff1
value: 90.44234360410861
- type: nauc_recall_at_5_max
value: 83.21895424836595
- type: nauc_recall_at_5_std
value: 9.974323062558147
- type: ndcg_at_1
value: 87.111
- type: ndcg_at_10
value: 92.23599999999999
- type: ndcg_at_100
value: 92.87100000000001
- type: ndcg_at_1000
value: 92.928
- type: ndcg_at_20
value: 92.67699999999999
- type: ndcg_at_3
value: 90.973
- type: ndcg_at_5
value: 91.801
- type: precision_at_1
value: 87.111
- type: precision_at_10
value: 9.689
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.928
- type: precision_at_3
value: 31.185000000000002
- type: precision_at_5
value: 19.111
- type: recall_at_1
value: 87.111
- type: recall_at_10
value: 96.88900000000001
- type: recall_at_100
value: 99.556
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 98.556
- type: recall_at_3
value: 93.556
- type: recall_at_5
value: 95.556
- task:
type: Retrieval
dataset:
name: MTEB BelebeleRetrieval (rus_Cyrl-eng_Latn)
type: facebook/belebele
config: rus_Cyrl-eng_Latn
split: test
revision: 75b399394a9803252cfec289d103de462763db7c
metrics:
- type: main_score
value: 86.615
- type: map_at_1
value: 78.0
- type: map_at_10
value: 83.822
- type: map_at_100
value: 84.033
- type: map_at_1000
value: 84.03500000000001
- type: map_at_20
value: 83.967
- type: map_at_3
value: 82.315
- type: map_at_5
value: 83.337
- type: mrr_at_1
value: 78.0
- type: mrr_at_10
value: 83.82213403880073
- type: mrr_at_100
value: 84.03281327810801
- type: mrr_at_1000
value: 84.03460051000452
- type: mrr_at_20
value: 83.9673773122303
- type: mrr_at_3
value: 82.31481481481484
- type: mrr_at_5
value: 83.33703703703708
- type: nauc_map_at_1000_diff1
value: 80.78467576987832
- type: nauc_map_at_1000_max
value: 51.41718334647604
- type: nauc_map_at_1000_std
value: -16.23873782768812
- type: nauc_map_at_100_diff1
value: 80.78490931240695
- type: nauc_map_at_100_max
value: 51.41504597713061
- type: nauc_map_at_100_std
value: -16.23538559475366
- type: nauc_map_at_10_diff1
value: 80.73989245374868
- type: nauc_map_at_10_max
value: 51.43026079433827
- type: nauc_map_at_10_std
value: -16.13414330905897
- type: nauc_map_at_1_diff1
value: 82.36966971144186
- type: nauc_map_at_1_max
value: 52.988877039509916
- type: nauc_map_at_1_std
value: -15.145824639495546
- type: nauc_map_at_20_diff1
value: 80.75923781626145
- type: nauc_map_at_20_max
value: 51.40181079374639
- type: nauc_map_at_20_std
value: -16.260566097377165
- type: nauc_map_at_3_diff1
value: 80.65242627065471
- type: nauc_map_at_3_max
value: 50.623980338841214
- type: nauc_map_at_3_std
value: -16.818343442794294
- type: nauc_map_at_5_diff1
value: 80.45976387021862
- type: nauc_map_at_5_max
value: 51.533621728445866
- type: nauc_map_at_5_std
value: -16.279891536945815
- type: nauc_mrr_at_1000_diff1
value: 80.78467576987832
- type: nauc_mrr_at_1000_max
value: 51.41718334647604
- type: nauc_mrr_at_1000_std
value: -16.23873782768812
- type: nauc_mrr_at_100_diff1
value: 80.78490931240695
- type: nauc_mrr_at_100_max
value: 51.41504597713061
- type: nauc_mrr_at_100_std
value: -16.23538559475366
- type: nauc_mrr_at_10_diff1
value: 80.73989245374868
- type: nauc_mrr_at_10_max
value: 51.43026079433827
- type: nauc_mrr_at_10_std
value: -16.13414330905897
- type: nauc_mrr_at_1_diff1
value: 82.36966971144186
- type: nauc_mrr_at_1_max
value: 52.988877039509916
- type: nauc_mrr_at_1_std
value: -15.145824639495546
- type: nauc_mrr_at_20_diff1
value: 80.75923781626145
- type: nauc_mrr_at_20_max
value: 51.40181079374639
- type: nauc_mrr_at_20_std
value: -16.260566097377165
- type: nauc_mrr_at_3_diff1
value: 80.65242627065471
- type: nauc_mrr_at_3_max
value: 50.623980338841214
- type: nauc_mrr_at_3_std
value: -16.818343442794294
- type: nauc_mrr_at_5_diff1
value: 80.45976387021862
- type: nauc_mrr_at_5_max
value: 51.533621728445866
- type: nauc_mrr_at_5_std
value: -16.279891536945815
- type: nauc_ndcg_at_1000_diff1
value: 80.60009446938174
- type: nauc_ndcg_at_1000_max
value: 51.381708043594166
- type: nauc_ndcg_at_1000_std
value: -16.054256944160848
- type: nauc_ndcg_at_100_diff1
value: 80.58971462930421
- type: nauc_ndcg_at_100_max
value: 51.25436917735444
- type: nauc_ndcg_at_100_std
value: -15.862944972269894
- type: nauc_ndcg_at_10_diff1
value: 80.37967179454489
- type: nauc_ndcg_at_10_max
value: 51.590394257251006
- type: nauc_ndcg_at_10_std
value: -15.489799384799591
- type: nauc_ndcg_at_1_diff1
value: 82.36966971144186
- type: nauc_ndcg_at_1_max
value: 52.988877039509916
- type: nauc_ndcg_at_1_std
value: -15.145824639495546
- type: nauc_ndcg_at_20_diff1
value: 80.40299527470081
- type: nauc_ndcg_at_20_max
value: 51.395132284307074
- type: nauc_ndcg_at_20_std
value: -15.906165526937203
- type: nauc_ndcg_at_3_diff1
value: 80.10347913649302
- type: nauc_ndcg_at_3_max
value: 50.018431855573844
- type: nauc_ndcg_at_3_std
value: -17.12743750163884
- type: nauc_ndcg_at_5_diff1
value: 79.65918647776613
- type: nauc_ndcg_at_5_max
value: 51.76710880330806
- type: nauc_ndcg_at_5_std
value: -16.071901882035945
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_100_diff1
value: 77.41596638655459
- type: nauc_precision_at_100_max
value: 22.572362278246565
- type: nauc_precision_at_100_std
value: 26.890756302525716
- type: nauc_precision_at_10_diff1
value: 77.82112845138009
- type: nauc_precision_at_10_max
value: 54.2550353474723
- type: nauc_precision_at_10_std
value: -7.492997198879646
- type: nauc_precision_at_1_diff1
value: 82.36966971144186
- type: nauc_precision_at_1_max
value: 52.988877039509916
- type: nauc_precision_at_1_std
value: -15.145824639495546
- type: nauc_precision_at_20_diff1
value: 75.89091192032318
- type: nauc_precision_at_20_max
value: 52.03275754746293
- type: nauc_precision_at_20_std
value: -7.8411920323686175
- type: nauc_precision_at_3_diff1
value: 78.0256020644638
- type: nauc_precision_at_3_max
value: 47.80353641248523
- type: nauc_precision_at_3_std
value: -18.181625255723503
- type: nauc_precision_at_5_diff1
value: 75.21583976056174
- type: nauc_precision_at_5_max
value: 53.716281032960765
- type: nauc_precision_at_5_std
value: -14.411700753360812
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 77.4159663865523
- type: nauc_recall_at_100_max
value: 22.57236227824646
- type: nauc_recall_at_100_std
value: 26.89075630252133
- type: nauc_recall_at_10_diff1
value: 77.82112845138037
- type: nauc_recall_at_10_max
value: 54.25503534747204
- type: nauc_recall_at_10_std
value: -7.492997198879666
- type: nauc_recall_at_1_diff1
value: 82.36966971144186
- type: nauc_recall_at_1_max
value: 52.988877039509916
- type: nauc_recall_at_1_std
value: -15.145824639495546
- type: nauc_recall_at_20_diff1
value: 75.89091192032362
- type: nauc_recall_at_20_max
value: 52.032757547463184
- type: nauc_recall_at_20_std
value: -7.84119203236888
- type: nauc_recall_at_3_diff1
value: 78.02560206446354
- type: nauc_recall_at_3_max
value: 47.80353641248526
- type: nauc_recall_at_3_std
value: -18.181625255723656
- type: nauc_recall_at_5_diff1
value: 75.21583976056185
- type: nauc_recall_at_5_max
value: 53.71628103296118
- type: nauc_recall_at_5_std
value: -14.411700753360634
- type: ndcg_at_1
value: 78.0
- type: ndcg_at_10
value: 86.615
- type: ndcg_at_100
value: 87.558
- type: ndcg_at_1000
value: 87.613
- type: ndcg_at_20
value: 87.128
- type: ndcg_at_3
value: 83.639
- type: ndcg_at_5
value: 85.475
- type: precision_at_1
value: 78.0
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.867
- type: precision_at_3
value: 29.148000000000003
- type: precision_at_5
value: 18.378
- type: recall_at_1
value: 78.0
- type: recall_at_10
value: 95.333
- type: recall_at_100
value: 99.556
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 97.333
- type: recall_at_3
value: 87.444
- type: recall_at_5
value: 91.889
- task:
type: Retrieval
dataset:
name: MTEB BelebeleRetrieval (eng_Latn-rus_Cyrl)
type: facebook/belebele
config: eng_Latn-rus_Cyrl
split: test
revision: 75b399394a9803252cfec289d103de462763db7c
metrics:
- type: main_score
value: 82.748
- type: map_at_1
value: 73.444
- type: map_at_10
value: 79.857
- type: map_at_100
value: 80.219
- type: map_at_1000
value: 80.22500000000001
- type: map_at_20
value: 80.10300000000001
- type: map_at_3
value: 78.593
- type: map_at_5
value: 79.515
- type: mrr_at_1
value: 73.44444444444444
- type: mrr_at_10
value: 79.85705467372136
- type: mrr_at_100
value: 80.21942320422542
- type: mrr_at_1000
value: 80.2245364027152
- type: mrr_at_20
value: 80.10273201266493
- type: mrr_at_3
value: 78.59259259259258
- type: mrr_at_5
value: 79.51481481481483
- type: nauc_map_at_1000_diff1
value: 83.69682652271125
- type: nauc_map_at_1000_max
value: 61.70131708044767
- type: nauc_map_at_1000_std
value: 9.345825405274955
- type: nauc_map_at_100_diff1
value: 83.68924820523492
- type: nauc_map_at_100_max
value: 61.6965735573098
- type: nauc_map_at_100_std
value: 9.366132859525775
- type: nauc_map_at_10_diff1
value: 83.61802964269985
- type: nauc_map_at_10_max
value: 61.74274476167882
- type: nauc_map_at_10_std
value: 9.504060995819101
- type: nauc_map_at_1_diff1
value: 86.37079221403225
- type: nauc_map_at_1_max
value: 61.856861655370686
- type: nauc_map_at_1_std
value: 4.708911881992707
- type: nauc_map_at_20_diff1
value: 83.62920965453047
- type: nauc_map_at_20_max
value: 61.761029350326965
- type: nauc_map_at_20_std
value: 9.572978651118351
- type: nauc_map_at_3_diff1
value: 83.66665673154306
- type: nauc_map_at_3_max
value: 61.13597610587937
- type: nauc_map_at_3_std
value: 9.309596395240598
- type: nauc_map_at_5_diff1
value: 83.52307226455358
- type: nauc_map_at_5_max
value: 61.59405758027573
- type: nauc_map_at_5_std
value: 9.320025423287671
- type: nauc_mrr_at_1000_diff1
value: 83.69682652271125
- type: nauc_mrr_at_1000_max
value: 61.70131708044767
- type: nauc_mrr_at_1000_std
value: 9.345825405274955
- type: nauc_mrr_at_100_diff1
value: 83.68924820523492
- type: nauc_mrr_at_100_max
value: 61.6965735573098
- type: nauc_mrr_at_100_std
value: 9.366132859525775
- type: nauc_mrr_at_10_diff1
value: 83.61802964269985
- type: nauc_mrr_at_10_max
value: 61.74274476167882
- type: nauc_mrr_at_10_std
value: 9.504060995819101
- type: nauc_mrr_at_1_diff1
value: 86.37079221403225
- type: nauc_mrr_at_1_max
value: 61.856861655370686
- type: nauc_mrr_at_1_std
value: 4.708911881992707
- type: nauc_mrr_at_20_diff1
value: 83.62920965453047
- type: nauc_mrr_at_20_max
value: 61.761029350326965
- type: nauc_mrr_at_20_std
value: 9.572978651118351
- type: nauc_mrr_at_3_diff1
value: 83.66665673154306
- type: nauc_mrr_at_3_max
value: 61.13597610587937
- type: nauc_mrr_at_3_std
value: 9.309596395240598
- type: nauc_mrr_at_5_diff1
value: 83.52307226455358
- type: nauc_mrr_at_5_max
value: 61.59405758027573
- type: nauc_mrr_at_5_std
value: 9.320025423287671
- type: nauc_ndcg_at_1000_diff1
value: 83.24213186482201
- type: nauc_ndcg_at_1000_max
value: 61.77629841787496
- type: nauc_ndcg_at_1000_std
value: 10.332527869705851
- type: nauc_ndcg_at_100_diff1
value: 83.06815820441027
- type: nauc_ndcg_at_100_max
value: 61.6947181864579
- type: nauc_ndcg_at_100_std
value: 10.888922975877316
- type: nauc_ndcg_at_10_diff1
value: 82.58238431386295
- type: nauc_ndcg_at_10_max
value: 62.10333663935709
- type: nauc_ndcg_at_10_std
value: 11.746030330958174
- type: nauc_ndcg_at_1_diff1
value: 86.37079221403225
- type: nauc_ndcg_at_1_max
value: 61.856861655370686
- type: nauc_ndcg_at_1_std
value: 4.708911881992707
- type: nauc_ndcg_at_20_diff1
value: 82.67888324480154
- type: nauc_ndcg_at_20_max
value: 62.28124917486516
- type: nauc_ndcg_at_20_std
value: 12.343058917563914
- type: nauc_ndcg_at_3_diff1
value: 82.71277373710663
- type: nauc_ndcg_at_3_max
value: 60.66677922989939
- type: nauc_ndcg_at_3_std
value: 10.843633736296528
- type: nauc_ndcg_at_5_diff1
value: 82.34691124846786
- type: nauc_ndcg_at_5_max
value: 61.605961382062716
- type: nauc_ndcg_at_5_std
value: 11.129011077702602
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_100_diff1
value: 60.93103908230194
- type: nauc_precision_at_100_max
value: 52.621048419370695
- type: nauc_precision_at_100_std
value: 85.60090702947922
- type: nauc_precision_at_10_diff1
value: 76.26517273576093
- type: nauc_precision_at_10_max
value: 65.2013694366636
- type: nauc_precision_at_10_std
value: 26.50357920946173
- type: nauc_precision_at_1_diff1
value: 86.37079221403225
- type: nauc_precision_at_1_max
value: 61.856861655370686
- type: nauc_precision_at_1_std
value: 4.708911881992707
- type: nauc_precision_at_20_diff1
value: 73.47946930710295
- type: nauc_precision_at_20_max
value: 70.19520986689217
- type: nauc_precision_at_20_std
value: 45.93186111653967
- type: nauc_precision_at_3_diff1
value: 79.02026879450186
- type: nauc_precision_at_3_max
value: 58.75074624692399
- type: nauc_precision_at_3_std
value: 16.740684654251037
- type: nauc_precision_at_5_diff1
value: 76.47585662281637
- type: nauc_precision_at_5_max
value: 61.86270922013127
- type: nauc_precision_at_5_std
value: 20.1833625455035
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 60.93103908229921
- type: nauc_recall_at_100_max
value: 52.62104841936668
- type: nauc_recall_at_100_std
value: 85.60090702947748
- type: nauc_recall_at_10_diff1
value: 76.26517273576097
- type: nauc_recall_at_10_max
value: 65.20136943666347
- type: nauc_recall_at_10_std
value: 26.50357920946174
- type: nauc_recall_at_1_diff1
value: 86.37079221403225
- type: nauc_recall_at_1_max
value: 61.856861655370686
- type: nauc_recall_at_1_std
value: 4.708911881992707
- type: nauc_recall_at_20_diff1
value: 73.47946930710269
- type: nauc_recall_at_20_max
value: 70.19520986689254
- type: nauc_recall_at_20_std
value: 45.93186111653943
- type: nauc_recall_at_3_diff1
value: 79.02026879450173
- type: nauc_recall_at_3_max
value: 58.750746246923924
- type: nauc_recall_at_3_std
value: 16.740684654251076
- type: nauc_recall_at_5_diff1
value: 76.4758566228162
- type: nauc_recall_at_5_max
value: 61.862709220131386
- type: nauc_recall_at_5_std
value: 20.18336254550361
- type: ndcg_at_1
value: 73.444
- type: ndcg_at_10
value: 82.748
- type: ndcg_at_100
value: 84.416
- type: ndcg_at_1000
value: 84.52300000000001
- type: ndcg_at_20
value: 83.646
- type: ndcg_at_3
value: 80.267
- type: ndcg_at_5
value: 81.922
- type: precision_at_1
value: 73.444
- type: precision_at_10
value: 9.167
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.761
- type: precision_at_3
value: 28.37
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 73.444
- type: recall_at_10
value: 91.667
- type: recall_at_100
value: 99.222
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 95.222
- type: recall_at_3
value: 85.111
- type: recall_at_5
value: 89.11099999999999
- task:
type: BitextMining
dataset:
name: MTEB BibleNLPBitextMining (eng_Latn-rus_Cyrl)
type: davidstap/biblenlp-corpus-mmteb
config: eng_Latn-rus_Cyrl
split: train
revision: 264a18480c529d9e922483839b4b9758e690b762
metrics:
- type: accuracy
value: 96.875
- type: f1
value: 95.83333333333333
- type: main_score
value: 95.83333333333333
- type: precision
value: 95.3125
- type: recall
value: 96.875
- task:
type: BitextMining
dataset:
name: MTEB BibleNLPBitextMining (rus_Cyrl-eng_Latn)
type: davidstap/biblenlp-corpus-mmteb
config: rus_Cyrl-eng_Latn
split: train
revision: 264a18480c529d9e922483839b4b9758e690b762
metrics:
- type: accuracy
value: 88.671875
- type: f1
value: 85.3515625
- type: main_score
value: 85.3515625
- type: precision
value: 83.85416666666667
- type: recall
value: 88.671875
- task:
type: MultilabelClassification
dataset:
name: MTEB CEDRClassification (default)
type: ai-forever/cedr-classification
config: default
split: test
revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4
metrics:
- type: accuracy
value: 40.06907545164719
- type: f1
value: 26.285000550712407
- type: lrap
value: 64.4280021253997
- type: main_score
value: 40.06907545164719
- task:
type: Classification
dataset:
name: MTEB CyrillicTurkicLangClassification (default)
type: tatiana-merz/cyrillic_turkic_langs
config: default
split: test
revision: e42d330f33d65b7b72dfd408883daf1661f06f18
metrics:
- type: accuracy
value: 43.3447265625
- type: f1
value: 40.08400146827895
- type: f1_weighted
value: 40.08499428040896
- type: main_score
value: 43.3447265625
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ace_Arab-rus_Cyrl)
type: mteb/flores
config: ace_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 6.225296442687747
- type: f1
value: 5.5190958860075
- type: main_score
value: 5.5190958860075
- type: precision
value: 5.3752643758000005
- type: recall
value: 6.225296442687747
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bam_Latn-rus_Cyrl)
type: mteb/flores
config: bam_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 68.37944664031622
- type: f1
value: 64.54819836666252
- type: main_score
value: 64.54819836666252
- type: precision
value: 63.07479233454916
- type: recall
value: 68.37944664031622
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (dzo_Tibt-rus_Cyrl)
type: mteb/flores
config: dzo_Tibt-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 0.09881422924901186
- type: f1
value: 0.00019509225912934226
- type: main_score
value: 0.00019509225912934226
- type: precision
value: 9.76425190207627e-05
- type: recall
value: 0.09881422924901186
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (hin_Deva-rus_Cyrl)
type: mteb/flores
config: hin_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.60474308300395
- type: f1
value: 99.47299077733861
- type: main_score
value: 99.47299077733861
- type: precision
value: 99.40711462450594
- type: recall
value: 99.60474308300395
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (khm_Khmr-rus_Cyrl)
type: mteb/flores
config: khm_Khmr-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 88.83399209486166
- type: f1
value: 87.71151056318254
- type: main_score
value: 87.71151056318254
- type: precision
value: 87.32012500709193
- type: recall
value: 88.83399209486166
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mag_Deva-rus_Cyrl)
type: mteb/flores
config: mag_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.02371541501977
- type: f1
value: 97.7239789196311
- type: main_score
value: 97.7239789196311
- type: precision
value: 97.61904761904762
- type: recall
value: 98.02371541501977
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (pap_Latn-rus_Cyrl)
type: mteb/flores
config: pap_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.0711462450593
- type: f1
value: 93.68187806922984
- type: main_score
value: 93.68187806922984
- type: precision
value: 93.58925452707051
- type: recall
value: 94.0711462450593
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (sot_Latn-rus_Cyrl)
type: mteb/flores
config: sot_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 90.9090909090909
- type: f1
value: 89.23171936758892
- type: main_score
value: 89.23171936758892
- type: precision
value: 88.51790014083866
- type: recall
value: 90.9090909090909
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tur_Latn-rus_Cyrl)
type: mteb/flores
config: tur_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 98.9459815546772
- type: main_score
value: 98.9459815546772
- type: precision
value: 98.81422924901186
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ace_Latn-rus_Cyrl)
type: mteb/flores
config: ace_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 66.10671936758892
- type: f1
value: 63.81888256297873
- type: main_score
value: 63.81888256297873
- type: precision
value: 63.01614067933451
- type: recall
value: 66.10671936758892
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ban_Latn-rus_Cyrl)
type: mteb/flores
config: ban_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 79.44664031620553
- type: f1
value: 77.6311962082713
- type: main_score
value: 77.6311962082713
- type: precision
value: 76.93977931929739
- type: recall
value: 79.44664031620553
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ell_Grek-rus_Cyrl)
type: mteb/flores
config: ell_Grek-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.40711462450594
- type: f1
value: 99.2094861660079
- type: main_score
value: 99.2094861660079
- type: precision
value: 99.1106719367589
- type: recall
value: 99.40711462450594
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (hne_Deva-rus_Cyrl)
type: mteb/flores
config: hne_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.83794466403161
- type: f1
value: 96.25352907961603
- type: main_score
value: 96.25352907961603
- type: precision
value: 96.02155091285526
- type: recall
value: 96.83794466403161
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kik_Latn-rus_Cyrl)
type: mteb/flores
config: kik_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 76.28458498023716
- type: f1
value: 73.5596919895859
- type: main_score
value: 73.5596919895859
- type: precision
value: 72.40900759055246
- type: recall
value: 76.28458498023716
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mai_Deva-rus_Cyrl)
type: mteb/flores
config: mai_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.72727272727273
- type: f1
value: 97.37812911725956
- type: main_score
value: 97.37812911725956
- type: precision
value: 97.26002258610953
- type: recall
value: 97.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (pbt_Arab-rus_Cyrl)
type: mteb/flores
config: pbt_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.0711462450593
- type: f1
value: 93.34700387331966
- type: main_score
value: 93.34700387331966
- type: precision
value: 93.06920556920556
- type: recall
value: 94.0711462450593
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (spa_Latn-rus_Cyrl)
type: mteb/flores
config: spa_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 98.9459815546772
- type: main_score
value: 98.9459815546772
- type: precision
value: 98.81422924901186
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (twi_Latn-rus_Cyrl)
type: mteb/flores
config: twi_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 80.73122529644269
- type: f1
value: 77.77434363246721
- type: main_score
value: 77.77434363246721
- type: precision
value: 76.54444287596462
- type: recall
value: 80.73122529644269
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (acm_Arab-rus_Cyrl)
type: mteb/flores
config: acm_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.56521739130434
- type: f1
value: 92.92490118577075
- type: main_score
value: 92.92490118577075
- type: precision
value: 92.16897233201581
- type: recall
value: 94.56521739130434
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bel_Cyrl-rus_Cyrl)
type: mteb/flores
config: bel_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 98.98550724637681
- type: main_score
value: 98.98550724637681
- type: precision
value: 98.88833992094862
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (eng_Latn-rus_Cyrl)
type: mteb/flores
config: eng_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.60474308300395
- type: f1
value: 99.4729907773386
- type: main_score
value: 99.4729907773386
- type: precision
value: 99.40711462450594
- type: recall
value: 99.60474308300395
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (hrv_Latn-rus_Cyrl)
type: mteb/flores
config: hrv_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 99.05138339920948
- type: main_score
value: 99.05138339920948
- type: precision
value: 99.00691699604744
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kin_Latn-rus_Cyrl)
type: mteb/flores
config: kin_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 88.2411067193676
- type: f1
value: 86.5485246227658
- type: main_score
value: 86.5485246227658
- type: precision
value: 85.90652101521667
- type: recall
value: 88.2411067193676
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mal_Mlym-rus_Cyrl)
type: mteb/flores
config: mal_Mlym-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.51778656126481
- type: f1
value: 98.07971014492753
- type: main_score
value: 98.07971014492753
- type: precision
value: 97.88372859025033
- type: recall
value: 98.51778656126481
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (pes_Arab-rus_Cyrl)
type: mteb/flores
config: pes_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.51778656126481
- type: f1
value: 98.0566534914361
- type: main_score
value: 98.0566534914361
- type: precision
value: 97.82608695652173
- type: recall
value: 98.51778656126481
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (srd_Latn-rus_Cyrl)
type: mteb/flores
config: srd_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 82.6086956521739
- type: f1
value: 80.9173470979821
- type: main_score
value: 80.9173470979821
- type: precision
value: 80.24468672882627
- type: recall
value: 82.6086956521739
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tzm_Tfng-rus_Cyrl)
type: mteb/flores
config: tzm_Tfng-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 7.41106719367589
- type: f1
value: 6.363562740945329
- type: main_score
value: 6.363562740945329
- type: precision
value: 6.090373175353411
- type: recall
value: 7.41106719367589
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (acq_Arab-rus_Cyrl)
type: mteb/flores
config: acq_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.25691699604744
- type: f1
value: 93.81422924901187
- type: main_score
value: 93.81422924901187
- type: precision
value: 93.14064558629775
- type: recall
value: 95.25691699604744
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bem_Latn-rus_Cyrl)
type: mteb/flores
config: bem_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 68.08300395256917
- type: f1
value: 65.01368772860867
- type: main_score
value: 65.01368772860867
- type: precision
value: 63.91052337510628
- type: recall
value: 68.08300395256917
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (epo_Latn-rus_Cyrl)
type: mteb/flores
config: epo_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.41897233201581
- type: f1
value: 98.17193675889328
- type: main_score
value: 98.17193675889328
- type: precision
value: 98.08210564139418
- type: recall
value: 98.41897233201581
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (hun_Latn-rus_Cyrl)
type: mteb/flores
config: hun_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.30830039525692
- type: f1
value: 99.1106719367589
- type: main_score
value: 99.1106719367589
- type: precision
value: 99.01185770750988
- type: recall
value: 99.30830039525692
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kir_Cyrl-rus_Cyrl)
type: mteb/flores
config: kir_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.5296442687747
- type: f1
value: 97.07549806364035
- type: main_score
value: 97.07549806364035
- type: precision
value: 96.90958498023716
- type: recall
value: 97.5296442687747
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mar_Deva-rus_Cyrl)
type: mteb/flores
config: mar_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.82608695652173
- type: f1
value: 97.44400527009222
- type: main_score
value: 97.44400527009222
- type: precision
value: 97.28966685488425
- type: recall
value: 97.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (plt_Latn-rus_Cyrl)
type: mteb/flores
config: plt_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 79.9407114624506
- type: f1
value: 78.3154177760691
- type: main_score
value: 78.3154177760691
- type: precision
value: 77.69877344877344
- type: recall
value: 79.9407114624506
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (srp_Cyrl-rus_Cyrl)
type: mteb/flores
config: srp_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.70355731225297
- type: f1
value: 99.60474308300395
- type: main_score
value: 99.60474308300395
- type: precision
value: 99.55533596837944
- type: recall
value: 99.70355731225297
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (uig_Arab-rus_Cyrl)
type: mteb/flores
config: uig_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 83.20158102766798
- type: f1
value: 81.44381923034585
- type: main_score
value: 81.44381923034585
- type: precision
value: 80.78813411582477
- type: recall
value: 83.20158102766798
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (aeb_Arab-rus_Cyrl)
type: mteb/flores
config: aeb_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.20553359683794
- type: f1
value: 88.75352907961603
- type: main_score
value: 88.75352907961603
- type: precision
value: 87.64328063241106
- type: recall
value: 91.20553359683794
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ben_Beng-rus_Cyrl)
type: mteb/flores
config: ben_Beng-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.60671936758894
- type: main_score
value: 98.60671936758894
- type: precision
value: 98.4766139657444
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (est_Latn-rus_Cyrl)
type: mteb/flores
config: est_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.24505928853755
- type: f1
value: 95.27417027417027
- type: main_score
value: 95.27417027417027
- type: precision
value: 94.84107378129117
- type: recall
value: 96.24505928853755
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (hye_Armn-rus_Cyrl)
type: mteb/flores
config: hye_Armn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.02371541501977
- type: f1
value: 97.67786561264822
- type: main_score
value: 97.67786561264822
- type: precision
value: 97.55839022637441
- type: recall
value: 98.02371541501977
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kmb_Latn-rus_Cyrl)
type: mteb/flores
config: kmb_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 46.047430830039524
- type: f1
value: 42.94464804804471
- type: main_score
value: 42.94464804804471
- type: precision
value: 41.9851895607238
- type: recall
value: 46.047430830039524
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (min_Arab-rus_Cyrl)
type: mteb/flores
config: min_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 3.9525691699604746
- type: f1
value: 3.402665192725756
- type: main_score
value: 3.402665192725756
- type: precision
value: 3.303787557740127
- type: recall
value: 3.9525691699604746
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (pol_Latn-rus_Cyrl)
type: mteb/flores
config: pol_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.60474308300395
- type: f1
value: 99.4729907773386
- type: main_score
value: 99.4729907773386
- type: precision
value: 99.40711462450594
- type: recall
value: 99.60474308300395
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ssw_Latn-rus_Cyrl)
type: mteb/flores
config: ssw_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 73.22134387351778
- type: f1
value: 70.43086049508975
- type: main_score
value: 70.43086049508975
- type: precision
value: 69.35312022355656
- type: recall
value: 73.22134387351778
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ukr_Cyrl-rus_Cyrl)
type: mteb/flores
config: ukr_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.90118577075098
- type: f1
value: 99.86824769433464
- type: main_score
value: 99.86824769433464
- type: precision
value: 99.85177865612648
- type: recall
value: 99.90118577075098
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (afr_Latn-rus_Cyrl)
type: mteb/flores
config: afr_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 98.9459815546772
- type: main_score
value: 98.9459815546772
- type: precision
value: 98.81422924901186
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bho_Deva-rus_Cyrl)
type: mteb/flores
config: bho_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.0711462450593
- type: f1
value: 93.12182382834557
- type: main_score
value: 93.12182382834557
- type: precision
value: 92.7523453232338
- type: recall
value: 94.0711462450593
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (eus_Latn-rus_Cyrl)
type: mteb/flores
config: eus_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.19367588932806
- type: f1
value: 91.23604975587072
- type: main_score
value: 91.23604975587072
- type: precision
value: 90.86697443588663
- type: recall
value: 92.19367588932806
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ibo_Latn-rus_Cyrl)
type: mteb/flores
config: ibo_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 82.21343873517787
- type: f1
value: 80.17901604858126
- type: main_score
value: 80.17901604858126
- type: precision
value: 79.3792284780028
- type: recall
value: 82.21343873517787
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kmr_Latn-rus_Cyrl)
type: mteb/flores
config: kmr_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 68.67588932806325
- type: f1
value: 66.72311714750278
- type: main_score
value: 66.72311714750278
- type: precision
value: 66.00178401554004
- type: recall
value: 68.67588932806325
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (min_Latn-rus_Cyrl)
type: mteb/flores
config: min_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 78.65612648221344
- type: f1
value: 76.26592719972166
- type: main_score
value: 76.26592719972166
- type: precision
value: 75.39980459997484
- type: recall
value: 78.65612648221344
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (por_Latn-rus_Cyrl)
type: mteb/flores
config: por_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.83794466403161
- type: f1
value: 95.9669678147939
- type: main_score
value: 95.9669678147939
- type: precision
value: 95.59453227931488
- type: recall
value: 96.83794466403161
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (sun_Latn-rus_Cyrl)
type: mteb/flores
config: sun_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.4901185770751
- type: f1
value: 91.66553983773662
- type: main_score
value: 91.66553983773662
- type: precision
value: 91.34530928009188
- type: recall
value: 92.4901185770751
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (umb_Latn-rus_Cyrl)
type: mteb/flores
config: umb_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 41.00790513833992
- type: f1
value: 38.21319326004483
- type: main_score
value: 38.21319326004483
- type: precision
value: 37.200655467675546
- type: recall
value: 41.00790513833992
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ajp_Arab-rus_Cyrl)
type: mteb/flores
config: ajp_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.35573122529645
- type: f1
value: 93.97233201581028
- type: main_score
value: 93.97233201581028
- type: precision
value: 93.33333333333333
- type: recall
value: 95.35573122529645
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bjn_Arab-rus_Cyrl)
type: mteb/flores
config: bjn_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 3.6561264822134385
- type: f1
value: 3.1071978056336484
- type: main_score
value: 3.1071978056336484
- type: precision
value: 3.0039741229718215
- type: recall
value: 3.6561264822134385
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ewe_Latn-rus_Cyrl)
type: mteb/flores
config: ewe_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 62.845849802371546
- type: f1
value: 59.82201175670472
- type: main_score
value: 59.82201175670472
- type: precision
value: 58.72629236362003
- type: recall
value: 62.845849802371546
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ilo_Latn-rus_Cyrl)
type: mteb/flores
config: ilo_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 83.10276679841897
- type: f1
value: 80.75065288987582
- type: main_score
value: 80.75065288987582
- type: precision
value: 79.80726451662179
- type: recall
value: 83.10276679841897
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (knc_Arab-rus_Cyrl)
type: mteb/flores
config: knc_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 10.079051383399209
- type: f1
value: 8.759282456080921
- type: main_score
value: 8.759282456080921
- type: precision
value: 8.474735138956142
- type: recall
value: 10.079051383399209
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mkd_Cyrl-rus_Cyrl)
type: mteb/flores
config: mkd_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.55072463768116
- type: main_score
value: 98.55072463768116
- type: precision
value: 98.36956521739131
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (prs_Arab-rus_Cyrl)
type: mteb/flores
config: prs_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.01185770750988
- type: f1
value: 98.68247694334651
- type: main_score
value: 98.68247694334651
- type: precision
value: 98.51778656126481
- type: recall
value: 99.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (swe_Latn-rus_Cyrl)
type: mteb/flores
config: swe_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.40711462450594
- type: f1
value: 99.22595520421606
- type: main_score
value: 99.22595520421606
- type: precision
value: 99.14361001317523
- type: recall
value: 99.40711462450594
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (urd_Arab-rus_Cyrl)
type: mteb/flores
config: urd_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.82608695652173
- type: f1
value: 97.25625823451911
- type: main_score
value: 97.25625823451911
- type: precision
value: 97.03063241106719
- type: recall
value: 97.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (aka_Latn-rus_Cyrl)
type: mteb/flores
config: aka_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 81.22529644268775
- type: f1
value: 77.94307687941227
- type: main_score
value: 77.94307687941227
- type: precision
value: 76.58782793293665
- type: recall
value: 81.22529644268775
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bjn_Latn-rus_Cyrl)
type: mteb/flores
config: bjn_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 85.27667984189723
- type: f1
value: 83.6869192829922
- type: main_score
value: 83.6869192829922
- type: precision
value: 83.08670670691656
- type: recall
value: 85.27667984189723
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (fao_Latn-rus_Cyrl)
type: mteb/flores
config: fao_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 80.9288537549407
- type: f1
value: 79.29806087454745
- type: main_score
value: 79.29806087454745
- type: precision
value: 78.71445871526987
- type: recall
value: 80.9288537549407
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ind_Latn-rus_Cyrl)
type: mteb/flores
config: ind_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.12252964426878
- type: f1
value: 97.5296442687747
- type: main_score
value: 97.5296442687747
- type: precision
value: 97.23320158102767
- type: recall
value: 98.12252964426878
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (knc_Latn-rus_Cyrl)
type: mteb/flores
config: knc_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 33.49802371541502
- type: f1
value: 32.02378215033989
- type: main_score
value: 32.02378215033989
- type: precision
value: 31.511356103747406
- type: recall
value: 33.49802371541502
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mlt_Latn-rus_Cyrl)
type: mteb/flores
config: mlt_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.40316205533597
- type: f1
value: 90.35317684386006
- type: main_score
value: 90.35317684386006
- type: precision
value: 89.94845939633488
- type: recall
value: 91.40316205533597
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (quy_Latn-rus_Cyrl)
type: mteb/flores
config: quy_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 40.612648221343875
- type: f1
value: 38.74337544712602
- type: main_score
value: 38.74337544712602
- type: precision
value: 38.133716022178575
- type: recall
value: 40.612648221343875
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (swh_Latn-rus_Cyrl)
type: mteb/flores
config: swh_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.13438735177866
- type: f1
value: 96.47435897435898
- type: main_score
value: 96.47435897435898
- type: precision
value: 96.18741765480895
- type: recall
value: 97.13438735177866
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (uzn_Latn-rus_Cyrl)
type: mteb/flores
config: uzn_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.83794466403161
- type: f1
value: 96.26355528529442
- type: main_score
value: 96.26355528529442
- type: precision
value: 96.0501756697409
- type: recall
value: 96.83794466403161
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (als_Latn-rus_Cyrl)
type: mteb/flores
config: als_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.6907114624506
- type: main_score
value: 98.6907114624506
- type: precision
value: 98.6142480707698
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bod_Tibt-rus_Cyrl)
type: mteb/flores
config: bod_Tibt-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 1.0869565217391304
- type: f1
value: 0.9224649610442628
- type: main_score
value: 0.9224649610442628
- type: precision
value: 0.8894275740459898
- type: recall
value: 1.0869565217391304
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (fij_Latn-rus_Cyrl)
type: mteb/flores
config: fij_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 63.24110671936759
- type: f1
value: 60.373189068189525
- type: main_score
value: 60.373189068189525
- type: precision
value: 59.32326368115546
- type: recall
value: 63.24110671936759
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (isl_Latn-rus_Cyrl)
type: mteb/flores
config: isl_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 89.03162055335969
- type: f1
value: 87.3102634715907
- type: main_score
value: 87.3102634715907
- type: precision
value: 86.65991814698712
- type: recall
value: 89.03162055335969
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kon_Latn-rus_Cyrl)
type: mteb/flores
config: kon_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 73.91304347826086
- type: f1
value: 71.518235523573
- type: main_score
value: 71.518235523573
- type: precision
value: 70.58714102449801
- type: recall
value: 73.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mni_Beng-rus_Cyrl)
type: mteb/flores
config: mni_Beng-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 29.545454545454547
- type: f1
value: 27.59513619889114
- type: main_score
value: 27.59513619889114
- type: precision
value: 26.983849851025344
- type: recall
value: 29.545454545454547
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ron_Latn-rus_Cyrl)
type: mteb/flores
config: ron_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.40711462450594
- type: f1
value: 99.2094861660079
- type: main_score
value: 99.2094861660079
- type: precision
value: 99.1106719367589
- type: recall
value: 99.40711462450594
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (szl_Latn-rus_Cyrl)
type: mteb/flores
config: szl_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 86.26482213438736
- type: f1
value: 85.18912031587512
- type: main_score
value: 85.18912031587512
- type: precision
value: 84.77199409959775
- type: recall
value: 86.26482213438736
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (vec_Latn-rus_Cyrl)
type: mteb/flores
config: vec_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 85.67193675889328
- type: f1
value: 84.62529734716581
- type: main_score
value: 84.62529734716581
- type: precision
value: 84.2611422440705
- type: recall
value: 85.67193675889328
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (amh_Ethi-rus_Cyrl)
type: mteb/flores
config: amh_Ethi-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.76284584980237
- type: f1
value: 93.91735076517685
- type: main_score
value: 93.91735076517685
- type: precision
value: 93.57553798858147
- type: recall
value: 94.76284584980237
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bos_Latn-rus_Cyrl)
type: mteb/flores
config: bos_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 99.05655938264634
- type: main_score
value: 99.05655938264634
- type: precision
value: 99.01185770750988
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (fin_Latn-rus_Cyrl)
type: mteb/flores
config: fin_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.02371541501977
- type: f1
value: 97.43741765480895
- type: main_score
value: 97.43741765480895
- type: precision
value: 97.1590909090909
- type: recall
value: 98.02371541501977
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ita_Latn-rus_Cyrl)
type: mteb/flores
config: ita_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.70355731225297
- type: f1
value: 99.60474308300395
- type: main_score
value: 99.60474308300395
- type: precision
value: 99.55533596837944
- type: recall
value: 99.70355731225297
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kor_Hang-rus_Cyrl)
type: mteb/flores
config: kor_Hang-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.33201581027669
- type: f1
value: 96.49868247694334
- type: main_score
value: 96.49868247694334
- type: precision
value: 96.10507246376811
- type: recall
value: 97.33201581027669
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mos_Latn-rus_Cyrl)
type: mteb/flores
config: mos_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 34.683794466403164
- type: f1
value: 32.766819308009076
- type: main_score
value: 32.766819308009076
- type: precision
value: 32.1637493670237
- type: recall
value: 34.683794466403164
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (run_Latn-rus_Cyrl)
type: mteb/flores
config: run_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 83.399209486166
- type: f1
value: 81.10578750604326
- type: main_score
value: 81.10578750604326
- type: precision
value: 80.16763162673529
- type: recall
value: 83.399209486166
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tam_Taml-rus_Cyrl)
type: mteb/flores
config: tam_Taml-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.41897233201581
- type: f1
value: 98.01548089591567
- type: main_score
value: 98.01548089591567
- type: precision
value: 97.84020327498588
- type: recall
value: 98.41897233201581
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (vie_Latn-rus_Cyrl)
type: mteb/flores
config: vie_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.1106719367589
- type: f1
value: 98.81422924901186
- type: main_score
value: 98.81422924901186
- type: precision
value: 98.66600790513834
- type: recall
value: 99.1106719367589
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (apc_Arab-rus_Cyrl)
type: mteb/flores
config: apc_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.87351778656127
- type: f1
value: 92.10803689064558
- type: main_score
value: 92.10803689064558
- type: precision
value: 91.30434782608695
- type: recall
value: 93.87351778656127
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bug_Latn-rus_Cyrl)
type: mteb/flores
config: bug_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 57.608695652173914
- type: f1
value: 54.95878654927162
- type: main_score
value: 54.95878654927162
- type: precision
value: 54.067987427805654
- type: recall
value: 57.608695652173914
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (fon_Latn-rus_Cyrl)
type: mteb/flores
config: fon_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 61.95652173913043
- type: f1
value: 58.06537275812945
- type: main_score
value: 58.06537275812945
- type: precision
value: 56.554057596959204
- type: recall
value: 61.95652173913043
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (jav_Latn-rus_Cyrl)
type: mteb/flores
config: jav_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.47826086956522
- type: f1
value: 92.4784405318002
- type: main_score
value: 92.4784405318002
- type: precision
value: 92.09168143201127
- type: recall
value: 93.47826086956522
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lao_Laoo-rus_Cyrl)
type: mteb/flores
config: lao_Laoo-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.10671936758892
- type: f1
value: 89.76104922745239
- type: main_score
value: 89.76104922745239
- type: precision
value: 89.24754593232855
- type: recall
value: 91.10671936758892
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mri_Latn-rus_Cyrl)
type: mteb/flores
config: mri_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 71.14624505928853
- type: f1
value: 68.26947125119062
- type: main_score
value: 68.26947125119062
- type: precision
value: 67.15942311051006
- type: recall
value: 71.14624505928853
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ace_Arab)
type: mteb/flores
config: rus_Cyrl-ace_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 19.565217391304348
- type: f1
value: 16.321465000323805
- type: main_score
value: 16.321465000323805
- type: precision
value: 15.478527409347508
- type: recall
value: 19.565217391304348
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bam_Latn)
type: mteb/flores
config: rus_Cyrl-bam_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 73.41897233201581
- type: f1
value: 68.77366228182746
- type: main_score
value: 68.77366228182746
- type: precision
value: 66.96012924273795
- type: recall
value: 73.41897233201581
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-dzo_Tibt)
type: mteb/flores
config: rus_Cyrl-dzo_Tibt
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 0.592885375494071
- type: f1
value: 0.02458062426370458
- type: main_score
value: 0.02458062426370458
- type: precision
value: 0.012824114724683876
- type: recall
value: 0.592885375494071
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-hin_Deva)
type: mteb/flores
config: rus_Cyrl-hin_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.90118577075098
- type: f1
value: 99.86824769433464
- type: main_score
value: 99.86824769433464
- type: precision
value: 99.85177865612648
- type: recall
value: 99.90118577075098
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-khm_Khmr)
type: mteb/flores
config: rus_Cyrl-khm_Khmr
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.13438735177866
- type: f1
value: 96.24505928853755
- type: main_score
value: 96.24505928853755
- type: precision
value: 95.81686429512516
- type: recall
value: 97.13438735177866
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mag_Deva)
type: mteb/flores
config: rus_Cyrl-mag_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.50592885375494
- type: f1
value: 99.35770750988142
- type: main_score
value: 99.35770750988142
- type: precision
value: 99.29183135704875
- type: recall
value: 99.50592885375494
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-pap_Latn)
type: mteb/flores
config: rus_Cyrl-pap_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.93675889328063
- type: f1
value: 96.05072463768116
- type: main_score
value: 96.05072463768116
- type: precision
value: 95.66040843214758
- type: recall
value: 96.93675889328063
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-sot_Latn)
type: mteb/flores
config: rus_Cyrl-sot_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.67588932806325
- type: f1
value: 91.7786561264822
- type: main_score
value: 91.7786561264822
- type: precision
value: 90.91238471673255
- type: recall
value: 93.67588932806325
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tur_Latn)
type: mteb/flores
config: rus_Cyrl-tur_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.01185770750988
- type: f1
value: 98.68247694334651
- type: main_score
value: 98.68247694334651
- type: precision
value: 98.51778656126481
- type: recall
value: 99.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ace_Latn)
type: mteb/flores
config: rus_Cyrl-ace_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 74.1106719367589
- type: f1
value: 70.21737923911836
- type: main_score
value: 70.21737923911836
- type: precision
value: 68.7068791410511
- type: recall
value: 74.1106719367589
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ban_Latn)
type: mteb/flores
config: rus_Cyrl-ban_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 81.7193675889328
- type: f1
value: 78.76470334510617
- type: main_score
value: 78.76470334510617
- type: precision
value: 77.76208475761422
- type: recall
value: 81.7193675889328
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ell_Grek)
type: mteb/flores
config: rus_Cyrl-ell_Grek
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.3201581027668
- type: f1
value: 97.76021080368908
- type: main_score
value: 97.76021080368908
- type: precision
value: 97.48023715415019
- type: recall
value: 98.3201581027668
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-hne_Deva)
type: mteb/flores
config: rus_Cyrl-hne_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.51778656126481
- type: f1
value: 98.0566534914361
- type: main_score
value: 98.0566534914361
- type: precision
value: 97.82608695652173
- type: recall
value: 98.51778656126481
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kik_Latn)
type: mteb/flores
config: rus_Cyrl-kik_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 80.73122529644269
- type: f1
value: 76.42689244220864
- type: main_score
value: 76.42689244220864
- type: precision
value: 74.63877909530083
- type: recall
value: 80.73122529644269
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mai_Deva)
type: mteb/flores
config: rus_Cyrl-mai_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.56719367588933
- type: main_score
value: 98.56719367588933
- type: precision
value: 98.40250329380763
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-pbt_Arab)
type: mteb/flores
config: rus_Cyrl-pbt_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.5296442687747
- type: f1
value: 96.73913043478261
- type: main_score
value: 96.73913043478261
- type: precision
value: 96.36034255599473
- type: recall
value: 97.5296442687747
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-spa_Latn)
type: mteb/flores
config: rus_Cyrl-spa_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.40711462450594
- type: f1
value: 99.20948616600789
- type: main_score
value: 99.20948616600789
- type: precision
value: 99.1106719367589
- type: recall
value: 99.40711462450594
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-twi_Latn)
type: mteb/flores
config: rus_Cyrl-twi_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 82.01581027667984
- type: f1
value: 78.064787822953
- type: main_score
value: 78.064787822953
- type: precision
value: 76.43272186750448
- type: recall
value: 82.01581027667984
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-acm_Arab)
type: mteb/flores
config: rus_Cyrl-acm_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.3201581027668
- type: f1
value: 97.76021080368908
- type: main_score
value: 97.76021080368908
- type: precision
value: 97.48023715415019
- type: recall
value: 98.3201581027668
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bel_Cyrl)
type: mteb/flores
config: rus_Cyrl-bel_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.22134387351778
- type: f1
value: 97.67786561264822
- type: main_score
value: 97.67786561264822
- type: precision
value: 97.4308300395257
- type: recall
value: 98.22134387351778
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-eng_Latn)
type: mteb/flores
config: rus_Cyrl-eng_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.70355731225297
- type: f1
value: 99.60474308300395
- type: main_score
value: 99.60474308300395
- type: precision
value: 99.55533596837944
- type: recall
value: 99.70355731225297
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-hrv_Latn)
type: mteb/flores
config: rus_Cyrl-hrv_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.1106719367589
- type: f1
value: 98.83069828722002
- type: main_score
value: 98.83069828722002
- type: precision
value: 98.69894598155466
- type: recall
value: 99.1106719367589
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kin_Latn)
type: mteb/flores
config: rus_Cyrl-kin_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.37944664031622
- type: f1
value: 91.53162055335969
- type: main_score
value: 91.53162055335969
- type: precision
value: 90.71475625823452
- type: recall
value: 93.37944664031622
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mal_Mlym)
type: mteb/flores
config: rus_Cyrl-mal_Mlym
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.30830039525692
- type: f1
value: 99.07773386034255
- type: main_score
value: 99.07773386034255
- type: precision
value: 98.96245059288538
- type: recall
value: 99.30830039525692
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-pes_Arab)
type: mteb/flores
config: rus_Cyrl-pes_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.71541501976284
- type: f1
value: 98.30368906455863
- type: main_score
value: 98.30368906455863
- type: precision
value: 98.10606060606061
- type: recall
value: 98.71541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-srd_Latn)
type: mteb/flores
config: rus_Cyrl-srd_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 89.03162055335969
- type: f1
value: 86.11048371917937
- type: main_score
value: 86.11048371917937
- type: precision
value: 84.86001317523056
- type: recall
value: 89.03162055335969
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tzm_Tfng)
type: mteb/flores
config: rus_Cyrl-tzm_Tfng
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 12.351778656126482
- type: f1
value: 10.112177999067715
- type: main_score
value: 10.112177999067715
- type: precision
value: 9.53495885438645
- type: recall
value: 12.351778656126482
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-acq_Arab)
type: mteb/flores
config: rus_Cyrl-acq_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.55072463768116
- type: main_score
value: 98.55072463768116
- type: precision
value: 98.36956521739131
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bem_Latn)
type: mteb/flores
config: rus_Cyrl-bem_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 73.22134387351778
- type: f1
value: 68.30479412989295
- type: main_score
value: 68.30479412989295
- type: precision
value: 66.40073447632736
- type: recall
value: 73.22134387351778
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-epo_Latn)
type: mteb/flores
config: rus_Cyrl-epo_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.1106719367589
- type: f1
value: 98.81422924901186
- type: main_score
value: 98.81422924901186
- type: precision
value: 98.66600790513834
- type: recall
value: 99.1106719367589
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-hun_Latn)
type: mteb/flores
config: rus_Cyrl-hun_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.83794466403161
- type: f1
value: 95.88274044795784
- type: main_score
value: 95.88274044795784
- type: precision
value: 95.45454545454545
- type: recall
value: 96.83794466403161
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kir_Cyrl)
type: mteb/flores
config: rus_Cyrl-kir_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.34387351778656
- type: f1
value: 95.49280429715212
- type: main_score
value: 95.49280429715212
- type: precision
value: 95.14163372859026
- type: recall
value: 96.34387351778656
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mar_Deva)
type: mteb/flores
config: rus_Cyrl-mar_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.71541501976284
- type: f1
value: 98.28722002635047
- type: main_score
value: 98.28722002635047
- type: precision
value: 98.07312252964427
- type: recall
value: 98.71541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-plt_Latn)
type: mteb/flores
config: rus_Cyrl-plt_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 88.04347826086956
- type: f1
value: 85.14328063241106
- type: main_score
value: 85.14328063241106
- type: precision
value: 83.96339168078298
- type: recall
value: 88.04347826086956
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-srp_Cyrl)
type: mteb/flores
config: rus_Cyrl-srp_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.40711462450594
- type: f1
value: 99.2094861660079
- type: main_score
value: 99.2094861660079
- type: precision
value: 99.1106719367589
- type: recall
value: 99.40711462450594
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-uig_Arab)
type: mteb/flores
config: rus_Cyrl-uig_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.19367588932806
- type: f1
value: 89.98541313758706
- type: main_score
value: 89.98541313758706
- type: precision
value: 89.01021080368906
- type: recall
value: 92.19367588932806
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-aeb_Arab)
type: mteb/flores
config: rus_Cyrl-aeb_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.8498023715415
- type: f1
value: 94.63109354413703
- type: main_score
value: 94.63109354413703
- type: precision
value: 94.05467720685111
- type: recall
value: 95.8498023715415
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ben_Beng)
type: mteb/flores
config: rus_Cyrl-ben_Beng
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.40711462450594
- type: f1
value: 99.2094861660079
- type: main_score
value: 99.2094861660079
- type: precision
value: 99.1106719367589
- type: recall
value: 99.40711462450594
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-est_Latn)
type: mteb/flores
config: rus_Cyrl-est_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.55335968379447
- type: f1
value: 94.2588932806324
- type: main_score
value: 94.2588932806324
- type: precision
value: 93.65118577075098
- type: recall
value: 95.55335968379447
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-hye_Armn)
type: mteb/flores
config: rus_Cyrl-hye_Armn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.71541501976284
- type: f1
value: 98.28722002635045
- type: main_score
value: 98.28722002635045
- type: precision
value: 98.07312252964427
- type: recall
value: 98.71541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kmb_Latn)
type: mteb/flores
config: rus_Cyrl-kmb_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 54.24901185770751
- type: f1
value: 49.46146674116913
- type: main_score
value: 49.46146674116913
- type: precision
value: 47.81033799314432
- type: recall
value: 54.24901185770751
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-min_Arab)
type: mteb/flores
config: rus_Cyrl-min_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 15.810276679841898
- type: f1
value: 13.271207641419332
- type: main_score
value: 13.271207641419332
- type: precision
value: 12.510673148766033
- type: recall
value: 15.810276679841898
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-pol_Latn)
type: mteb/flores
config: rus_Cyrl-pol_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.71541501976284
- type: f1
value: 98.32674571805006
- type: main_score
value: 98.32674571805006
- type: precision
value: 98.14723320158103
- type: recall
value: 98.71541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ssw_Latn)
type: mteb/flores
config: rus_Cyrl-ssw_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 80.8300395256917
- type: f1
value: 76.51717847370023
- type: main_score
value: 76.51717847370023
- type: precision
value: 74.74143610013175
- type: recall
value: 80.8300395256917
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ukr_Cyrl)
type: mteb/flores
config: rus_Cyrl-ukr_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.60474308300395
- type: f1
value: 99.4729907773386
- type: main_score
value: 99.4729907773386
- type: precision
value: 99.40711462450594
- type: recall
value: 99.60474308300395
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-afr_Latn)
type: mteb/flores
config: rus_Cyrl-afr_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.1106719367589
- type: f1
value: 98.81422924901186
- type: main_score
value: 98.81422924901186
- type: precision
value: 98.66600790513834
- type: recall
value: 99.1106719367589
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bho_Deva)
type: mteb/flores
config: rus_Cyrl-bho_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.6403162055336
- type: f1
value: 95.56982872200265
- type: main_score
value: 95.56982872200265
- type: precision
value: 95.0592885375494
- type: recall
value: 96.6403162055336
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-eus_Latn)
type: mteb/flores
config: rus_Cyrl-eus_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.62845849802372
- type: f1
value: 96.9038208168643
- type: main_score
value: 96.9038208168643
- type: precision
value: 96.55797101449275
- type: recall
value: 97.62845849802372
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ibo_Latn)
type: mteb/flores
config: rus_Cyrl-ibo_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 89.2292490118577
- type: f1
value: 86.35234330886506
- type: main_score
value: 86.35234330886506
- type: precision
value: 85.09881422924902
- type: recall
value: 89.2292490118577
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kmr_Latn)
type: mteb/flores
config: rus_Cyrl-kmr_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 83.49802371541502
- type: f1
value: 79.23630717108978
- type: main_score
value: 79.23630717108978
- type: precision
value: 77.48188405797102
- type: recall
value: 83.49802371541502
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-min_Latn)
type: mteb/flores
config: rus_Cyrl-min_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 79.34782608695652
- type: f1
value: 75.31689928429059
- type: main_score
value: 75.31689928429059
- type: precision
value: 73.91519410541149
- type: recall
value: 79.34782608695652
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-por_Latn)
type: mteb/flores
config: rus_Cyrl-por_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.54150197628458
- type: f1
value: 95.53218520609825
- type: main_score
value: 95.53218520609825
- type: precision
value: 95.07575757575756
- type: recall
value: 96.54150197628458
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-sun_Latn)
type: mteb/flores
config: rus_Cyrl-sun_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.2806324110672
- type: f1
value: 91.56973461321287
- type: main_score
value: 91.56973461321287
- type: precision
value: 90.84396334890405
- type: recall
value: 93.2806324110672
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-umb_Latn)
type: mteb/flores
config: rus_Cyrl-umb_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 51.87747035573123
- type: f1
value: 46.36591778884269
- type: main_score
value: 46.36591778884269
- type: precision
value: 44.57730391234227
- type: recall
value: 51.87747035573123
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ajp_Arab)
type: mteb/flores
config: rus_Cyrl-ajp_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.71541501976284
- type: f1
value: 98.30368906455863
- type: main_score
value: 98.30368906455863
- type: precision
value: 98.10606060606061
- type: recall
value: 98.71541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bjn_Arab)
type: mteb/flores
config: rus_Cyrl-bjn_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 14.82213438735178
- type: f1
value: 12.365434276616856
- type: main_score
value: 12.365434276616856
- type: precision
value: 11.802079517180589
- type: recall
value: 14.82213438735178
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ewe_Latn)
type: mteb/flores
config: rus_Cyrl-ewe_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 71.44268774703558
- type: f1
value: 66.74603174603175
- type: main_score
value: 66.74603174603175
- type: precision
value: 64.99933339607253
- type: recall
value: 71.44268774703558
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ilo_Latn)
type: mteb/flores
config: rus_Cyrl-ilo_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 85.86956521739131
- type: f1
value: 83.00139015960917
- type: main_score
value: 83.00139015960917
- type: precision
value: 81.91411396574439
- type: recall
value: 85.86956521739131
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-knc_Arab)
type: mteb/flores
config: rus_Cyrl-knc_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 14.525691699604742
- type: f1
value: 12.618283715726806
- type: main_score
value: 12.618283715726806
- type: precision
value: 12.048458493742352
- type: recall
value: 14.525691699604742
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mkd_Cyrl)
type: mteb/flores
config: rus_Cyrl-mkd_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.40711462450594
- type: f1
value: 99.22595520421606
- type: main_score
value: 99.22595520421606
- type: precision
value: 99.14361001317523
- type: recall
value: 99.40711462450594
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-prs_Arab)
type: mteb/flores
config: rus_Cyrl-prs_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.30830039525692
- type: f1
value: 99.07773386034255
- type: main_score
value: 99.07773386034255
- type: precision
value: 98.96245059288538
- type: recall
value: 99.30830039525692
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-swe_Latn)
type: mteb/flores
config: rus_Cyrl-swe_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.30830039525692
- type: f1
value: 99.07773386034256
- type: main_score
value: 99.07773386034256
- type: precision
value: 98.96245059288538
- type: recall
value: 99.30830039525692
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-urd_Arab)
type: mteb/flores
config: rus_Cyrl-urd_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.61660079051383
- type: f1
value: 98.15546772068511
- type: main_score
value: 98.15546772068511
- type: precision
value: 97.92490118577075
- type: recall
value: 98.61660079051383
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-aka_Latn)
type: mteb/flores
config: rus_Cyrl-aka_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 81.02766798418972
- type: f1
value: 76.73277809147375
- type: main_score
value: 76.73277809147375
- type: precision
value: 74.97404165882426
- type: recall
value: 81.02766798418972
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bjn_Latn)
type: mteb/flores
config: rus_Cyrl-bjn_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 86.7588932806324
- type: f1
value: 83.92064566965753
- type: main_score
value: 83.92064566965753
- type: precision
value: 82.83734079929732
- type: recall
value: 86.7588932806324
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-fao_Latn)
type: mteb/flores
config: rus_Cyrl-fao_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 88.43873517786561
- type: f1
value: 85.48136645962732
- type: main_score
value: 85.48136645962732
- type: precision
value: 84.23418972332016
- type: recall
value: 88.43873517786561
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ind_Latn)
type: mteb/flores
config: rus_Cyrl-ind_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.01185770750988
- type: f1
value: 98.68247694334651
- type: main_score
value: 98.68247694334651
- type: precision
value: 98.51778656126481
- type: recall
value: 99.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-knc_Latn)
type: mteb/flores
config: rus_Cyrl-knc_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 45.8498023715415
- type: f1
value: 40.112030865489366
- type: main_score
value: 40.112030865489366
- type: precision
value: 38.28262440050776
- type: recall
value: 45.8498023715415
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mlt_Latn)
type: mteb/flores
config: rus_Cyrl-mlt_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.18181818181817
- type: f1
value: 91.30787690570298
- type: main_score
value: 91.30787690570298
- type: precision
value: 90.4983060417843
- type: recall
value: 93.18181818181817
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-quy_Latn)
type: mteb/flores
config: rus_Cyrl-quy_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 62.450592885375485
- type: f1
value: 57.28742975628178
- type: main_score
value: 57.28742975628178
- type: precision
value: 55.56854987623269
- type: recall
value: 62.450592885375485
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-swh_Latn)
type: mteb/flores
config: rus_Cyrl-swh_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.3201581027668
- type: f1
value: 97.77667984189723
- type: main_score
value: 97.77667984189723
- type: precision
value: 97.51317523056655
- type: recall
value: 98.3201581027668
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-uzn_Latn)
type: mteb/flores
config: rus_Cyrl-uzn_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.12252964426878
- type: f1
value: 97.59081498211933
- type: main_score
value: 97.59081498211933
- type: precision
value: 97.34848484848484
- type: recall
value: 98.12252964426878
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-als_Latn)
type: mteb/flores
config: rus_Cyrl-als_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.30830039525692
- type: f1
value: 99.09420289855073
- type: main_score
value: 99.09420289855073
- type: precision
value: 98.99538866930172
- type: recall
value: 99.30830039525692
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bod_Tibt)
type: mteb/flores
config: rus_Cyrl-bod_Tibt
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 11.561264822134387
- type: f1
value: 8.121312045385636
- type: main_score
value: 8.121312045385636
- type: precision
value: 7.350577020893972
- type: recall
value: 11.561264822134387
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-fij_Latn)
type: mteb/flores
config: rus_Cyrl-fij_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 72.23320158102767
- type: f1
value: 67.21000233846082
- type: main_score
value: 67.21000233846082
- type: precision
value: 65.3869439739005
- type: recall
value: 72.23320158102767
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-isl_Latn)
type: mteb/flores
config: rus_Cyrl-isl_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.99604743083005
- type: f1
value: 89.75955204216073
- type: main_score
value: 89.75955204216073
- type: precision
value: 88.7598814229249
- type: recall
value: 91.99604743083005
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kon_Latn)
type: mteb/flores
config: rus_Cyrl-kon_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 81.81818181818183
- type: f1
value: 77.77800098452272
- type: main_score
value: 77.77800098452272
- type: precision
value: 76.1521268586486
- type: recall
value: 81.81818181818183
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mni_Beng)
type: mteb/flores
config: rus_Cyrl-mni_Beng
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 54.74308300395256
- type: f1
value: 48.97285299254615
- type: main_score
value: 48.97285299254615
- type: precision
value: 46.95125742968299
- type: recall
value: 54.74308300395256
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ron_Latn)
type: mteb/flores
config: rus_Cyrl-ron_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.22134387351778
- type: f1
value: 97.64492753623189
- type: main_score
value: 97.64492753623189
- type: precision
value: 97.36495388669302
- type: recall
value: 98.22134387351778
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-szl_Latn)
type: mteb/flores
config: rus_Cyrl-szl_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.09486166007905
- type: f1
value: 90.10375494071147
- type: main_score
value: 90.10375494071147
- type: precision
value: 89.29606625258798
- type: recall
value: 92.09486166007905
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-vec_Latn)
type: mteb/flores
config: rus_Cyrl-vec_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.4901185770751
- type: f1
value: 90.51430453604365
- type: main_score
value: 90.51430453604365
- type: precision
value: 89.69367588932808
- type: recall
value: 92.4901185770751
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-amh_Ethi)
type: mteb/flores
config: rus_Cyrl-amh_Ethi
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.82608695652173
- type: f1
value: 97.11791831357048
- type: main_score
value: 97.11791831357048
- type: precision
value: 96.77206851119894
- type: recall
value: 97.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bos_Latn)
type: mteb/flores
config: rus_Cyrl-bos_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.55072463768116
- type: main_score
value: 98.55072463768116
- type: precision
value: 98.36956521739131
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-fin_Latn)
type: mteb/flores
config: rus_Cyrl-fin_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.65217391304348
- type: f1
value: 94.4235836627141
- type: main_score
value: 94.4235836627141
- type: precision
value: 93.84881422924902
- type: recall
value: 95.65217391304348
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ita_Latn)
type: mteb/flores
config: rus_Cyrl-ita_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.55072463768117
- type: main_score
value: 98.55072463768117
- type: precision
value: 98.36956521739131
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kor_Hang)
type: mteb/flores
config: rus_Cyrl-kor_Hang
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.55335968379447
- type: f1
value: 94.15349143610013
- type: main_score
value: 94.15349143610013
- type: precision
value: 93.49472990777339
- type: recall
value: 95.55335968379447
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mos_Latn)
type: mteb/flores
config: rus_Cyrl-mos_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 43.67588932806324
- type: f1
value: 38.84849721190082
- type: main_score
value: 38.84849721190082
- type: precision
value: 37.43294462099682
- type: recall
value: 43.67588932806324
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-run_Latn)
type: mteb/flores
config: rus_Cyrl-run_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 90.21739130434783
- type: f1
value: 87.37483530961792
- type: main_score
value: 87.37483530961792
- type: precision
value: 86.07872200263506
- type: recall
value: 90.21739130434783
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tam_Taml)
type: mteb/flores
config: rus_Cyrl-tam_Taml
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.40711462450594
- type: f1
value: 99.2094861660079
- type: main_score
value: 99.2094861660079
- type: precision
value: 99.1106719367589
- type: recall
value: 99.40711462450594
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-vie_Latn)
type: mteb/flores
config: rus_Cyrl-vie_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.03557312252964
- type: f1
value: 96.13636363636364
- type: main_score
value: 96.13636363636364
- type: precision
value: 95.70981554677206
- type: recall
value: 97.03557312252964
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-apc_Arab)
type: mteb/flores
config: rus_Cyrl-apc_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.12252964426878
- type: f1
value: 97.49670619235836
- type: main_score
value: 97.49670619235836
- type: precision
value: 97.18379446640316
- type: recall
value: 98.12252964426878
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bug_Latn)
type: mteb/flores
config: rus_Cyrl-bug_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 67.29249011857708
- type: f1
value: 62.09268717667927
- type: main_score
value: 62.09268717667927
- type: precision
value: 60.28554009748714
- type: recall
value: 67.29249011857708
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-fon_Latn)
type: mteb/flores
config: rus_Cyrl-fon_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 63.43873517786561
- type: f1
value: 57.66660107569199
- type: main_score
value: 57.66660107569199
- type: precision
value: 55.66676396919363
- type: recall
value: 63.43873517786561
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-jav_Latn)
type: mteb/flores
config: rus_Cyrl-jav_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.46640316205533
- type: f1
value: 92.89384528514964
- type: main_score
value: 92.89384528514964
- type: precision
value: 92.19367588932806
- type: recall
value: 94.46640316205533
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lao_Laoo)
type: mteb/flores
config: rus_Cyrl-lao_Laoo
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.23320158102767
- type: f1
value: 96.40974967061922
- type: main_score
value: 96.40974967061922
- type: precision
value: 96.034255599473
- type: recall
value: 97.23320158102767
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mri_Latn)
type: mteb/flores
config: rus_Cyrl-mri_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 76.77865612648222
- type: f1
value: 73.11286539547409
- type: main_score
value: 73.11286539547409
- type: precision
value: 71.78177214337046
- type: recall
value: 76.77865612648222
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-taq_Latn)
type: mteb/flores
config: rus_Cyrl-taq_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 41.99604743083004
- type: f1
value: 37.25127063318763
- type: main_score
value: 37.25127063318763
- type: precision
value: 35.718929186985726
- type: recall
value: 41.99604743083004
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-war_Latn)
type: mteb/flores
config: rus_Cyrl-war_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.55335968379447
- type: f1
value: 94.1699604743083
- type: main_score
value: 94.1699604743083
- type: precision
value: 93.52766798418972
- type: recall
value: 95.55335968379447
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-arb_Arab)
type: mteb/flores
config: rus_Cyrl-arb_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.60474308300395
- type: f1
value: 99.4729907773386
- type: main_score
value: 99.4729907773386
- type: precision
value: 99.40711462450594
- type: recall
value: 99.60474308300395
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bul_Cyrl)
type: mteb/flores
config: rus_Cyrl-bul_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.70355731225297
- type: f1
value: 99.60474308300395
- type: main_score
value: 99.60474308300395
- type: precision
value: 99.55533596837944
- type: recall
value: 99.70355731225297
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-fra_Latn)
type: mteb/flores
config: rus_Cyrl-fra_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.60474308300395
- type: f1
value: 99.47299077733861
- type: main_score
value: 99.47299077733861
- type: precision
value: 99.40711462450594
- type: recall
value: 99.60474308300395
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-jpn_Jpan)
type: mteb/flores
config: rus_Cyrl-jpn_Jpan
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.44268774703558
- type: f1
value: 95.30632411067194
- type: main_score
value: 95.30632411067194
- type: precision
value: 94.76284584980237
- type: recall
value: 96.44268774703558
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lij_Latn)
type: mteb/flores
config: rus_Cyrl-lij_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 90.21739130434783
- type: f1
value: 87.4703557312253
- type: main_score
value: 87.4703557312253
- type: precision
value: 86.29611330698287
- type: recall
value: 90.21739130434783
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-mya_Mymr)
type: mteb/flores
config: rus_Cyrl-mya_Mymr
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.02371541501977
- type: f1
value: 97.364953886693
- type: main_score
value: 97.364953886693
- type: precision
value: 97.03557312252964
- type: recall
value: 98.02371541501977
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-sag_Latn)
type: mteb/flores
config: rus_Cyrl-sag_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 54.841897233201585
- type: f1
value: 49.61882037503349
- type: main_score
value: 49.61882037503349
- type: precision
value: 47.831968755881796
- type: recall
value: 54.841897233201585
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-taq_Tfng)
type: mteb/flores
config: rus_Cyrl-taq_Tfng
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 15.316205533596838
- type: f1
value: 11.614836360389717
- type: main_score
value: 11.614836360389717
- type: precision
value: 10.741446193235223
- type: recall
value: 15.316205533596838
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-wol_Latn)
type: mteb/flores
config: rus_Cyrl-wol_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 67.88537549407114
- type: f1
value: 62.2536417249856
- type: main_score
value: 62.2536417249856
- type: precision
value: 60.27629128666678
- type: recall
value: 67.88537549407114
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-arb_Latn)
type: mteb/flores
config: rus_Cyrl-arb_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 27.766798418972332
- type: f1
value: 23.39674889624077
- type: main_score
value: 23.39674889624077
- type: precision
value: 22.28521155585345
- type: recall
value: 27.766798418972332
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-cat_Latn)
type: mteb/flores
config: rus_Cyrl-cat_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.23320158102767
- type: f1
value: 96.42151326933936
- type: main_score
value: 96.42151326933936
- type: precision
value: 96.04743083003953
- type: recall
value: 97.23320158102767
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-fur_Latn)
type: mteb/flores
config: rus_Cyrl-fur_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 88.63636363636364
- type: f1
value: 85.80792396009788
- type: main_score
value: 85.80792396009788
- type: precision
value: 84.61508901726293
- type: recall
value: 88.63636363636364
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kab_Latn)
type: mteb/flores
config: rus_Cyrl-kab_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 48.12252964426877
- type: f1
value: 43.05387582971066
- type: main_score
value: 43.05387582971066
- type: precision
value: 41.44165117538212
- type: recall
value: 48.12252964426877
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lim_Latn)
type: mteb/flores
config: rus_Cyrl-lim_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 81.81818181818183
- type: f1
value: 77.81676163099087
- type: main_score
value: 77.81676163099087
- type: precision
value: 76.19565217391305
- type: recall
value: 81.81818181818183
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-nld_Latn)
type: mteb/flores
config: rus_Cyrl-nld_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.33201581027669
- type: f1
value: 96.4756258234519
- type: main_score
value: 96.4756258234519
- type: precision
value: 96.06389986824769
- type: recall
value: 97.33201581027669
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-san_Deva)
type: mteb/flores
config: rus_Cyrl-san_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.47826086956522
- type: f1
value: 91.70289855072463
- type: main_score
value: 91.70289855072463
- type: precision
value: 90.9370882740448
- type: recall
value: 93.47826086956522
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tat_Cyrl)
type: mteb/flores
config: rus_Cyrl-tat_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.72727272727273
- type: f1
value: 97.00263504611331
- type: main_score
value: 97.00263504611331
- type: precision
value: 96.65678524374177
- type: recall
value: 97.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-xho_Latn)
type: mteb/flores
config: rus_Cyrl-xho_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.08300395256917
- type: f1
value: 91.12977602108036
- type: main_score
value: 91.12977602108036
- type: precision
value: 90.22562582345192
- type: recall
value: 93.08300395256917
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ars_Arab)
type: mteb/flores
config: rus_Cyrl-ars_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.40711462450594
- type: f1
value: 99.2094861660079
- type: main_score
value: 99.2094861660079
- type: precision
value: 99.1106719367589
- type: recall
value: 99.40711462450594
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ceb_Latn)
type: mteb/flores
config: rus_Cyrl-ceb_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.65217391304348
- type: f1
value: 94.3544137022398
- type: main_score
value: 94.3544137022398
- type: precision
value: 93.76646903820817
- type: recall
value: 95.65217391304348
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-fuv_Latn)
type: mteb/flores
config: rus_Cyrl-fuv_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 51.18577075098815
- type: f1
value: 44.5990252610806
- type: main_score
value: 44.5990252610806
- type: precision
value: 42.34331599450177
- type: recall
value: 51.18577075098815
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kac_Latn)
type: mteb/flores
config: rus_Cyrl-kac_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 46.93675889328063
- type: f1
value: 41.79004018701787
- type: main_score
value: 41.79004018701787
- type: precision
value: 40.243355662392624
- type: recall
value: 46.93675889328063
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lin_Latn)
type: mteb/flores
config: rus_Cyrl-lin_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.50197628458498
- type: f1
value: 89.1205533596838
- type: main_score
value: 89.1205533596838
- type: precision
value: 88.07147562582345
- type: recall
value: 91.50197628458498
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-nno_Latn)
type: mteb/flores
config: rus_Cyrl-nno_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.81422924901186
- type: f1
value: 98.41897233201581
- type: main_score
value: 98.41897233201581
- type: precision
value: 98.22134387351778
- type: recall
value: 98.81422924901186
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-sat_Olck)
type: mteb/flores
config: rus_Cyrl-sat_Olck
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 2.371541501976284
- type: f1
value: 1.0726274943087382
- type: main_score
value: 1.0726274943087382
- type: precision
value: 0.875279634748803
- type: recall
value: 2.371541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tel_Telu)
type: mteb/flores
config: rus_Cyrl-tel_Telu
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.01185770750988
- type: f1
value: 98.68247694334651
- type: main_score
value: 98.68247694334651
- type: precision
value: 98.51778656126481
- type: recall
value: 99.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ydd_Hebr)
type: mteb/flores
config: rus_Cyrl-ydd_Hebr
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 89.42687747035573
- type: f1
value: 86.47609636740073
- type: main_score
value: 86.47609636740073
- type: precision
value: 85.13669301712781
- type: recall
value: 89.42687747035573
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ary_Arab)
type: mteb/flores
config: rus_Cyrl-ary_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 89.82213438735178
- type: f1
value: 87.04545454545456
- type: main_score
value: 87.04545454545456
- type: precision
value: 85.76910408432148
- type: recall
value: 89.82213438735178
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ces_Latn)
type: mteb/flores
config: rus_Cyrl-ces_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 98.9459815546772
- type: main_score
value: 98.9459815546772
- type: precision
value: 98.81422924901186
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-gaz_Latn)
type: mteb/flores
config: rus_Cyrl-gaz_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 64.9209486166008
- type: f1
value: 58.697458119394874
- type: main_score
value: 58.697458119394874
- type: precision
value: 56.43402189597842
- type: recall
value: 64.9209486166008
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kam_Latn)
type: mteb/flores
config: rus_Cyrl-kam_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 59.18972332015811
- type: f1
value: 53.19031511966295
- type: main_score
value: 53.19031511966295
- type: precision
value: 51.08128357343655
- type: recall
value: 59.18972332015811
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lit_Latn)
type: mteb/flores
config: rus_Cyrl-lit_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.54150197628458
- type: f1
value: 95.5368906455863
- type: main_score
value: 95.5368906455863
- type: precision
value: 95.0592885375494
- type: recall
value: 96.54150197628458
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-nob_Latn)
type: mteb/flores
config: rus_Cyrl-nob_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.12252964426878
- type: f1
value: 97.51317523056655
- type: main_score
value: 97.51317523056655
- type: precision
value: 97.2167325428195
- type: recall
value: 98.12252964426878
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-scn_Latn)
type: mteb/flores
config: rus_Cyrl-scn_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 84.0909090909091
- type: f1
value: 80.37000439174352
- type: main_score
value: 80.37000439174352
- type: precision
value: 78.83994628559846
- type: recall
value: 84.0909090909091
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tgk_Cyrl)
type: mteb/flores
config: rus_Cyrl-tgk_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.68774703557312
- type: f1
value: 90.86344814605684
- type: main_score
value: 90.86344814605684
- type: precision
value: 90.12516469038208
- type: recall
value: 92.68774703557312
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-yor_Latn)
type: mteb/flores
config: rus_Cyrl-yor_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 72.13438735177866
- type: f1
value: 66.78759646150951
- type: main_score
value: 66.78759646150951
- type: precision
value: 64.85080192096002
- type: recall
value: 72.13438735177866
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-arz_Arab)
type: mteb/flores
config: rus_Cyrl-arz_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.02371541501977
- type: f1
value: 97.364953886693
- type: main_score
value: 97.364953886693
- type: precision
value: 97.03557312252964
- type: recall
value: 98.02371541501977
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-cjk_Latn)
type: mteb/flores
config: rus_Cyrl-cjk_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 51.976284584980235
- type: f1
value: 46.468762353149714
- type: main_score
value: 46.468762353149714
- type: precision
value: 44.64073366247278
- type: recall
value: 51.976284584980235
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-gla_Latn)
type: mteb/flores
config: rus_Cyrl-gla_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 79.74308300395256
- type: f1
value: 75.55611165294958
- type: main_score
value: 75.55611165294958
- type: precision
value: 73.95033408620365
- type: recall
value: 79.74308300395256
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kan_Knda)
type: mteb/flores
config: rus_Cyrl-kan_Knda
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 98.96245059288538
- type: main_score
value: 98.96245059288538
- type: precision
value: 98.84716732542819
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lmo_Latn)
type: mteb/flores
config: rus_Cyrl-lmo_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 82.41106719367589
- type: f1
value: 78.56413514022209
- type: main_score
value: 78.56413514022209
- type: precision
value: 77.15313068573938
- type: recall
value: 82.41106719367589
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-npi_Deva)
type: mteb/flores
config: rus_Cyrl-npi_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.71541501976284
- type: f1
value: 98.3201581027668
- type: main_score
value: 98.3201581027668
- type: precision
value: 98.12252964426878
- type: recall
value: 98.71541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-shn_Mymr)
type: mteb/flores
config: rus_Cyrl-shn_Mymr
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 57.11462450592886
- type: f1
value: 51.51361369197337
- type: main_score
value: 51.51361369197337
- type: precision
value: 49.71860043649573
- type: recall
value: 57.11462450592886
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tgl_Latn)
type: mteb/flores
config: rus_Cyrl-tgl_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.82608695652173
- type: f1
value: 97.18379446640316
- type: main_score
value: 97.18379446640316
- type: precision
value: 96.88735177865613
- type: recall
value: 97.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-yue_Hant)
type: mteb/flores
config: rus_Cyrl-yue_Hant
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.30830039525692
- type: f1
value: 99.09420289855072
- type: main_score
value: 99.09420289855072
- type: precision
value: 98.9953886693017
- type: recall
value: 99.30830039525692
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-asm_Beng)
type: mteb/flores
config: rus_Cyrl-asm_Beng
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.55335968379447
- type: f1
value: 94.16007905138339
- type: main_score
value: 94.16007905138339
- type: precision
value: 93.50296442687747
- type: recall
value: 95.55335968379447
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ckb_Arab)
type: mteb/flores
config: rus_Cyrl-ckb_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.88537549407114
- type: f1
value: 90.76745718050066
- type: main_score
value: 90.76745718050066
- type: precision
value: 89.80072463768116
- type: recall
value: 92.88537549407114
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-gle_Latn)
type: mteb/flores
config: rus_Cyrl-gle_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.699604743083
- type: f1
value: 89.40899680030115
- type: main_score
value: 89.40899680030115
- type: precision
value: 88.40085638998683
- type: recall
value: 91.699604743083
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kas_Arab)
type: mteb/flores
config: rus_Cyrl-kas_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 88.3399209486166
- type: f1
value: 85.14351590438548
- type: main_score
value: 85.14351590438548
- type: precision
value: 83.72364953886692
- type: recall
value: 88.3399209486166
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ltg_Latn)
type: mteb/flores
config: rus_Cyrl-ltg_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 83.399209486166
- type: f1
value: 79.88408934061107
- type: main_score
value: 79.88408934061107
- type: precision
value: 78.53794509179885
- type: recall
value: 83.399209486166
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-nso_Latn)
type: mteb/flores
config: rus_Cyrl-nso_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.20553359683794
- type: f1
value: 88.95406635525212
- type: main_score
value: 88.95406635525212
- type: precision
value: 88.01548089591567
- type: recall
value: 91.20553359683794
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-sin_Sinh)
type: mteb/flores
config: rus_Cyrl-sin_Sinh
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.56719367588933
- type: main_score
value: 98.56719367588933
- type: precision
value: 98.40250329380763
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tha_Thai)
type: mteb/flores
config: rus_Cyrl-tha_Thai
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.94861660079052
- type: f1
value: 94.66403162055336
- type: main_score
value: 94.66403162055336
- type: precision
value: 94.03820816864295
- type: recall
value: 95.94861660079052
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-zho_Hans)
type: mteb/flores
config: rus_Cyrl-zho_Hans
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.4308300395257
- type: f1
value: 96.5909090909091
- type: main_score
value: 96.5909090909091
- type: precision
value: 96.17918313570487
- type: recall
value: 97.4308300395257
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ast_Latn)
type: mteb/flores
config: rus_Cyrl-ast_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.46640316205533
- type: f1
value: 92.86890645586297
- type: main_score
value: 92.86890645586297
- type: precision
value: 92.14756258234519
- type: recall
value: 94.46640316205533
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-crh_Latn)
type: mteb/flores
config: rus_Cyrl-crh_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.66403162055336
- type: f1
value: 93.2663592446201
- type: main_score
value: 93.2663592446201
- type: precision
value: 92.66716073781292
- type: recall
value: 94.66403162055336
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-glg_Latn)
type: mteb/flores
config: rus_Cyrl-glg_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.81422924901186
- type: f1
value: 98.46837944664031
- type: main_score
value: 98.46837944664031
- type: precision
value: 98.3201581027668
- type: recall
value: 98.81422924901186
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kas_Deva)
type: mteb/flores
config: rus_Cyrl-kas_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 69.1699604743083
- type: f1
value: 63.05505292906477
- type: main_score
value: 63.05505292906477
- type: precision
value: 60.62594108789761
- type: recall
value: 69.1699604743083
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ltz_Latn)
type: mteb/flores
config: rus_Cyrl-ltz_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.40316205533597
- type: f1
value: 89.26571616789009
- type: main_score
value: 89.26571616789009
- type: precision
value: 88.40179747788443
- type: recall
value: 91.40316205533597
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-nus_Latn)
type: mteb/flores
config: rus_Cyrl-nus_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 38.93280632411067
- type: f1
value: 33.98513032905371
- type: main_score
value: 33.98513032905371
- type: precision
value: 32.56257884802308
- type: recall
value: 38.93280632411067
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-slk_Latn)
type: mteb/flores
config: rus_Cyrl-slk_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.02371541501977
- type: f1
value: 97.42094861660078
- type: main_score
value: 97.42094861660078
- type: precision
value: 97.14262187088273
- type: recall
value: 98.02371541501977
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tir_Ethi)
type: mteb/flores
config: rus_Cyrl-tir_Ethi
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.30434782608695
- type: f1
value: 88.78129117259552
- type: main_score
value: 88.78129117259552
- type: precision
value: 87.61528326745717
- type: recall
value: 91.30434782608695
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-zho_Hant)
type: mteb/flores
config: rus_Cyrl-zho_Hant
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.1106719367589
- type: f1
value: 98.81422924901186
- type: main_score
value: 98.81422924901186
- type: precision
value: 98.66600790513834
- type: recall
value: 99.1106719367589
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-awa_Deva)
type: mteb/flores
config: rus_Cyrl-awa_Deva
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.12252964426878
- type: f1
value: 97.70092226613966
- type: main_score
value: 97.70092226613966
- type: precision
value: 97.50494071146245
- type: recall
value: 98.12252964426878
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-cym_Latn)
type: mteb/flores
config: rus_Cyrl-cym_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.94861660079052
- type: f1
value: 94.74308300395256
- type: main_score
value: 94.74308300395256
- type: precision
value: 94.20289855072464
- type: recall
value: 95.94861660079052
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-grn_Latn)
type: mteb/flores
config: rus_Cyrl-grn_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 77.96442687747036
- type: f1
value: 73.64286789187975
- type: main_score
value: 73.64286789187975
- type: precision
value: 71.99324893260821
- type: recall
value: 77.96442687747036
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kat_Geor)
type: mteb/flores
config: rus_Cyrl-kat_Geor
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.56719367588933
- type: main_score
value: 98.56719367588933
- type: precision
value: 98.40250329380764
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lua_Latn)
type: mteb/flores
config: rus_Cyrl-lua_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 72.03557312252964
- type: f1
value: 67.23928163404449
- type: main_score
value: 67.23928163404449
- type: precision
value: 65.30797101449275
- type: recall
value: 72.03557312252964
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-nya_Latn)
type: mteb/flores
config: rus_Cyrl-nya_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.29249011857708
- type: f1
value: 90.0494071146245
- type: main_score
value: 90.0494071146245
- type: precision
value: 89.04808959156786
- type: recall
value: 92.29249011857708
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-slv_Latn)
type: mteb/flores
config: rus_Cyrl-slv_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.71541501976284
- type: f1
value: 98.30368906455863
- type: main_score
value: 98.30368906455863
- type: precision
value: 98.10606060606061
- type: recall
value: 98.71541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tpi_Latn)
type: mteb/flores
config: rus_Cyrl-tpi_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 80.53359683794467
- type: f1
value: 76.59481822525301
- type: main_score
value: 76.59481822525301
- type: precision
value: 75.12913223140497
- type: recall
value: 80.53359683794467
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-zsm_Latn)
type: mteb/flores
config: rus_Cyrl-zsm_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.33201581027669
- type: f1
value: 96.58620365142104
- type: main_score
value: 96.58620365142104
- type: precision
value: 96.26152832674572
- type: recall
value: 97.33201581027669
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ayr_Latn)
type: mteb/flores
config: rus_Cyrl-ayr_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 45.55335968379446
- type: f1
value: 40.13076578531388
- type: main_score
value: 40.13076578531388
- type: precision
value: 38.398064362362355
- type: recall
value: 45.55335968379446
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-dan_Latn)
type: mteb/flores
config: rus_Cyrl-dan_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.01185770750988
- type: f1
value: 98.68247694334651
- type: main_score
value: 98.68247694334651
- type: precision
value: 98.51778656126481
- type: recall
value: 99.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-guj_Gujr)
type: mteb/flores
config: rus_Cyrl-guj_Gujr
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.01185770750988
- type: f1
value: 98.68247694334651
- type: main_score
value: 98.68247694334651
- type: precision
value: 98.51778656126481
- type: recall
value: 99.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kaz_Cyrl)
type: mteb/flores
config: rus_Cyrl-kaz_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.81422924901186
- type: f1
value: 98.43544137022398
- type: main_score
value: 98.43544137022398
- type: precision
value: 98.25428194993412
- type: recall
value: 98.81422924901186
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lug_Latn)
type: mteb/flores
config: rus_Cyrl-lug_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 82.21343873517787
- type: f1
value: 77.97485726833554
- type: main_score
value: 77.97485726833554
- type: precision
value: 76.22376717485415
- type: recall
value: 82.21343873517787
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-oci_Latn)
type: mteb/flores
config: rus_Cyrl-oci_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.87351778656127
- type: f1
value: 92.25319969885187
- type: main_score
value: 92.25319969885187
- type: precision
value: 91.5638528138528
- type: recall
value: 93.87351778656127
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-smo_Latn)
type: mteb/flores
config: rus_Cyrl-smo_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 84.88142292490119
- type: f1
value: 81.24364765669114
- type: main_score
value: 81.24364765669114
- type: precision
value: 79.69991416137661
- type: recall
value: 84.88142292490119
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tsn_Latn)
type: mteb/flores
config: rus_Cyrl-tsn_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 87.05533596837944
- type: f1
value: 83.90645586297761
- type: main_score
value: 83.90645586297761
- type: precision
value: 82.56752305665349
- type: recall
value: 87.05533596837944
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-zul_Latn)
type: mteb/flores
config: rus_Cyrl-zul_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.15810276679841
- type: f1
value: 93.77140974967062
- type: main_score
value: 93.77140974967062
- type: precision
value: 93.16534914361002
- type: recall
value: 95.15810276679841
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-azb_Arab)
type: mteb/flores
config: rus_Cyrl-azb_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 81.91699604743083
- type: f1
value: 77.18050065876152
- type: main_score
value: 77.18050065876152
- type: precision
value: 75.21519543258673
- type: recall
value: 81.91699604743083
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-deu_Latn)
type: mteb/flores
config: rus_Cyrl-deu_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.50592885375494
- type: f1
value: 99.34123847167325
- type: main_score
value: 99.34123847167325
- type: precision
value: 99.2588932806324
- type: recall
value: 99.50592885375494
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-hat_Latn)
type: mteb/flores
config: rus_Cyrl-hat_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.00790513833992
- type: f1
value: 88.69126043039086
- type: main_score
value: 88.69126043039086
- type: precision
value: 87.75774044795784
- type: recall
value: 91.00790513833992
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kbp_Latn)
type: mteb/flores
config: rus_Cyrl-kbp_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 47.233201581027664
- type: f1
value: 43.01118618096943
- type: main_score
value: 43.01118618096943
- type: precision
value: 41.739069205043556
- type: recall
value: 47.233201581027664
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-luo_Latn)
type: mteb/flores
config: rus_Cyrl-luo_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 60.47430830039525
- type: f1
value: 54.83210565429816
- type: main_score
value: 54.83210565429816
- type: precision
value: 52.81630744284779
- type: recall
value: 60.47430830039525
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-ory_Orya)
type: mteb/flores
config: rus_Cyrl-ory_Orya
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.1106719367589
- type: f1
value: 98.83069828722003
- type: main_score
value: 98.83069828722003
- type: precision
value: 98.69894598155467
- type: recall
value: 99.1106719367589
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-sna_Latn)
type: mteb/flores
config: rus_Cyrl-sna_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 89.72332015810277
- type: f1
value: 87.30013645774514
- type: main_score
value: 87.30013645774514
- type: precision
value: 86.25329380764163
- type: recall
value: 89.72332015810277
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tso_Latn)
type: mteb/flores
config: rus_Cyrl-tso_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 84.38735177865613
- type: f1
value: 80.70424744337788
- type: main_score
value: 80.70424744337788
- type: precision
value: 79.18560606060606
- type: recall
value: 84.38735177865613
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-azj_Latn)
type: mteb/flores
config: rus_Cyrl-azj_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.33201581027669
- type: f1
value: 96.56455862977602
- type: main_score
value: 96.56455862977602
- type: precision
value: 96.23682476943345
- type: recall
value: 97.33201581027669
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-dik_Latn)
type: mteb/flores
config: rus_Cyrl-dik_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 46.047430830039524
- type: f1
value: 40.05513069495283
- type: main_score
value: 40.05513069495283
- type: precision
value: 38.072590197096126
- type: recall
value: 46.047430830039524
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-hau_Latn)
type: mteb/flores
config: rus_Cyrl-hau_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 87.94466403162056
- type: f1
value: 84.76943346508563
- type: main_score
value: 84.76943346508563
- type: precision
value: 83.34486166007905
- type: recall
value: 87.94466403162056
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-kea_Latn)
type: mteb/flores
config: rus_Cyrl-kea_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 89.42687747035573
- type: f1
value: 86.83803021747684
- type: main_score
value: 86.83803021747684
- type: precision
value: 85.78416149068323
- type: recall
value: 89.42687747035573
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lus_Latn)
type: mteb/flores
config: rus_Cyrl-lus_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 68.97233201581028
- type: f1
value: 64.05480726292745
- type: main_score
value: 64.05480726292745
- type: precision
value: 62.42670749487858
- type: recall
value: 68.97233201581028
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-pag_Latn)
type: mteb/flores
config: rus_Cyrl-pag_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 78.75494071146245
- type: f1
value: 74.58573558401933
- type: main_score
value: 74.58573558401933
- type: precision
value: 73.05532028358115
- type: recall
value: 78.75494071146245
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-snd_Arab)
type: mteb/flores
config: rus_Cyrl-snd_Arab
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.8498023715415
- type: f1
value: 94.56521739130434
- type: main_score
value: 94.56521739130434
- type: precision
value: 93.97233201581028
- type: recall
value: 95.8498023715415
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tuk_Latn)
type: mteb/flores
config: rus_Cyrl-tuk_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 68.08300395256917
- type: f1
value: 62.93565240205557
- type: main_score
value: 62.93565240205557
- type: precision
value: 61.191590257043934
- type: recall
value: 68.08300395256917
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-bak_Cyrl)
type: mteb/flores
config: rus_Cyrl-bak_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.04743083003953
- type: f1
value: 94.86824769433464
- type: main_score
value: 94.86824769433464
- type: precision
value: 94.34288537549406
- type: recall
value: 96.04743083003953
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-dyu_Latn)
type: mteb/flores
config: rus_Cyrl-dyu_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 37.45059288537549
- type: f1
value: 31.670482312800807
- type: main_score
value: 31.670482312800807
- type: precision
value: 29.99928568357422
- type: recall
value: 37.45059288537549
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-heb_Hebr)
type: mteb/flores
config: rus_Cyrl-heb_Hebr
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.23320158102767
- type: f1
value: 96.38998682476942
- type: main_score
value: 96.38998682476942
- type: precision
value: 95.99802371541502
- type: recall
value: 97.23320158102767
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-khk_Cyrl)
type: mteb/flores
config: rus_Cyrl-khk_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.41897233201581
- type: f1
value: 98.00724637681158
- type: main_score
value: 98.00724637681158
- type: precision
value: 97.82938076416336
- type: recall
value: 98.41897233201581
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-lvs_Latn)
type: mteb/flores
config: rus_Cyrl-lvs_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.4308300395257
- type: f1
value: 96.61396574440053
- type: main_score
value: 96.61396574440053
- type: precision
value: 96.2203557312253
- type: recall
value: 97.4308300395257
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-pan_Guru)
type: mteb/flores
config: rus_Cyrl-pan_Guru
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.30830039525692
- type: f1
value: 99.07773386034256
- type: main_score
value: 99.07773386034256
- type: precision
value: 98.96245059288538
- type: recall
value: 99.30830039525692
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-som_Latn)
type: mteb/flores
config: rus_Cyrl-som_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 87.74703557312253
- type: f1
value: 84.52898550724638
- type: main_score
value: 84.52898550724638
- type: precision
value: 83.09288537549409
- type: recall
value: 87.74703557312253
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (rus_Cyrl-tum_Latn)
type: mteb/flores
config: rus_Cyrl-tum_Latn
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 87.15415019762845
- type: f1
value: 83.85069640504425
- type: main_score
value: 83.85069640504425
- type: precision
value: 82.43671183888576
- type: recall
value: 87.15415019762845
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (taq_Latn-rus_Cyrl)
type: mteb/flores
config: taq_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 28.55731225296443
- type: f1
value: 26.810726360049568
- type: main_score
value: 26.810726360049568
- type: precision
value: 26.260342858265577
- type: recall
value: 28.55731225296443
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (war_Latn-rus_Cyrl)
type: mteb/flores
config: war_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.86166007905138
- type: f1
value: 94.03147083483051
- type: main_score
value: 94.03147083483051
- type: precision
value: 93.70653606003322
- type: recall
value: 94.86166007905138
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (arb_Arab-rus_Cyrl)
type: mteb/flores
config: arb_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.34387351778656
- type: f1
value: 95.23056653491436
- type: main_score
value: 95.23056653491436
- type: precision
value: 94.70520421607378
- type: recall
value: 96.34387351778656
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bul_Cyrl-rus_Cyrl)
type: mteb/flores
config: bul_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.90118577075098
- type: f1
value: 99.86824769433464
- type: main_score
value: 99.86824769433464
- type: precision
value: 99.85177865612648
- type: recall
value: 99.90118577075098
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (fra_Latn-rus_Cyrl)
type: mteb/flores
config: fra_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 98.9459815546772
- type: main_score
value: 98.9459815546772
- type: precision
value: 98.81422924901186
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (jpn_Jpan-rus_Cyrl)
type: mteb/flores
config: jpn_Jpan-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.3201581027668
- type: f1
value: 97.76021080368905
- type: main_score
value: 97.76021080368905
- type: precision
value: 97.48023715415019
- type: recall
value: 98.3201581027668
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lij_Latn-rus_Cyrl)
type: mteb/flores
config: lij_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 83.49802371541502
- type: f1
value: 81.64800059239636
- type: main_score
value: 81.64800059239636
- type: precision
value: 80.9443055878478
- type: recall
value: 83.49802371541502
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (mya_Mymr-rus_Cyrl)
type: mteb/flores
config: mya_Mymr-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 90.21739130434783
- type: f1
value: 88.76776366313682
- type: main_score
value: 88.76776366313682
- type: precision
value: 88.18370446119435
- type: recall
value: 90.21739130434783
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (sag_Latn-rus_Cyrl)
type: mteb/flores
config: sag_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 41.699604743083
- type: f1
value: 39.53066322643847
- type: main_score
value: 39.53066322643847
- type: precision
value: 38.822876239229274
- type: recall
value: 41.699604743083
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (taq_Tfng-rus_Cyrl)
type: mteb/flores
config: taq_Tfng-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 10.67193675889328
- type: f1
value: 9.205744965817951
- type: main_score
value: 9.205744965817951
- type: precision
value: 8.85195219073817
- type: recall
value: 10.67193675889328
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (wol_Latn-rus_Cyrl)
type: mteb/flores
config: wol_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 63.537549407114625
- type: f1
value: 60.65190727391827
- type: main_score
value: 60.65190727391827
- type: precision
value: 59.61144833427442
- type: recall
value: 63.537549407114625
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (arb_Latn-rus_Cyrl)
type: mteb/flores
config: arb_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 13.142292490118576
- type: f1
value: 12.372910318176764
- type: main_score
value: 12.372910318176764
- type: precision
value: 12.197580895919188
- type: recall
value: 13.142292490118576
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (cat_Latn-rus_Cyrl)
type: mteb/flores
config: cat_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.01185770750988
- type: f1
value: 98.80599472990777
- type: main_score
value: 98.80599472990777
- type: precision
value: 98.72953133822698
- type: recall
value: 99.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (fur_Latn-rus_Cyrl)
type: mteb/flores
config: fur_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 81.02766798418972
- type: f1
value: 79.36184294084613
- type: main_score
value: 79.36184294084613
- type: precision
value: 78.69187826527705
- type: recall
value: 81.02766798418972
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kab_Latn-rus_Cyrl)
type: mteb/flores
config: kab_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 34.387351778656125
- type: f1
value: 32.02306921576947
- type: main_score
value: 32.02306921576947
- type: precision
value: 31.246670347137467
- type: recall
value: 34.387351778656125
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lim_Latn-rus_Cyrl)
type: mteb/flores
config: lim_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 78.26086956521739
- type: f1
value: 75.90239449214359
- type: main_score
value: 75.90239449214359
- type: precision
value: 75.02211430745493
- type: recall
value: 78.26086956521739
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (nld_Latn-rus_Cyrl)
type: mteb/flores
config: nld_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 98.9459815546772
- type: main_score
value: 98.9459815546772
- type: precision
value: 98.81422924901186
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (san_Deva-rus_Cyrl)
type: mteb/flores
config: san_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 87.94466403162056
- type: f1
value: 86.68928897189767
- type: main_score
value: 86.68928897189767
- type: precision
value: 86.23822997079216
- type: recall
value: 87.94466403162056
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tat_Cyrl-rus_Cyrl)
type: mteb/flores
config: tat_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.03557312252964
- type: f1
value: 96.4167365353136
- type: main_score
value: 96.4167365353136
- type: precision
value: 96.16847826086958
- type: recall
value: 97.03557312252964
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (xho_Latn-rus_Cyrl)
type: mteb/flores
config: xho_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 86.95652173913044
- type: f1
value: 85.5506497283435
- type: main_score
value: 85.5506497283435
- type: precision
value: 84.95270479733395
- type: recall
value: 86.95652173913044
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ars_Arab-rus_Cyrl)
type: mteb/flores
config: ars_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 96.6403162055336
- type: f1
value: 95.60935441370223
- type: main_score
value: 95.60935441370223
- type: precision
value: 95.13339920948617
- type: recall
value: 96.6403162055336
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ceb_Latn-rus_Cyrl)
type: mteb/flores
config: ceb_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.7509881422925
- type: f1
value: 95.05209198303827
- type: main_score
value: 95.05209198303827
- type: precision
value: 94.77662283368805
- type: recall
value: 95.7509881422925
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (fuv_Latn-rus_Cyrl)
type: mteb/flores
config: fuv_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 45.25691699604743
- type: f1
value: 42.285666666742365
- type: main_score
value: 42.285666666742365
- type: precision
value: 41.21979853402283
- type: recall
value: 45.25691699604743
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kac_Latn-rus_Cyrl)
type: mteb/flores
config: kac_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 34.683794466403164
- type: f1
value: 33.3235346229031
- type: main_score
value: 33.3235346229031
- type: precision
value: 32.94673924616852
- type: recall
value: 34.683794466403164
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lin_Latn-rus_Cyrl)
type: mteb/flores
config: lin_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 86.85770750988142
- type: f1
value: 85.1867110799439
- type: main_score
value: 85.1867110799439
- type: precision
value: 84.53038212173273
- type: recall
value: 86.85770750988142
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (nno_Latn-rus_Cyrl)
type: mteb/flores
config: nno_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.4308300395257
- type: f1
value: 96.78383210991906
- type: main_score
value: 96.78383210991906
- type: precision
value: 96.51185770750989
- type: recall
value: 97.4308300395257
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (sat_Olck-rus_Cyrl)
type: mteb/flores
config: sat_Olck-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 1.185770750988142
- type: f1
value: 1.0279253129117258
- type: main_score
value: 1.0279253129117258
- type: precision
value: 1.0129746819135175
- type: recall
value: 1.185770750988142
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tel_Telu-rus_Cyrl)
type: mteb/flores
config: tel_Telu-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.12252964426878
- type: f1
value: 97.61198945981555
- type: main_score
value: 97.61198945981555
- type: precision
value: 97.401185770751
- type: recall
value: 98.12252964426878
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ydd_Hebr-rus_Cyrl)
type: mteb/flores
config: ydd_Hebr-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 75.8893280632411
- type: f1
value: 74.00244008018511
- type: main_score
value: 74.00244008018511
- type: precision
value: 73.25683020960382
- type: recall
value: 75.8893280632411
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ary_Arab-rus_Cyrl)
type: mteb/flores
config: ary_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 86.56126482213439
- type: f1
value: 83.72796285839765
- type: main_score
value: 83.72796285839765
- type: precision
value: 82.65014273166447
- type: recall
value: 86.56126482213439
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ces_Latn-rus_Cyrl)
type: mteb/flores
config: ces_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.60474308300395
- type: f1
value: 99.4729907773386
- type: main_score
value: 99.4729907773386
- type: precision
value: 99.40711462450594
- type: recall
value: 99.60474308300395
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (gaz_Latn-rus_Cyrl)
type: mteb/flores
config: gaz_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 42.58893280632411
- type: f1
value: 40.75832866805978
- type: main_score
value: 40.75832866805978
- type: precision
value: 40.14285046917723
- type: recall
value: 42.58893280632411
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kam_Latn-rus_Cyrl)
type: mteb/flores
config: kam_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 45.25691699604743
- type: f1
value: 42.6975518029456
- type: main_score
value: 42.6975518029456
- type: precision
value: 41.87472710984596
- type: recall
value: 45.25691699604743
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lit_Latn-rus_Cyrl)
type: mteb/flores
config: lit_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.33201581027669
- type: f1
value: 96.62384716732542
- type: main_score
value: 96.62384716732542
- type: precision
value: 96.3175230566535
- type: recall
value: 97.33201581027669
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (nob_Latn-rus_Cyrl)
type: mteb/flores
config: nob_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.71541501976284
- type: f1
value: 98.30368906455863
- type: main_score
value: 98.30368906455863
- type: precision
value: 98.10606060606061
- type: recall
value: 98.71541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (scn_Latn-rus_Cyrl)
type: mteb/flores
config: scn_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 70.45454545454545
- type: f1
value: 68.62561022640075
- type: main_score
value: 68.62561022640075
- type: precision
value: 67.95229103411222
- type: recall
value: 70.45454545454545
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tgk_Cyrl-rus_Cyrl)
type: mteb/flores
config: tgk_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.4901185770751
- type: f1
value: 91.58514492753623
- type: main_score
value: 91.58514492753623
- type: precision
value: 91.24759298672342
- type: recall
value: 92.4901185770751
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (yor_Latn-rus_Cyrl)
type: mteb/flores
config: yor_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 67.98418972332016
- type: f1
value: 64.72874247330768
- type: main_score
value: 64.72874247330768
- type: precision
value: 63.450823399938685
- type: recall
value: 67.98418972332016
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (arz_Arab-rus_Cyrl)
type: mteb/flores
config: arz_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 94.56521739130434
- type: f1
value: 93.07971014492755
- type: main_score
value: 93.07971014492755
- type: precision
value: 92.42753623188406
- type: recall
value: 94.56521739130434
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (cjk_Latn-rus_Cyrl)
type: mteb/flores
config: cjk_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 38.63636363636363
- type: f1
value: 36.25747140862938
- type: main_score
value: 36.25747140862938
- type: precision
value: 35.49101355074723
- type: recall
value: 38.63636363636363
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (gla_Latn-rus_Cyrl)
type: mteb/flores
config: gla_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 69.26877470355731
- type: f1
value: 66.11797423328613
- type: main_score
value: 66.11797423328613
- type: precision
value: 64.89369649409694
- type: recall
value: 69.26877470355731
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kan_Knda-rus_Cyrl)
type: mteb/flores
config: kan_Knda-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.02371541501977
- type: f1
value: 97.51505740636176
- type: main_score
value: 97.51505740636176
- type: precision
value: 97.30731225296442
- type: recall
value: 98.02371541501977
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lmo_Latn-rus_Cyrl)
type: mteb/flores
config: lmo_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 73.3201581027668
- type: f1
value: 71.06371608677273
- type: main_score
value: 71.06371608677273
- type: precision
value: 70.26320288266223
- type: recall
value: 73.3201581027668
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (npi_Deva-rus_Cyrl)
type: mteb/flores
config: npi_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.82608695652173
- type: f1
value: 97.36645107198466
- type: main_score
value: 97.36645107198466
- type: precision
value: 97.1772068511199
- type: recall
value: 97.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (shn_Mymr-rus_Cyrl)
type: mteb/flores
config: shn_Mymr-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 39.426877470355734
- type: f1
value: 37.16728785513024
- type: main_score
value: 37.16728785513024
- type: precision
value: 36.56918548278505
- type: recall
value: 39.426877470355734
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tgl_Latn-rus_Cyrl)
type: mteb/flores
config: tgl_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.92490118577075
- type: f1
value: 97.6378693769998
- type: main_score
value: 97.6378693769998
- type: precision
value: 97.55371440154047
- type: recall
value: 97.92490118577075
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (yue_Hant-rus_Cyrl)
type: mteb/flores
config: yue_Hant-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.92490118577075
- type: f1
value: 97.3833051006964
- type: main_score
value: 97.3833051006964
- type: precision
value: 97.1590909090909
- type: recall
value: 97.92490118577075
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (asm_Beng-rus_Cyrl)
type: mteb/flores
config: asm_Beng-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.78656126482213
- type: f1
value: 91.76917395296842
- type: main_score
value: 91.76917395296842
- type: precision
value: 91.38292866553736
- type: recall
value: 92.78656126482213
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ckb_Arab-rus_Cyrl)
type: mteb/flores
config: ckb_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 80.8300395256917
- type: f1
value: 79.17664345468799
- type: main_score
value: 79.17664345468799
- type: precision
value: 78.5622171683459
- type: recall
value: 80.8300395256917
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (gle_Latn-rus_Cyrl)
type: mteb/flores
config: gle_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 85.86956521739131
- type: f1
value: 84.45408265372492
- type: main_score
value: 84.45408265372492
- type: precision
value: 83.8774340026703
- type: recall
value: 85.86956521739131
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kas_Arab-rus_Cyrl)
type: mteb/flores
config: kas_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 76.28458498023716
- type: f1
value: 74.11216313578267
- type: main_score
value: 74.11216313578267
- type: precision
value: 73.2491277759584
- type: recall
value: 76.28458498023716
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ltg_Latn-rus_Cyrl)
type: mteb/flores
config: ltg_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 71.14624505928853
- type: f1
value: 68.69245357723618
- type: main_score
value: 68.69245357723618
- type: precision
value: 67.8135329666459
- type: recall
value: 71.14624505928853
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (nso_Latn-rus_Cyrl)
type: mteb/flores
config: nso_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 87.64822134387352
- type: f1
value: 85.98419219986725
- type: main_score
value: 85.98419219986725
- type: precision
value: 85.32513873917036
- type: recall
value: 87.64822134387352
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (sin_Sinh-rus_Cyrl)
type: mteb/flores
config: sin_Sinh-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.62845849802372
- type: f1
value: 97.10144927536231
- type: main_score
value: 97.10144927536231
- type: precision
value: 96.87986585219788
- type: recall
value: 97.62845849802372
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tha_Thai-rus_Cyrl)
type: mteb/flores
config: tha_Thai-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.71541501976284
- type: f1
value: 98.28722002635045
- type: main_score
value: 98.28722002635045
- type: precision
value: 98.07312252964427
- type: recall
value: 98.71541501976284
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (zho_Hans-rus_Cyrl)
type: mteb/flores
config: zho_Hans-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.01185770750988
- type: f1
value: 98.68247694334651
- type: main_score
value: 98.68247694334651
- type: precision
value: 98.51778656126481
- type: recall
value: 99.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ast_Latn-rus_Cyrl)
type: mteb/flores
config: ast_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.65217391304348
- type: f1
value: 94.90649683857505
- type: main_score
value: 94.90649683857505
- type: precision
value: 94.61352657004831
- type: recall
value: 95.65217391304348
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (crh_Latn-rus_Cyrl)
type: mteb/flores
config: crh_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 93.08300395256917
- type: f1
value: 92.20988998886428
- type: main_score
value: 92.20988998886428
- type: precision
value: 91.85631013694254
- type: recall
value: 93.08300395256917
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (glg_Latn-rus_Cyrl)
type: mteb/flores
config: glg_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.55335968379447
- type: f1
value: 95.18006148440931
- type: main_score
value: 95.18006148440931
- type: precision
value: 95.06540560888386
- type: recall
value: 95.55335968379447
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kas_Deva-rus_Cyrl)
type: mteb/flores
config: kas_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 55.03952569169961
- type: f1
value: 52.19871938895554
- type: main_score
value: 52.19871938895554
- type: precision
value: 51.17660971469557
- type: recall
value: 55.03952569169961
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ltz_Latn-rus_Cyrl)
type: mteb/flores
config: ltz_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 87.64822134387352
- type: f1
value: 86.64179841897234
- type: main_score
value: 86.64179841897234
- type: precision
value: 86.30023235431587
- type: recall
value: 87.64822134387352
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (nus_Latn-rus_Cyrl)
type: mteb/flores
config: nus_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 27.4703557312253
- type: f1
value: 25.703014277858088
- type: main_score
value: 25.703014277858088
- type: precision
value: 25.194105476917315
- type: recall
value: 27.4703557312253
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (slk_Latn-rus_Cyrl)
type: mteb/flores
config: slk_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.30830039525692
- type: f1
value: 99.1106719367589
- type: main_score
value: 99.1106719367589
- type: precision
value: 99.02832674571805
- type: recall
value: 99.30830039525692
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tir_Ethi-rus_Cyrl)
type: mteb/flores
config: tir_Ethi-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 80.73122529644269
- type: f1
value: 78.66903754775608
- type: main_score
value: 78.66903754775608
- type: precision
value: 77.86431694163612
- type: recall
value: 80.73122529644269
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (zho_Hant-rus_Cyrl)
type: mteb/flores
config: zho_Hant-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.22134387351778
- type: f1
value: 97.66798418972333
- type: main_score
value: 97.66798418972333
- type: precision
value: 97.40612648221344
- type: recall
value: 98.22134387351778
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (awa_Deva-rus_Cyrl)
type: mteb/flores
config: awa_Deva-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.5296442687747
- type: f1
value: 96.94224857268335
- type: main_score
value: 96.94224857268335
- type: precision
value: 96.68560606060606
- type: recall
value: 97.5296442687747
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (cym_Latn-rus_Cyrl)
type: mteb/flores
config: cym_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 92.68774703557312
- type: f1
value: 91.69854302097961
- type: main_score
value: 91.69854302097961
- type: precision
value: 91.31236846157795
- type: recall
value: 92.68774703557312
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (grn_Latn-rus_Cyrl)
type: mteb/flores
config: grn_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 64.13043478260869
- type: f1
value: 61.850586118740004
- type: main_score
value: 61.850586118740004
- type: precision
value: 61.0049495186209
- type: recall
value: 64.13043478260869
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kat_Geor-rus_Cyrl)
type: mteb/flores
config: kat_Geor-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.02371541501977
- type: f1
value: 97.59881422924902
- type: main_score
value: 97.59881422924902
- type: precision
value: 97.42534036012296
- type: recall
value: 98.02371541501977
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lua_Latn-rus_Cyrl)
type: mteb/flores
config: lua_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 63.63636363636363
- type: f1
value: 60.9709122526128
- type: main_score
value: 60.9709122526128
- type: precision
value: 60.03915902282226
- type: recall
value: 63.63636363636363
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (nya_Latn-rus_Cyrl)
type: mteb/flores
config: nya_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 89.2292490118577
- type: f1
value: 87.59723824473149
- type: main_score
value: 87.59723824473149
- type: precision
value: 86.90172707867349
- type: recall
value: 89.2292490118577
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (slv_Latn-rus_Cyrl)
type: mteb/flores
config: slv_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.01185770750988
- type: f1
value: 98.74835309617917
- type: main_score
value: 98.74835309617917
- type: precision
value: 98.63636363636364
- type: recall
value: 99.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tpi_Latn-rus_Cyrl)
type: mteb/flores
config: tpi_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 77.37154150197628
- type: f1
value: 75.44251611276084
- type: main_score
value: 75.44251611276084
- type: precision
value: 74.78103665109595
- type: recall
value: 77.37154150197628
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (zsm_Latn-rus_Cyrl)
type: mteb/flores
config: zsm_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.2094861660079
- type: f1
value: 98.96245059288538
- type: main_score
value: 98.96245059288538
- type: precision
value: 98.8471673254282
- type: recall
value: 99.2094861660079
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ayr_Latn-rus_Cyrl)
type: mteb/flores
config: ayr_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 27.766798418972332
- type: f1
value: 26.439103195281312
- type: main_score
value: 26.439103195281312
- type: precision
value: 26.052655604573964
- type: recall
value: 27.766798418972332
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (dan_Latn-rus_Cyrl)
type: mteb/flores
config: dan_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.30830039525692
- type: f1
value: 99.07773386034255
- type: main_score
value: 99.07773386034255
- type: precision
value: 98.96245059288538
- type: recall
value: 99.30830039525692
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (guj_Gujr-rus_Cyrl)
type: mteb/flores
config: guj_Gujr-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.82608695652173
- type: f1
value: 97.26449275362317
- type: main_score
value: 97.26449275362317
- type: precision
value: 97.02498588368154
- type: recall
value: 97.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kaz_Cyrl-rus_Cyrl)
type: mteb/flores
config: kaz_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.5296442687747
- type: f1
value: 97.03557312252964
- type: main_score
value: 97.03557312252964
- type: precision
value: 96.85022158342316
- type: recall
value: 97.5296442687747
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lug_Latn-rus_Cyrl)
type: mteb/flores
config: lug_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 68.57707509881423
- type: f1
value: 65.93361605820395
- type: main_score
value: 65.93361605820395
- type: precision
value: 64.90348248593789
- type: recall
value: 68.57707509881423
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (oci_Latn-rus_Cyrl)
type: mteb/flores
config: oci_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 86.26482213438736
- type: f1
value: 85.33176417155623
- type: main_score
value: 85.33176417155623
- type: precision
value: 85.00208833384637
- type: recall
value: 86.26482213438736
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (smo_Latn-rus_Cyrl)
type: mteb/flores
config: smo_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 77.96442687747036
- type: f1
value: 75.70960450188885
- type: main_score
value: 75.70960450188885
- type: precision
value: 74.8312632736777
- type: recall
value: 77.96442687747036
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tsn_Latn-rus_Cyrl)
type: mteb/flores
config: tsn_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 84.38735177865613
- type: f1
value: 82.13656376349225
- type: main_score
value: 82.13656376349225
- type: precision
value: 81.16794543904518
- type: recall
value: 84.38735177865613
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (zul_Latn-rus_Cyrl)
type: mteb/flores
config: zul_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 90.21739130434783
- type: f1
value: 88.77570602050753
- type: main_score
value: 88.77570602050753
- type: precision
value: 88.15978104021582
- type: recall
value: 90.21739130434783
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (azb_Arab-rus_Cyrl)
type: mteb/flores
config: azb_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 65.71146245059289
- type: f1
value: 64.18825390221271
- type: main_score
value: 64.18825390221271
- type: precision
value: 63.66811154793568
- type: recall
value: 65.71146245059289
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (deu_Latn-rus_Cyrl)
type: mteb/flores
config: deu_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 99.70355731225297
- type: f1
value: 99.60474308300395
- type: main_score
value: 99.60474308300395
- type: precision
value: 99.55533596837944
- type: recall
value: 99.70355731225297
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (hat_Latn-rus_Cyrl)
type: mteb/flores
config: hat_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 86.7588932806324
- type: f1
value: 85.86738623695146
- type: main_score
value: 85.86738623695146
- type: precision
value: 85.55235467420822
- type: recall
value: 86.7588932806324
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kbp_Latn-rus_Cyrl)
type: mteb/flores
config: kbp_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 34.88142292490119
- type: f1
value: 32.16511669463015
- type: main_score
value: 32.16511669463015
- type: precision
value: 31.432098549546318
- type: recall
value: 34.88142292490119
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (luo_Latn-rus_Cyrl)
type: mteb/flores
config: luo_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 52.27272727272727
- type: f1
value: 49.60489626836975
- type: main_score
value: 49.60489626836975
- type: precision
value: 48.69639631803339
- type: recall
value: 52.27272727272727
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (ory_Orya-rus_Cyrl)
type: mteb/flores
config: ory_Orya-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.82608695652173
- type: f1
value: 97.27437417654808
- type: main_score
value: 97.27437417654808
- type: precision
value: 97.04968944099377
- type: recall
value: 97.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (sna_Latn-rus_Cyrl)
type: mteb/flores
config: sna_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 85.37549407114624
- type: f1
value: 83.09911316305177
- type: main_score
value: 83.09911316305177
- type: precision
value: 82.1284950958864
- type: recall
value: 85.37549407114624
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tso_Latn-rus_Cyrl)
type: mteb/flores
config: tso_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 82.90513833992095
- type: f1
value: 80.28290385503824
- type: main_score
value: 80.28290385503824
- type: precision
value: 79.23672543237761
- type: recall
value: 82.90513833992095
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (azj_Latn-rus_Cyrl)
type: mteb/flores
config: azj_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.02371541501977
- type: f1
value: 97.49200075287031
- type: main_score
value: 97.49200075287031
- type: precision
value: 97.266139657444
- type: recall
value: 98.02371541501977
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (dik_Latn-rus_Cyrl)
type: mteb/flores
config: dik_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 38.43873517786561
- type: f1
value: 35.78152442955223
- type: main_score
value: 35.78152442955223
- type: precision
value: 34.82424325078237
- type: recall
value: 38.43873517786561
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (hau_Latn-rus_Cyrl)
type: mteb/flores
config: hau_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 81.42292490118577
- type: f1
value: 79.24612283124593
- type: main_score
value: 79.24612283124593
- type: precision
value: 78.34736070751448
- type: recall
value: 81.42292490118577
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (kea_Latn-rus_Cyrl)
type: mteb/flores
config: kea_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 81.62055335968378
- type: f1
value: 80.47015182884748
- type: main_score
value: 80.47015182884748
- type: precision
value: 80.02671028885862
- type: recall
value: 81.62055335968378
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lus_Latn-rus_Cyrl)
type: mteb/flores
config: lus_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 62.74703557312253
- type: f1
value: 60.53900079111122
- type: main_score
value: 60.53900079111122
- type: precision
value: 59.80024202850289
- type: recall
value: 62.74703557312253
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (pag_Latn-rus_Cyrl)
type: mteb/flores
config: pag_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 74.01185770750988
- type: f1
value: 72.57280648279529
- type: main_score
value: 72.57280648279529
- type: precision
value: 71.99952968456789
- type: recall
value: 74.01185770750988
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (snd_Arab-rus_Cyrl)
type: mteb/flores
config: snd_Arab-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 91.30434782608695
- type: f1
value: 90.24653499445358
- type: main_score
value: 90.24653499445358
- type: precision
value: 89.83134068200232
- type: recall
value: 91.30434782608695
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tuk_Latn-rus_Cyrl)
type: mteb/flores
config: tuk_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 47.62845849802372
- type: f1
value: 45.812928836644254
- type: main_score
value: 45.812928836644254
- type: precision
value: 45.23713833170355
- type: recall
value: 47.62845849802372
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (bak_Cyrl-rus_Cyrl)
type: mteb/flores
config: bak_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.8498023715415
- type: f1
value: 95.18904459615922
- type: main_score
value: 95.18904459615922
- type: precision
value: 94.92812441182006
- type: recall
value: 95.8498023715415
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (dyu_Latn-rus_Cyrl)
type: mteb/flores
config: dyu_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 29.64426877470356
- type: f1
value: 27.287335193938166
- type: main_score
value: 27.287335193938166
- type: precision
value: 26.583996026587492
- type: recall
value: 29.64426877470356
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (heb_Hebr-rus_Cyrl)
type: mteb/flores
config: heb_Hebr-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 98.91304347826086
- type: f1
value: 98.55072463768116
- type: main_score
value: 98.55072463768116
- type: precision
value: 98.36956521739131
- type: recall
value: 98.91304347826086
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (khk_Cyrl-rus_Cyrl)
type: mteb/flores
config: khk_Cyrl-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 95.15810276679841
- type: f1
value: 94.44009547764487
- type: main_score
value: 94.44009547764487
- type: precision
value: 94.16579797014579
- type: recall
value: 95.15810276679841
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (lvs_Latn-rus_Cyrl)
type: mteb/flores
config: lvs_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.92490118577075
- type: f1
value: 97.51467241585817
- type: main_score
value: 97.51467241585817
- type: precision
value: 97.36166007905138
- type: recall
value: 97.92490118577075
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (pan_Guru-rus_Cyrl)
type: mteb/flores
config: pan_Guru-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 97.92490118577075
- type: f1
value: 97.42918313570486
- type: main_score
value: 97.42918313570486
- type: precision
value: 97.22261434217955
- type: recall
value: 97.92490118577075
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (som_Latn-rus_Cyrl)
type: mteb/flores
config: som_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 75.69169960474308
- type: f1
value: 73.7211667065916
- type: main_score
value: 73.7211667065916
- type: precision
value: 72.95842401892384
- type: recall
value: 75.69169960474308
- task:
type: BitextMining
dataset:
name: MTEB FloresBitextMining (tum_Latn-rus_Cyrl)
type: mteb/flores
config: tum_Latn-rus_Cyrl
split: devtest
revision: e6b647fcb6299a2f686f742f4d4c023e553ea67e
metrics:
- type: accuracy
value: 85.67193675889328
- type: f1
value: 82.9296066252588
- type: main_score
value: 82.9296066252588
- type: precision
value: 81.77330225447936
- type: recall
value: 85.67193675889328
- task:
type: Classification
dataset:
name: MTEB GeoreviewClassification (default)
type: ai-forever/georeview-classification
config: default
split: test
revision: 3765c0d1de6b7d264bc459433c45e5a75513839c
metrics:
- type: accuracy
value: 44.6630859375
- type: f1
value: 42.607425073610536
- type: f1_weighted
value: 42.60639474586065
- type: main_score
value: 44.6630859375
- task:
type: Clustering
dataset:
name: MTEB GeoreviewClusteringP2P (default)
type: ai-forever/georeview-clustering-p2p
config: default
split: test
revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec
metrics:
- type: main_score
value: 58.15951247070825
- type: v_measure
value: 58.15951247070825
- type: v_measure_std
value: 0.6739615788288809
- task:
type: Classification
dataset:
name: MTEB HeadlineClassification (default)
type: ai-forever/headline-classification
config: default
split: test
revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb
metrics:
- type: accuracy
value: 73.935546875
- type: f1
value: 73.8654872186846
- type: f1_weighted
value: 73.86733122685095
- type: main_score
value: 73.935546875
- task:
type: Classification
dataset:
name: MTEB InappropriatenessClassification (default)
type: ai-forever/inappropriateness-classification
config: default
split: test
revision: 601651fdc45ef243751676e62dd7a19f491c0285
metrics:
- type: accuracy
value: 59.16015624999999
- type: ap
value: 55.52276605836938
- type: ap_weighted
value: 55.52276605836938
- type: f1
value: 58.614248199637956
- type: f1_weighted
value: 58.614248199637956
- type: main_score
value: 59.16015624999999
- task:
type: Classification
dataset:
name: MTEB KinopoiskClassification (default)
type: ai-forever/kinopoisk-sentiment-classification
config: default
split: test
revision: 5911f26666ac11af46cb9c6849d0dc80a378af24
metrics:
- type: accuracy
value: 49.959999999999994
- type: f1
value: 48.4900332316098
- type: f1_weighted
value: 48.4900332316098
- type: main_score
value: 49.959999999999994
- task:
type: Classification
dataset:
name: MTEB LanguageClassification (default)
type: papluca/language-identification
config: default
split: test
revision: aa56583bf2bc52b0565770607d6fc3faebecf9e2
metrics:
- type: accuracy
value: 71.005859375
- type: f1
value: 69.63481100303348
- type: f1_weighted
value: 69.64640413409529
- type: main_score
value: 71.005859375
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P (ru)
type: reciTAL/mlsum
config: ru
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: main_score
value: 42.11280087032343
- type: v_measure
value: 42.11280087032343
- type: v_measure_std
value: 6.7619971723605135
- type: main_score
value: 43.00112546945811
- type: v_measure
value: 43.00112546945811
- type: v_measure_std
value: 1.4740560414835675
- type: main_score
value: 39.81446080575161
- type: v_measure
value: 39.81446080575161
- type: v_measure_std
value: 7.125661320308298
- type: main_score
value: 39.29659668980239
- type: v_measure
value: 39.29659668980239
- type: v_measure_std
value: 2.6570502923023094
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (ru)
type: Shitao/MLDR
config: ru
split: dev
revision: d67138e705d963e346253a80e59676ddb418810a
metrics:
- type: main_score
value: 38.671
- type: map_at_1
value: 30.0
- type: map_at_10
value: 36.123
- type: map_at_100
value: 36.754999999999995
- type: map_at_1000
value: 36.806
- type: map_at_20
value: 36.464
- type: map_at_3
value: 35.25
- type: map_at_5
value: 35.8
- type: mrr_at_1
value: 30.0
- type: mrr_at_10
value: 36.122817460317464
- type: mrr_at_100
value: 36.75467016625293
- type: mrr_at_1000
value: 36.80612724920882
- type: mrr_at_20
value: 36.46359681984682
- type: mrr_at_3
value: 35.25
- type: mrr_at_5
value: 35.800000000000004
- type: nauc_map_at_1000_diff1
value: 55.61987610843598
- type: nauc_map_at_1000_max
value: 52.506795017152186
- type: nauc_map_at_1000_std
value: 2.95487192066911
- type: nauc_map_at_100_diff1
value: 55.598419532054734
- type: nauc_map_at_100_max
value: 52.48192017040307
- type: nauc_map_at_100_std
value: 2.930120252521189
- type: nauc_map_at_10_diff1
value: 56.02309155375198
- type: nauc_map_at_10_max
value: 52.739573233234424
- type: nauc_map_at_10_std
value: 2.4073432421641545
- type: nauc_map_at_1_diff1
value: 52.57059856776112
- type: nauc_map_at_1_max
value: 50.55668152952304
- type: nauc_map_at_1_std
value: 1.6572084853398048
- type: nauc_map_at_20_diff1
value: 55.75769029917031
- type: nauc_map_at_20_max
value: 52.53663737242853
- type: nauc_map_at_20_std
value: 2.8489192879814
- type: nauc_map_at_3_diff1
value: 56.90294128342709
- type: nauc_map_at_3_max
value: 53.10608389782041
- type: nauc_map_at_3_std
value: 1.4909731657889491
- type: nauc_map_at_5_diff1
value: 56.1258315436073
- type: nauc_map_at_5_max
value: 52.398078357541564
- type: nauc_map_at_5_std
value: 1.8256862015101467
- type: nauc_mrr_at_1000_diff1
value: 55.61987610843598
- type: nauc_mrr_at_1000_max
value: 52.506795017152186
- type: nauc_mrr_at_1000_std
value: 2.95487192066911
- type: nauc_mrr_at_100_diff1
value: 55.598419532054734
- type: nauc_mrr_at_100_max
value: 52.48192017040307
- type: nauc_mrr_at_100_std
value: 2.930120252521189
- type: nauc_mrr_at_10_diff1
value: 56.02309155375198
- type: nauc_mrr_at_10_max
value: 52.739573233234424
- type: nauc_mrr_at_10_std
value: 2.4073432421641545
- type: nauc_mrr_at_1_diff1
value: 52.57059856776112
- type: nauc_mrr_at_1_max
value: 50.55668152952304
- type: nauc_mrr_at_1_std
value: 1.6572084853398048
- type: nauc_mrr_at_20_diff1
value: 55.75769029917031
- type: nauc_mrr_at_20_max
value: 52.53663737242853
- type: nauc_mrr_at_20_std
value: 2.8489192879814
- type: nauc_mrr_at_3_diff1
value: 56.90294128342709
- type: nauc_mrr_at_3_max
value: 53.10608389782041
- type: nauc_mrr_at_3_std
value: 1.4909731657889491
- type: nauc_mrr_at_5_diff1
value: 56.1258315436073
- type: nauc_mrr_at_5_max
value: 52.398078357541564
- type: nauc_mrr_at_5_std
value: 1.8256862015101467
- type: nauc_ndcg_at_1000_diff1
value: 55.30733548408918
- type: nauc_ndcg_at_1000_max
value: 53.51143366189318
- type: nauc_ndcg_at_1000_std
value: 7.133789405525702
- type: nauc_ndcg_at_100_diff1
value: 54.32209039488095
- type: nauc_ndcg_at_100_max
value: 52.67499334461009
- type: nauc_ndcg_at_100_std
value: 6.878823275077807
- type: nauc_ndcg_at_10_diff1
value: 56.266780806997716
- type: nauc_ndcg_at_10_max
value: 53.52837255793743
- type: nauc_ndcg_at_10_std
value: 3.756832592964262
- type: nauc_ndcg_at_1_diff1
value: 52.57059856776112
- type: nauc_ndcg_at_1_max
value: 50.55668152952304
- type: nauc_ndcg_at_1_std
value: 1.6572084853398048
- type: nauc_ndcg_at_20_diff1
value: 55.39255420432796
- type: nauc_ndcg_at_20_max
value: 52.946114684072235
- type: nauc_ndcg_at_20_std
value: 5.414933414031693
- type: nauc_ndcg_at_3_diff1
value: 57.92826624996289
- type: nauc_ndcg_at_3_max
value: 53.89907760306972
- type: nauc_ndcg_at_3_std
value: 1.6661401245309218
- type: nauc_ndcg_at_5_diff1
value: 56.47508936029308
- type: nauc_ndcg_at_5_max
value: 52.66800998045517
- type: nauc_ndcg_at_5_std
value: 2.4127296184140423
- type: nauc_precision_at_1000_diff1
value: 57.25924020238401
- type: nauc_precision_at_1000_max
value: 65.1132590931922
- type: nauc_precision_at_1000_std
value: 40.60788709618145
- type: nauc_precision_at_100_diff1
value: 46.49620002554606
- type: nauc_precision_at_100_max
value: 53.02960148167071
- type: nauc_precision_at_100_std
value: 28.206028867032863
- type: nauc_precision_at_10_diff1
value: 56.562744749606765
- type: nauc_precision_at_10_max
value: 56.00594967783547
- type: nauc_precision_at_10_std
value: 8.368379831645163
- type: nauc_precision_at_1_diff1
value: 52.57059856776112
- type: nauc_precision_at_1_max
value: 50.55668152952304
- type: nauc_precision_at_1_std
value: 1.6572084853398048
- type: nauc_precision_at_20_diff1
value: 53.25915754614111
- type: nauc_precision_at_20_max
value: 54.03255118937036
- type: nauc_precision_at_20_std
value: 15.161611674272718
- type: nauc_precision_at_3_diff1
value: 60.726785748943854
- type: nauc_precision_at_3_max
value: 56.139896875869354
- type: nauc_precision_at_3_std
value: 2.2306901035769893
- type: nauc_precision_at_5_diff1
value: 57.1201127525187
- type: nauc_precision_at_5_max
value: 53.28665761862506
- type: nauc_precision_at_5_std
value: 4.358720050112237
- type: nauc_recall_at_1000_diff1
value: 57.259240202383964
- type: nauc_recall_at_1000_max
value: 65.11325909319218
- type: nauc_recall_at_1000_std
value: 40.60788709618142
- type: nauc_recall_at_100_diff1
value: 46.49620002554603
- type: nauc_recall_at_100_max
value: 53.02960148167071
- type: nauc_recall_at_100_std
value: 28.206028867032835
- type: nauc_recall_at_10_diff1
value: 56.562744749606765
- type: nauc_recall_at_10_max
value: 56.00594967783549
- type: nauc_recall_at_10_std
value: 8.368379831645147
- type: nauc_recall_at_1_diff1
value: 52.57059856776112
- type: nauc_recall_at_1_max
value: 50.55668152952304
- type: nauc_recall_at_1_std
value: 1.6572084853398048
- type: nauc_recall_at_20_diff1
value: 53.259157546141154
- type: nauc_recall_at_20_max
value: 54.03255118937038
- type: nauc_recall_at_20_std
value: 15.16161167427274
- type: nauc_recall_at_3_diff1
value: 60.72678574894387
- type: nauc_recall_at_3_max
value: 56.13989687586933
- type: nauc_recall_at_3_std
value: 2.2306901035770066
- type: nauc_recall_at_5_diff1
value: 57.12011275251864
- type: nauc_recall_at_5_max
value: 53.28665761862502
- type: nauc_recall_at_5_std
value: 4.3587200501122245
- type: ndcg_at_1
value: 30.0
- type: ndcg_at_10
value: 38.671
- type: ndcg_at_100
value: 42.173
- type: ndcg_at_1000
value: 44.016
- type: ndcg_at_20
value: 39.845000000000006
- type: ndcg_at_3
value: 36.863
- type: ndcg_at_5
value: 37.874
- type: precision_at_1
value: 30.0
- type: precision_at_10
value: 4.65
- type: precision_at_100
value: 0.64
- type: precision_at_1000
value: 0.08
- type: precision_at_20
value: 2.55
- type: precision_at_3
value: 13.833
- type: precision_at_5
value: 8.799999999999999
- type: recall_at_1
value: 30.0
- type: recall_at_10
value: 46.5
- type: recall_at_100
value: 64.0
- type: recall_at_1000
value: 79.5
- type: recall_at_20
value: 51.0
- type: recall_at_3
value: 41.5
- type: recall_at_5
value: 44.0
- task:
type: Classification
dataset:
name: MTEB MultilingualSentimentClassification (rus)
type: mteb/multilingual-sentiment-classification
config: rus
split: test
revision: 2b9b4d10fc589af67794141fe8cbd3739de1eb33
metrics:
- type: accuracy
value: 79.52710495963092
- type: ap
value: 84.5713457178972
- type: ap_weighted
value: 84.5713457178972
- type: f1
value: 77.88661181524105
- type: f1_weighted
value: 79.87563079922718
- type: main_score
value: 79.52710495963092
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (arb_Arab-rus_Cyrl)
type: mteb/NTREX
config: arb_Arab-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 86.47971957936905
- type: f1
value: 82.79864240805654
- type: main_score
value: 82.79864240805654
- type: precision
value: 81.21485800128767
- type: recall
value: 86.47971957936905
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (bel_Cyrl-rus_Cyrl)
type: mteb/NTREX
config: bel_Cyrl-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.84226339509264
- type: f1
value: 93.56399067465667
- type: main_score
value: 93.56399067465667
- type: precision
value: 93.01619095309631
- type: recall
value: 94.84226339509264
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (ben_Beng-rus_Cyrl)
type: mteb/NTREX
config: ben_Beng-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 92.18828242363544
- type: f1
value: 90.42393889620612
- type: main_score
value: 90.42393889620612
- type: precision
value: 89.67904925153297
- type: recall
value: 92.18828242363544
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (bos_Latn-rus_Cyrl)
type: mteb/NTREX
config: bos_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.69203805708563
- type: f1
value: 93.37172425304624
- type: main_score
value: 93.37172425304624
- type: precision
value: 92.79204521067315
- type: recall
value: 94.69203805708563
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (bul_Cyrl-rus_Cyrl)
type: mteb/NTREX
config: bul_Cyrl-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 96.99549323985978
- type: f1
value: 96.13086296110833
- type: main_score
value: 96.13086296110833
- type: precision
value: 95.72441996327827
- type: recall
value: 96.99549323985978
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (ces_Latn-rus_Cyrl)
type: mteb/NTREX
config: ces_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.94391587381071
- type: f1
value: 94.90680465142157
- type: main_score
value: 94.90680465142157
- type: precision
value: 94.44541812719079
- type: recall
value: 95.94391587381071
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (deu_Latn-rus_Cyrl)
type: mteb/NTREX
config: deu_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 96.09414121181773
- type: f1
value: 94.94408279085295
- type: main_score
value: 94.94408279085295
- type: precision
value: 94.41245201135037
- type: recall
value: 96.09414121181773
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (ell_Grek-rus_Cyrl)
type: mteb/NTREX
config: ell_Grek-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 96.19429143715573
- type: f1
value: 95.12101485561676
- type: main_score
value: 95.12101485561676
- type: precision
value: 94.60440660991488
- type: recall
value: 96.19429143715573
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (eng_Latn-rus_Cyrl)
type: mteb/NTREX
config: eng_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 96.49474211316975
- type: f1
value: 95.46581777428045
- type: main_score
value: 95.46581777428045
- type: precision
value: 94.98414288098814
- type: recall
value: 96.49474211316975
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (fas_Arab-rus_Cyrl)
type: mteb/NTREX
config: fas_Arab-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.44166249374061
- type: f1
value: 92.92383018972905
- type: main_score
value: 92.92383018972905
- type: precision
value: 92.21957936905358
- type: recall
value: 94.44166249374061
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (fin_Latn-rus_Cyrl)
type: mteb/NTREX
config: fin_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 92.18828242363544
- type: f1
value: 90.2980661468393
- type: main_score
value: 90.2980661468393
- type: precision
value: 89.42580537472877
- type: recall
value: 92.18828242363544
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (fra_Latn-rus_Cyrl)
type: mteb/NTREX
config: fra_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.84376564847271
- type: f1
value: 94.81054915706895
- type: main_score
value: 94.81054915706895
- type: precision
value: 94.31369276136427
- type: recall
value: 95.84376564847271
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (heb_Hebr-rus_Cyrl)
type: mteb/NTREX
config: heb_Hebr-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.89233850776164
- type: f1
value: 93.42513770655985
- type: main_score
value: 93.42513770655985
- type: precision
value: 92.73493573693875
- type: recall
value: 94.89233850776164
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (hin_Deva-rus_Cyrl)
type: mteb/NTREX
config: hin_Deva-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 93.23985978968453
- type: f1
value: 91.52816526376867
- type: main_score
value: 91.52816526376867
- type: precision
value: 90.76745946425466
- type: recall
value: 93.23985978968453
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (hrv_Latn-rus_Cyrl)
type: mteb/NTREX
config: hrv_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 93.99098647971958
- type: f1
value: 92.36354531797697
- type: main_score
value: 92.36354531797697
- type: precision
value: 91.63228970439788
- type: recall
value: 93.99098647971958
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (hun_Latn-rus_Cyrl)
type: mteb/NTREX
config: hun_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 93.64046069103655
- type: f1
value: 92.05224503421799
- type: main_score
value: 92.05224503421799
- type: precision
value: 91.33998616973079
- type: recall
value: 93.64046069103655
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (ind_Latn-rus_Cyrl)
type: mteb/NTREX
config: ind_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 91.68753129694541
- type: f1
value: 89.26222667334335
- type: main_score
value: 89.26222667334335
- type: precision
value: 88.14638624603572
- type: recall
value: 91.68753129694541
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (jpn_Jpan-rus_Cyrl)
type: mteb/NTREX
config: jpn_Jpan-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 91.28693039559339
- type: f1
value: 89.21161763348957
- type: main_score
value: 89.21161763348957
- type: precision
value: 88.31188340952988
- type: recall
value: 91.28693039559339
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (kor_Hang-rus_Cyrl)
type: mteb/NTREX
config: kor_Hang-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 89.53430145217827
- type: f1
value: 86.88322165788365
- type: main_score
value: 86.88322165788365
- type: precision
value: 85.73950211030831
- type: recall
value: 89.53430145217827
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (lit_Latn-rus_Cyrl)
type: mteb/NTREX
config: lit_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 90.28542814221332
- type: f1
value: 88.10249103814452
- type: main_score
value: 88.10249103814452
- type: precision
value: 87.17689323973752
- type: recall
value: 90.28542814221332
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (mkd_Cyrl-rus_Cyrl)
type: mteb/NTREX
config: mkd_Cyrl-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.04256384576865
- type: f1
value: 93.65643703650713
- type: main_score
value: 93.65643703650713
- type: precision
value: 93.02036387915207
- type: recall
value: 95.04256384576865
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (nld_Latn-rus_Cyrl)
type: mteb/NTREX
config: nld_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.39308963445168
- type: f1
value: 94.16207644800535
- type: main_score
value: 94.16207644800535
- type: precision
value: 93.582516632091
- type: recall
value: 95.39308963445168
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (pol_Latn-rus_Cyrl)
type: mteb/NTREX
config: pol_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.7436154231347
- type: f1
value: 94.5067601402103
- type: main_score
value: 94.5067601402103
- type: precision
value: 93.91587381071608
- type: recall
value: 95.7436154231347
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (por_Latn-rus_Cyrl)
type: mteb/NTREX
config: por_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 65.89884827240861
- type: f1
value: 64.61805459419219
- type: main_score
value: 64.61805459419219
- type: precision
value: 64.07119451106485
- type: recall
value: 65.89884827240861
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-arb_Arab)
type: mteb/NTREX
config: rus_Cyrl-arb_Arab
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.2413620430646
- type: f1
value: 92.67663399861698
- type: main_score
value: 92.67663399861698
- type: precision
value: 91.94625271240193
- type: recall
value: 94.2413620430646
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-bel_Cyrl)
type: mteb/NTREX
config: rus_Cyrl-bel_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.89233850776164
- type: f1
value: 93.40343849106993
- type: main_score
value: 93.40343849106993
- type: precision
value: 92.74077783341679
- type: recall
value: 94.89233850776164
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-ben_Beng)
type: mteb/NTREX
config: rus_Cyrl-ben_Beng
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.2914371557336
- type: f1
value: 92.62226673343348
- type: main_score
value: 92.62226673343348
- type: precision
value: 91.84610248706393
- type: recall
value: 94.2914371557336
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-bos_Latn)
type: mteb/NTREX
config: rus_Cyrl-bos_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.69354031046569
- type: f1
value: 94.50418051319403
- type: main_score
value: 94.50418051319403
- type: precision
value: 93.95843765648473
- type: recall
value: 95.69354031046569
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-bul_Cyrl)
type: mteb/NTREX
config: rus_Cyrl-bul_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.89384076114172
- type: f1
value: 94.66199298948423
- type: main_score
value: 94.66199298948423
- type: precision
value: 94.08028709731263
- type: recall
value: 95.89384076114172
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-ces_Latn)
type: mteb/NTREX
config: rus_Cyrl-ces_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 93.94091136705057
- type: f1
value: 92.3746731207923
- type: main_score
value: 92.3746731207923
- type: precision
value: 91.66207644800535
- type: recall
value: 93.94091136705057
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-deu_Latn)
type: mteb/NTREX
config: rus_Cyrl-deu_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.94391587381071
- type: f1
value: 94.76214321482223
- type: main_score
value: 94.76214321482223
- type: precision
value: 94.20380570856285
- type: recall
value: 95.94391587381071
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-ell_Grek)
type: mteb/NTREX
config: rus_Cyrl-ell_Grek
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.44316474712068
- type: f1
value: 94.14788849941579
- type: main_score
value: 94.14788849941579
- type: precision
value: 93.54197963612084
- type: recall
value: 95.44316474712068
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-eng_Latn)
type: mteb/NTREX
config: rus_Cyrl-eng_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 98.14722083124687
- type: f1
value: 97.57135703555333
- type: main_score
value: 97.57135703555333
- type: precision
value: 97.2959439158738
- type: recall
value: 98.14722083124687
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-fas_Arab)
type: mteb/NTREX
config: rus_Cyrl-fas_Arab
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.64196294441662
- type: f1
value: 93.24653647137372
- type: main_score
value: 93.24653647137372
- type: precision
value: 92.60724419963279
- type: recall
value: 94.64196294441662
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-fin_Latn)
type: mteb/NTREX
config: rus_Cyrl-fin_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 87.98197295943916
- type: f1
value: 85.23368385912201
- type: main_score
value: 85.23368385912201
- type: precision
value: 84.08159858835873
- type: recall
value: 87.98197295943916
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-fra_Latn)
type: mteb/NTREX
config: rus_Cyrl-fra_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 96.24436654982473
- type: f1
value: 95.07093974294774
- type: main_score
value: 95.07093974294774
- type: precision
value: 94.49591053246536
- type: recall
value: 96.24436654982473
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-heb_Hebr)
type: mteb/NTREX
config: rus_Cyrl-heb_Hebr
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 91.08662994491738
- type: f1
value: 88.5161074945752
- type: main_score
value: 88.5161074945752
- type: precision
value: 87.36187614755467
- type: recall
value: 91.08662994491738
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-hin_Deva)
type: mteb/NTREX
config: rus_Cyrl-hin_Deva
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.04256384576865
- type: f1
value: 93.66382907694876
- type: main_score
value: 93.66382907694876
- type: precision
value: 93.05291270238692
- type: recall
value: 95.04256384576865
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-hrv_Latn)
type: mteb/NTREX
config: rus_Cyrl-hrv_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.14271407110667
- type: f1
value: 93.7481221832749
- type: main_score
value: 93.7481221832749
- type: precision
value: 93.10930681736892
- type: recall
value: 95.14271407110667
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-hun_Latn)
type: mteb/NTREX
config: rus_Cyrl-hun_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 90.18527791687532
- type: f1
value: 87.61415933423946
- type: main_score
value: 87.61415933423946
- type: precision
value: 86.5166400394242
- type: recall
value: 90.18527791687532
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-ind_Latn)
type: mteb/NTREX
config: rus_Cyrl-ind_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 93.69053580370556
- type: f1
value: 91.83608746453012
- type: main_score
value: 91.83608746453012
- type: precision
value: 90.97145718577868
- type: recall
value: 93.69053580370556
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-jpn_Jpan)
type: mteb/NTREX
config: rus_Cyrl-jpn_Jpan
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 89.48422633950926
- type: f1
value: 86.91271033534429
- type: main_score
value: 86.91271033534429
- type: precision
value: 85.82671626487351
- type: recall
value: 89.48422633950926
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-kor_Hang)
type: mteb/NTREX
config: rus_Cyrl-kor_Hang
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 88.4827240861292
- type: f1
value: 85.35080398375342
- type: main_score
value: 85.35080398375342
- type: precision
value: 83.9588549490903
- type: recall
value: 88.4827240861292
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-lit_Latn)
type: mteb/NTREX
config: rus_Cyrl-lit_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 90.33550325488233
- type: f1
value: 87.68831819157307
- type: main_score
value: 87.68831819157307
- type: precision
value: 86.51524906407231
- type: recall
value: 90.33550325488233
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-mkd_Cyrl)
type: mteb/NTREX
config: rus_Cyrl-mkd_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.94391587381071
- type: f1
value: 94.90402270071775
- type: main_score
value: 94.90402270071775
- type: precision
value: 94.43915873810715
- type: recall
value: 95.94391587381071
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-nld_Latn)
type: mteb/NTREX
config: rus_Cyrl-nld_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 92.98948422633951
- type: f1
value: 91.04323151393756
- type: main_score
value: 91.04323151393756
- type: precision
value: 90.14688699716241
- type: recall
value: 92.98948422633951
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-pol_Latn)
type: mteb/NTREX
config: rus_Cyrl-pol_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.34151226840261
- type: f1
value: 92.8726422967785
- type: main_score
value: 92.8726422967785
- type: precision
value: 92.19829744616925
- type: recall
value: 94.34151226840261
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-por_Latn)
type: mteb/NTREX
config: rus_Cyrl-por_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 86.17926890335504
- type: f1
value: 82.7304882287356
- type: main_score
value: 82.7304882287356
- type: precision
value: 81.28162481817964
- type: recall
value: 86.17926890335504
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-slk_Latn)
type: mteb/NTREX
config: rus_Cyrl-slk_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 92.7391086629945
- type: f1
value: 90.75112669003506
- type: main_score
value: 90.75112669003506
- type: precision
value: 89.8564513436822
- type: recall
value: 92.7391086629945
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-slv_Latn)
type: mteb/NTREX
config: rus_Cyrl-slv_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 92.8893340010015
- type: f1
value: 91.05992321816058
- type: main_score
value: 91.05992321816058
- type: precision
value: 90.22589439715128
- type: recall
value: 92.8893340010015
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-spa_Latn)
type: mteb/NTREX
config: rus_Cyrl-spa_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 96.49474211316975
- type: f1
value: 95.4715406442998
- type: main_score
value: 95.4715406442998
- type: precision
value: 94.9799699549324
- type: recall
value: 96.49474211316975
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-srp_Cyrl)
type: mteb/NTREX
config: rus_Cyrl-srp_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 81.07160741111667
- type: f1
value: 76.55687285507015
- type: main_score
value: 76.55687285507015
- type: precision
value: 74.71886401030116
- type: recall
value: 81.07160741111667
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-srp_Latn)
type: mteb/NTREX
config: rus_Cyrl-srp_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.14271407110667
- type: f1
value: 93.73302377809138
- type: main_score
value: 93.73302377809138
- type: precision
value: 93.06960440660991
- type: recall
value: 95.14271407110667
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-swa_Latn)
type: mteb/NTREX
config: rus_Cyrl-swa_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.79218828242364
- type: f1
value: 93.25988983475212
- type: main_score
value: 93.25988983475212
- type: precision
value: 92.53463528626273
- type: recall
value: 94.79218828242364
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-swe_Latn)
type: mteb/NTREX
config: rus_Cyrl-swe_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.04256384576865
- type: f1
value: 93.58704723752295
- type: main_score
value: 93.58704723752295
- type: precision
value: 92.91437155733601
- type: recall
value: 95.04256384576865
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-tam_Taml)
type: mteb/NTREX
config: rus_Cyrl-tam_Taml
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 93.28993490235354
- type: f1
value: 91.63912535469872
- type: main_score
value: 91.63912535469872
- type: precision
value: 90.87738750983617
- type: recall
value: 93.28993490235354
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-tur_Latn)
type: mteb/NTREX
config: rus_Cyrl-tur_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 93.74061091637456
- type: f1
value: 91.96628275746953
- type: main_score
value: 91.96628275746953
- type: precision
value: 91.15923885828742
- type: recall
value: 93.74061091637456
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-ukr_Cyrl)
type: mteb/NTREX
config: rus_Cyrl-ukr_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.99399098647972
- type: f1
value: 94.89567684860624
- type: main_score
value: 94.89567684860624
- type: precision
value: 94.37072275079286
- type: recall
value: 95.99399098647972
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-vie_Latn)
type: mteb/NTREX
config: rus_Cyrl-vie_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 91.4371557336004
- type: f1
value: 88.98681355366382
- type: main_score
value: 88.98681355366382
- type: precision
value: 87.89183775663496
- type: recall
value: 91.4371557336004
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-zho_Hant)
type: mteb/NTREX
config: rus_Cyrl-zho_Hant
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 92.7891837756635
- type: f1
value: 90.79047142141783
- type: main_score
value: 90.79047142141783
- type: precision
value: 89.86980470706058
- type: recall
value: 92.7891837756635
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (rus_Cyrl-zul_Latn)
type: mteb/NTREX
config: rus_Cyrl-zul_Latn
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 87.43114672008012
- type: f1
value: 84.04618833011422
- type: main_score
value: 84.04618833011422
- type: precision
value: 82.52259341393041
- type: recall
value: 87.43114672008012
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (slk_Latn-rus_Cyrl)
type: mteb/NTREX
config: slk_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.34301452178268
- type: f1
value: 94.20392493502158
- type: main_score
value: 94.20392493502158
- type: precision
value: 93.67384409948257
- type: recall
value: 95.34301452178268
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (slv_Latn-rus_Cyrl)
type: mteb/NTREX
config: slv_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 92.23835753630446
- type: f1
value: 90.5061759305625
- type: main_score
value: 90.5061759305625
- type: precision
value: 89.74231188051918
- type: recall
value: 92.23835753630446
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (spa_Latn-rus_Cyrl)
type: mteb/NTREX
config: spa_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 96.54481722583876
- type: f1
value: 95.54665331330328
- type: main_score
value: 95.54665331330328
- type: precision
value: 95.06342847604739
- type: recall
value: 96.54481722583876
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (srp_Cyrl-rus_Cyrl)
type: mteb/NTREX
config: srp_Cyrl-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 83.62543815723585
- type: f1
value: 80.77095672699816
- type: main_score
value: 80.77095672699816
- type: precision
value: 79.74674313056886
- type: recall
value: 83.62543815723585
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (srp_Latn-rus_Cyrl)
type: mteb/NTREX
config: srp_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 94.44166249374061
- type: f1
value: 93.00733206591994
- type: main_score
value: 93.00733206591994
- type: precision
value: 92.37203026762366
- type: recall
value: 94.44166249374061
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (swa_Latn-rus_Cyrl)
type: mteb/NTREX
config: swa_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 90.23535302954431
- type: f1
value: 87.89596482636041
- type: main_score
value: 87.89596482636041
- type: precision
value: 86.87060227370694
- type: recall
value: 90.23535302954431
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (swe_Latn-rus_Cyrl)
type: mteb/NTREX
config: swe_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 95.44316474712068
- type: f1
value: 94.1896177599733
- type: main_score
value: 94.1896177599733
- type: precision
value: 93.61542313470206
- type: recall
value: 95.44316474712068
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (tam_Taml-rus_Cyrl)
type: mteb/NTREX
config: tam_Taml-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 89.68452679018529
- type: f1
value: 87.37341160650037
- type: main_score
value: 87.37341160650037
- type: precision
value: 86.38389402285247
- type: recall
value: 89.68452679018529
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (tur_Latn-rus_Cyrl)
type: mteb/NTREX
config: tur_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 93.89083625438157
- type: f1
value: 92.33892505424804
- type: main_score
value: 92.33892505424804
- type: precision
value: 91.63125640842216
- type: recall
value: 93.89083625438157
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (ukr_Cyrl-rus_Cyrl)
type: mteb/NTREX
config: ukr_Cyrl-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 96.14421632448673
- type: f1
value: 95.11028447433054
- type: main_score
value: 95.11028447433054
- type: precision
value: 94.62944416624937
- type: recall
value: 96.14421632448673
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (vie_Latn-rus_Cyrl)
type: mteb/NTREX
config: vie_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 93.79068602904357
- type: f1
value: 92.14989150392256
- type: main_score
value: 92.14989150392256
- type: precision
value: 91.39292271740945
- type: recall
value: 93.79068602904357
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (zho_Hant-rus_Cyrl)
type: mteb/NTREX
config: zho_Hant-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 89.13370055082625
- type: f1
value: 86.51514618639217
- type: main_score
value: 86.51514618639217
- type: precision
value: 85.383920035898
- type: recall
value: 89.13370055082625
- task:
type: BitextMining
dataset:
name: MTEB NTREXBitextMining (zul_Latn-rus_Cyrl)
type: mteb/NTREX
config: zul_Latn-rus_Cyrl
split: test
revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33
metrics:
- type: accuracy
value: 81.17175763645467
- type: f1
value: 77.72331766047338
- type: main_score
value: 77.72331766047338
- type: precision
value: 76.24629555848075
- type: recall
value: 81.17175763645467
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (ru)
type: GEM/opusparcus
config: ru
split: test.full
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cosine_accuracy
value: 73.09136420525657
- type: cosine_accuracy_threshold
value: 87.70400881767273
- type: cosine_ap
value: 86.51938550599533
- type: cosine_f1
value: 80.84358523725834
- type: cosine_f1_threshold
value: 86.90648078918457
- type: cosine_precision
value: 73.24840764331209
- type: cosine_recall
value: 90.19607843137256
- type: dot_accuracy
value: 73.09136420525657
- type: dot_accuracy_threshold
value: 87.7040147781372
- type: dot_ap
value: 86.51934769946833
- type: dot_f1
value: 80.84358523725834
- type: dot_f1_threshold
value: 86.90648078918457
- type: dot_precision
value: 73.24840764331209
- type: dot_recall
value: 90.19607843137256
- type: euclidean_accuracy
value: 73.09136420525657
- type: euclidean_accuracy_threshold
value: 49.590304493904114
- type: euclidean_ap
value: 86.51934769946833
- type: euclidean_f1
value: 80.84358523725834
- type: euclidean_f1_threshold
value: 51.173269748687744
- type: euclidean_precision
value: 73.24840764331209
- type: euclidean_recall
value: 90.19607843137256
- type: main_score
value: 86.51976811057995
- type: manhattan_accuracy
value: 73.40425531914893
- type: manhattan_accuracy_threshold
value: 757.8278541564941
- type: manhattan_ap
value: 86.51976811057995
- type: manhattan_f1
value: 80.92898615453328
- type: manhattan_f1_threshold
value: 778.3821105957031
- type: manhattan_precision
value: 74.32321575061526
- type: manhattan_recall
value: 88.8235294117647
- type: max_ap
value: 86.51976811057995
- type: max_f1
value: 80.92898615453328
- type: max_precision
value: 74.32321575061526
- type: max_recall
value: 90.19607843137256
- type: similarity_accuracy
value: 73.09136420525657
- type: similarity_accuracy_threshold
value: 87.70400881767273
- type: similarity_ap
value: 86.51938550599533
- type: similarity_f1
value: 80.84358523725834
- type: similarity_f1_threshold
value: 86.90648078918457
- type: similarity_precision
value: 73.24840764331209
- type: similarity_recall
value: 90.19607843137256
- task:
type: Retrieval
dataset:
name: MTEB PublicHealthQA (russian)
type: xhluca/publichealth-qa
config: russian
split: test
revision: main
metrics:
- type: main_score
value: 79.303
- type: map_at_1
value: 61.538000000000004
- type: map_at_10
value: 74.449
- type: map_at_100
value: 74.687
- type: map_at_1000
value: 74.687
- type: map_at_20
value: 74.589
- type: map_at_3
value: 73.333
- type: map_at_5
value: 74.256
- type: mrr_at_1
value: 61.53846153846154
- type: mrr_at_10
value: 74.44871794871794
- type: mrr_at_100
value: 74.68730304304074
- type: mrr_at_1000
value: 74.68730304304074
- type: mrr_at_20
value: 74.58857808857809
- type: mrr_at_3
value: 73.33333333333333
- type: mrr_at_5
value: 74.25641025641025
- type: nauc_map_at_1000_diff1
value: 61.375798048778506
- type: nauc_map_at_1000_max
value: 51.37093181241067
- type: nauc_map_at_1000_std
value: 41.735794471409015
- type: nauc_map_at_100_diff1
value: 61.375798048778506
- type: nauc_map_at_100_max
value: 51.37093181241067
- type: nauc_map_at_100_std
value: 41.735794471409015
- type: nauc_map_at_10_diff1
value: 61.12796039757213
- type: nauc_map_at_10_max
value: 51.843445267118014
- type: nauc_map_at_10_std
value: 42.243121474939365
- type: nauc_map_at_1_diff1
value: 66.39100974909151
- type: nauc_map_at_1_max
value: 44.77165601342703
- type: nauc_map_at_1_std
value: 32.38542979413408
- type: nauc_map_at_20_diff1
value: 61.16611123434347
- type: nauc_map_at_20_max
value: 51.52605092407306
- type: nauc_map_at_20_std
value: 41.94787773313971
- type: nauc_map_at_3_diff1
value: 61.40157474408937
- type: nauc_map_at_3_max
value: 51.47230077853947
- type: nauc_map_at_3_std
value: 42.63540269440141
- type: nauc_map_at_5_diff1
value: 61.07631147583098
- type: nauc_map_at_5_max
value: 52.02626939341523
- type: nauc_map_at_5_std
value: 42.511607332150334
- type: nauc_mrr_at_1000_diff1
value: 61.375798048778506
- type: nauc_mrr_at_1000_max
value: 51.37093181241067
- type: nauc_mrr_at_1000_std
value: 41.735794471409015
- type: nauc_mrr_at_100_diff1
value: 61.375798048778506
- type: nauc_mrr_at_100_max
value: 51.37093181241067
- type: nauc_mrr_at_100_std
value: 41.735794471409015
- type: nauc_mrr_at_10_diff1
value: 61.12796039757213
- type: nauc_mrr_at_10_max
value: 51.843445267118014
- type: nauc_mrr_at_10_std
value: 42.243121474939365
- type: nauc_mrr_at_1_diff1
value: 66.39100974909151
- type: nauc_mrr_at_1_max
value: 44.77165601342703
- type: nauc_mrr_at_1_std
value: 32.38542979413408
- type: nauc_mrr_at_20_diff1
value: 61.16611123434347
- type: nauc_mrr_at_20_max
value: 51.52605092407306
- type: nauc_mrr_at_20_std
value: 41.94787773313971
- type: nauc_mrr_at_3_diff1
value: 61.40157474408937
- type: nauc_mrr_at_3_max
value: 51.47230077853947
- type: nauc_mrr_at_3_std
value: 42.63540269440141
- type: nauc_mrr_at_5_diff1
value: 61.07631147583098
- type: nauc_mrr_at_5_max
value: 52.02626939341523
- type: nauc_mrr_at_5_std
value: 42.511607332150334
- type: nauc_ndcg_at_1000_diff1
value: 60.54821630436157
- type: nauc_ndcg_at_1000_max
value: 52.584328363863634
- type: nauc_ndcg_at_1000_std
value: 43.306961101645946
- type: nauc_ndcg_at_100_diff1
value: 60.54821630436157
- type: nauc_ndcg_at_100_max
value: 52.584328363863634
- type: nauc_ndcg_at_100_std
value: 43.306961101645946
- type: nauc_ndcg_at_10_diff1
value: 58.800340278109886
- type: nauc_ndcg_at_10_max
value: 55.31050771670664
- type: nauc_ndcg_at_10_std
value: 46.40931672942848
- type: nauc_ndcg_at_1_diff1
value: 66.39100974909151
- type: nauc_ndcg_at_1_max
value: 44.77165601342703
- type: nauc_ndcg_at_1_std
value: 32.38542979413408
- type: nauc_ndcg_at_20_diff1
value: 58.88690479697946
- type: nauc_ndcg_at_20_max
value: 54.19269661177923
- type: nauc_ndcg_at_20_std
value: 45.39305589413174
- type: nauc_ndcg_at_3_diff1
value: 59.61866351451574
- type: nauc_ndcg_at_3_max
value: 54.23992718744033
- type: nauc_ndcg_at_3_std
value: 46.997379274101
- type: nauc_ndcg_at_5_diff1
value: 58.70739588066225
- type: nauc_ndcg_at_5_max
value: 55.76766902539152
- type: nauc_ndcg_at_5_std
value: 47.10553115762958
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: .nan
- type: nauc_precision_at_100_max
value: .nan
- type: nauc_precision_at_100_std
value: .nan
- type: nauc_precision_at_10_diff1
value: 35.72622112397501
- type: nauc_precision_at_10_max
value: 89.84297108673948
- type: nauc_precision_at_10_std
value: 86.60269192422707
- type: nauc_precision_at_1_diff1
value: 66.39100974909151
- type: nauc_precision_at_1_max
value: 44.77165601342703
- type: nauc_precision_at_1_std
value: 32.38542979413408
- type: nauc_precision_at_20_diff1
value: 29.188449183726433
- type: nauc_precision_at_20_max
value: 86.45729478231968
- type: nauc_precision_at_20_std
value: 86.45729478231968
- type: nauc_precision_at_3_diff1
value: 50.294126629236224
- type: nauc_precision_at_3_max
value: 68.98223127174579
- type: nauc_precision_at_3_std
value: 70.31195520376356
- type: nauc_precision_at_5_diff1
value: 39.648884288124385
- type: nauc_precision_at_5_max
value: 86.3409770687935
- type: nauc_precision_at_5_std
value: 83.74875373878356
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_10_diff1
value: 35.72622112397516
- type: nauc_recall_at_10_max
value: 89.84297108673968
- type: nauc_recall_at_10_std
value: 86.60269192422749
- type: nauc_recall_at_1_diff1
value: 66.39100974909151
- type: nauc_recall_at_1_max
value: 44.77165601342703
- type: nauc_recall_at_1_std
value: 32.38542979413408
- type: nauc_recall_at_20_diff1
value: 29.188449183726323
- type: nauc_recall_at_20_max
value: 86.45729478231985
- type: nauc_recall_at_20_std
value: 86.45729478231985
- type: nauc_recall_at_3_diff1
value: 50.29412662923603
- type: nauc_recall_at_3_max
value: 68.98223127174562
- type: nauc_recall_at_3_std
value: 70.31195520376346
- type: nauc_recall_at_5_diff1
value: 39.64888428812445
- type: nauc_recall_at_5_max
value: 86.34097706879359
- type: nauc_recall_at_5_std
value: 83.74875373878366
- type: ndcg_at_1
value: 61.538000000000004
- type: ndcg_at_10
value: 79.303
- type: ndcg_at_100
value: 80.557
- type: ndcg_at_1000
value: 80.557
- type: ndcg_at_20
value: 79.732
- type: ndcg_at_3
value: 77.033
- type: ndcg_at_5
value: 78.818
- type: precision_at_1
value: 61.538000000000004
- type: precision_at_10
value: 9.385
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.769
- type: precision_at_3
value: 29.231
- type: precision_at_5
value: 18.462
- type: recall_at_1
value: 61.538000000000004
- type: recall_at_10
value: 93.84599999999999
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 95.38499999999999
- type: recall_at_3
value: 87.69200000000001
- type: recall_at_5
value: 92.308
- task:
type: STS
dataset:
name: MTEB RUParaPhraserSTS (default)
type: merionum/ru_paraphraser
config: default
split: test
revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4
metrics:
- type: cosine_pearson
value: 64.73554596215753
- type: cosine_spearman
value: 70.45849652271855
- type: euclidean_pearson
value: 68.08069844834267
- type: euclidean_spearman
value: 70.45854872959124
- type: main_score
value: 70.45849652271855
- type: manhattan_pearson
value: 67.88325986519624
- type: manhattan_spearman
value: 70.21131896834542
- type: pearson
value: 64.73554596215753
- type: spearman
value: 70.45849652271855
- task:
type: Retrieval
dataset:
name: MTEB RiaNewsRetrieval (default)
type: ai-forever/ria-news-retrieval
config: default
split: test
revision: 82374b0bbacda6114f39ff9c5b925fa1512ca5d7
metrics:
- type: main_score
value: 70.00999999999999
- type: map_at_1
value: 55.97
- type: map_at_10
value: 65.59700000000001
- type: map_at_100
value: 66.057
- type: map_at_1000
value: 66.074
- type: map_at_20
value: 65.892
- type: map_at_3
value: 63.74999999999999
- type: map_at_5
value: 64.84299999999999
- type: mrr_at_1
value: 55.88999999999999
- type: mrr_at_10
value: 65.55873015872977
- type: mrr_at_100
value: 66.01891495129716
- type: mrr_at_1000
value: 66.03538391493299
- type: mrr_at_20
value: 65.85351193431555
- type: mrr_at_3
value: 63.7133333333329
- type: mrr_at_5
value: 64.80483333333268
- type: nauc_map_at_1000_diff1
value: 65.95332946436318
- type: nauc_map_at_1000_max
value: 28.21204156197811
- type: nauc_map_at_1000_std
value: -13.139245767083743
- type: nauc_map_at_100_diff1
value: 65.94763105024367
- type: nauc_map_at_100_max
value: 28.212832170078205
- type: nauc_map_at_100_std
value: -13.131425849370665
- type: nauc_map_at_10_diff1
value: 65.88455089448388
- type: nauc_map_at_10_max
value: 28.13555838776792
- type: nauc_map_at_10_std
value: -13.326989827081023
- type: nauc_map_at_1_diff1
value: 69.31275711813979
- type: nauc_map_at_1_max
value: 26.386708520283758
- type: nauc_map_at_1_std
value: -14.434616447245464
- type: nauc_map_at_20_diff1
value: 65.91227032605677
- type: nauc_map_at_20_max
value: 28.20538655600886
- type: nauc_map_at_20_std
value: -13.191148834410274
- type: nauc_map_at_3_diff1
value: 66.0051677952641
- type: nauc_map_at_3_max
value: 28.25443420019022
- type: nauc_map_at_3_std
value: -13.893284109029558
- type: nauc_map_at_5_diff1
value: 65.89784348297898
- type: nauc_map_at_5_max
value: 28.26449765184183
- type: nauc_map_at_5_std
value: -13.506692912805008
- type: nauc_mrr_at_1000_diff1
value: 66.06599513750889
- type: nauc_mrr_at_1000_max
value: 28.191556650722287
- type: nauc_mrr_at_1000_std
value: -13.098487982930276
- type: nauc_mrr_at_100_diff1
value: 66.0602307977725
- type: nauc_mrr_at_100_max
value: 28.19235936624514
- type: nauc_mrr_at_100_std
value: -13.09069677716269
- type: nauc_mrr_at_10_diff1
value: 65.99546819079403
- type: nauc_mrr_at_10_max
value: 28.11556170120022
- type: nauc_mrr_at_10_std
value: -13.286711073897553
- type: nauc_mrr_at_1_diff1
value: 69.49541040517995
- type: nauc_mrr_at_1_max
value: 26.354622707276153
- type: nauc_mrr_at_1_std
value: -14.358839778104695
- type: nauc_mrr_at_20_diff1
value: 66.02427154257936
- type: nauc_mrr_at_20_max
value: 28.18509383563462
- type: nauc_mrr_at_20_std
value: -13.150543398429
- type: nauc_mrr_at_3_diff1
value: 66.11258119082618
- type: nauc_mrr_at_3_max
value: 28.239510722224004
- type: nauc_mrr_at_3_std
value: -13.857249251136269
- type: nauc_mrr_at_5_diff1
value: 66.00633786765626
- type: nauc_mrr_at_5_max
value: 28.244875152193032
- type: nauc_mrr_at_5_std
value: -13.467206028704434
- type: nauc_ndcg_at_1000_diff1
value: 65.02876183314446
- type: nauc_ndcg_at_1000_max
value: 29.109368390197194
- type: nauc_ndcg_at_1000_std
value: -11.56514359821697
- type: nauc_ndcg_at_100_diff1
value: 64.85837726893713
- type: nauc_ndcg_at_100_max
value: 29.19990133137256
- type: nauc_ndcg_at_100_std
value: -11.17450348161257
- type: nauc_ndcg_at_10_diff1
value: 64.53842705024796
- type: nauc_ndcg_at_10_max
value: 28.748734006088526
- type: nauc_ndcg_at_10_std
value: -12.331395505957063
- type: nauc_ndcg_at_1_diff1
value: 69.31275711813979
- type: nauc_ndcg_at_1_max
value: 26.386708520283758
- type: nauc_ndcg_at_1_std
value: -14.434616447245464
- type: nauc_ndcg_at_20_diff1
value: 64.59017606740504
- type: nauc_ndcg_at_20_max
value: 29.047332048898017
- type: nauc_ndcg_at_20_std
value: -11.746548770195954
- type: nauc_ndcg_at_3_diff1
value: 64.87900935713822
- type: nauc_ndcg_at_3_max
value: 28.953157521204403
- type: nauc_ndcg_at_3_std
value: -13.639947228880942
- type: nauc_ndcg_at_5_diff1
value: 64.61466953479034
- type: nauc_ndcg_at_5_max
value: 29.01899321868392
- type: nauc_ndcg_at_5_std
value: -12.85356404799802
- type: nauc_precision_at_1000_diff1
value: 48.85481417002382
- type: nauc_precision_at_1000_max
value: 57.129837326696375
- type: nauc_precision_at_1000_std
value: 37.889524999906435
- type: nauc_precision_at_100_diff1
value: 53.374672326788264
- type: nauc_precision_at_100_max
value: 43.819333062207974
- type: nauc_precision_at_100_std
value: 21.387064885769362
- type: nauc_precision_at_10_diff1
value: 57.66571169774445
- type: nauc_precision_at_10_max
value: 31.779694837242033
- type: nauc_precision_at_10_std
value: -6.6248399147180255
- type: nauc_precision_at_1_diff1
value: 69.31275711813979
- type: nauc_precision_at_1_max
value: 26.386708520283758
- type: nauc_precision_at_1_std
value: -14.434616447245464
- type: nauc_precision_at_20_diff1
value: 55.93570036001682
- type: nauc_precision_at_20_max
value: 34.98640173388743
- type: nauc_precision_at_20_std
value: -0.36518465159326174
- type: nauc_precision_at_3_diff1
value: 60.94100093991508
- type: nauc_precision_at_3_max
value: 31.422239034357673
- type: nauc_precision_at_3_std
value: -12.72576556537896
- type: nauc_precision_at_5_diff1
value: 59.450505195434054
- type: nauc_precision_at_5_max
value: 32.07638712418377
- type: nauc_precision_at_5_std
value: -10.024459103498598
- type: nauc_recall_at_1000_diff1
value: 48.854814170024184
- type: nauc_recall_at_1000_max
value: 57.129837326697164
- type: nauc_recall_at_1000_std
value: 37.88952499990672
- type: nauc_recall_at_100_diff1
value: 53.37467232678822
- type: nauc_recall_at_100_max
value: 43.8193330622079
- type: nauc_recall_at_100_std
value: 21.387064885769398
- type: nauc_recall_at_10_diff1
value: 57.66571169774447
- type: nauc_recall_at_10_max
value: 31.779694837242133
- type: nauc_recall_at_10_std
value: -6.62483991471789
- type: nauc_recall_at_1_diff1
value: 69.31275711813979
- type: nauc_recall_at_1_max
value: 26.386708520283758
- type: nauc_recall_at_1_std
value: -14.434616447245464
- type: nauc_recall_at_20_diff1
value: 55.93570036001682
- type: nauc_recall_at_20_max
value: 34.986401733887554
- type: nauc_recall_at_20_std
value: -0.3651846515931506
- type: nauc_recall_at_3_diff1
value: 60.94100093991499
- type: nauc_recall_at_3_max
value: 31.422239034357606
- type: nauc_recall_at_3_std
value: -12.725765565378966
- type: nauc_recall_at_5_diff1
value: 59.450505195434125
- type: nauc_recall_at_5_max
value: 32.07638712418387
- type: nauc_recall_at_5_std
value: -10.024459103498472
- type: ndcg_at_1
value: 55.97
- type: ndcg_at_10
value: 70.00999999999999
- type: ndcg_at_100
value: 72.20100000000001
- type: ndcg_at_1000
value: 72.65599999999999
- type: ndcg_at_20
value: 71.068
- type: ndcg_at_3
value: 66.228
- type: ndcg_at_5
value: 68.191
- type: precision_at_1
value: 55.97
- type: precision_at_10
value: 8.373999999999999
- type: precision_at_100
value: 0.9390000000000001
- type: precision_at_1000
value: 0.097
- type: precision_at_20
value: 4.3950000000000005
- type: precision_at_3
value: 24.46
- type: precision_at_5
value: 15.626000000000001
- type: recall_at_1
value: 55.97
- type: recall_at_10
value: 83.74000000000001
- type: recall_at_100
value: 93.87
- type: recall_at_1000
value: 97.49
- type: recall_at_20
value: 87.89
- type: recall_at_3
value: 73.38
- type: recall_at_5
value: 78.13
- task:
type: Reranking
dataset:
name: MTEB RuBQReranking (default)
type: ai-forever/rubq-reranking
config: default
split: test
revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2
metrics:
- type: main_score
value: 71.44929565043827
- type: map
value: 71.44929565043827
- type: mrr
value: 77.78391820945014
- type: nAUC_map_diff1
value: 38.140840668080244
- type: nAUC_map_max
value: 27.54328688105381
- type: nAUC_map_std
value: 16.81572082284672
- type: nAUC_mrr_diff1
value: 44.51350415961509
- type: nAUC_mrr_max
value: 36.491182016669754
- type: nAUC_mrr_std
value: 22.47139593052269
- task:
type: Retrieval
dataset:
name: MTEB RuBQRetrieval (default)
type: ai-forever/rubq-retrieval
config: default
split: test
revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b
metrics:
- type: main_score
value: 68.529
- type: map_at_1
value: 42.529
- type: map_at_10
value: 60.864
- type: map_at_100
value: 61.868
- type: map_at_1000
value: 61.907000000000004
- type: map_at_20
value: 61.596
- type: map_at_3
value: 55.701
- type: map_at_5
value: 58.78
- type: mrr_at_1
value: 60.57919621749409
- type: mrr_at_10
value: 70.55614188149649
- type: mrr_at_100
value: 70.88383816664494
- type: mrr_at_1000
value: 70.89719252668833
- type: mrr_at_20
value: 70.79839750105347
- type: mrr_at_3
value: 68.4594168636722
- type: mrr_at_5
value: 69.67100078802214
- type: nauc_map_at_1000_diff1
value: 40.67438785660885
- type: nauc_map_at_1000_max
value: 32.79981738507424
- type: nauc_map_at_1000_std
value: -6.873402600044831
- type: nauc_map_at_100_diff1
value: 40.65643664443284
- type: nauc_map_at_100_max
value: 32.81594799919249
- type: nauc_map_at_100_std
value: -6.8473246794498195
- type: nauc_map_at_10_diff1
value: 40.39048268484908
- type: nauc_map_at_10_max
value: 32.403242161479525
- type: nauc_map_at_10_std
value: -7.344413799841244
- type: nauc_map_at_1_diff1
value: 44.36306892906905
- type: nauc_map_at_1_max
value: 25.61348630699028
- type: nauc_map_at_1_std
value: -8.713074613333902
- type: nauc_map_at_20_diff1
value: 40.530326570124615
- type: nauc_map_at_20_max
value: 32.74028319323205
- type: nauc_map_at_20_std
value: -7.008180779820569
- type: nauc_map_at_3_diff1
value: 40.764924859364044
- type: nauc_map_at_3_max
value: 29.809671682025336
- type: nauc_map_at_3_std
value: -9.205620202725564
- type: nauc_map_at_5_diff1
value: 40.88599496021476
- type: nauc_map_at_5_max
value: 32.1701894666848
- type: nauc_map_at_5_std
value: -7.801251849010623
- type: nauc_mrr_at_1000_diff1
value: 48.64181373540728
- type: nauc_mrr_at_1000_max
value: 40.136947990653546
- type: nauc_mrr_at_1000_std
value: -7.250260497468805
- type: nauc_mrr_at_100_diff1
value: 48.63349902496212
- type: nauc_mrr_at_100_max
value: 40.14510559704008
- type: nauc_mrr_at_100_std
value: -7.228702374801103
- type: nauc_mrr_at_10_diff1
value: 48.58580560194813
- type: nauc_mrr_at_10_max
value: 40.15075599433366
- type: nauc_mrr_at_10_std
value: -7.267928771548688
- type: nauc_mrr_at_1_diff1
value: 51.47535097164919
- type: nauc_mrr_at_1_max
value: 38.23579750430856
- type: nauc_mrr_at_1_std
value: -9.187785187137633
- type: nauc_mrr_at_20_diff1
value: 48.58688378336222
- type: nauc_mrr_at_20_max
value: 40.13408744088299
- type: nauc_mrr_at_20_std
value: -7.283132775160146
- type: nauc_mrr_at_3_diff1
value: 48.66833005454742
- type: nauc_mrr_at_3_max
value: 40.07987333638038
- type: nauc_mrr_at_3_std
value: -7.738819947521418
- type: nauc_mrr_at_5_diff1
value: 48.76536305941537
- type: nauc_mrr_at_5_max
value: 40.381929739522185
- type: nauc_mrr_at_5_std
value: -7.592858318378928
- type: nauc_ndcg_at_1000_diff1
value: 41.67304442004693
- type: nauc_ndcg_at_1000_max
value: 35.84126926253235
- type: nauc_ndcg_at_1000_std
value: -4.78971011604655
- type: nauc_ndcg_at_100_diff1
value: 41.16918850185783
- type: nauc_ndcg_at_100_max
value: 36.082461962326505
- type: nauc_ndcg_at_100_std
value: -4.092442251697269
- type: nauc_ndcg_at_10_diff1
value: 40.300065598615205
- type: nauc_ndcg_at_10_max
value: 34.87866296788365
- type: nauc_ndcg_at_10_std
value: -5.866529277842453
- type: nauc_ndcg_at_1_diff1
value: 51.74612915209495
- type: nauc_ndcg_at_1_max
value: 37.71907067970078
- type: nauc_ndcg_at_1_std
value: -9.064124266098696
- type: nauc_ndcg_at_20_diff1
value: 40.493949850214584
- type: nauc_ndcg_at_20_max
value: 35.69331503650286
- type: nauc_ndcg_at_20_std
value: -4.995310342975443
- type: nauc_ndcg_at_3_diff1
value: 41.269443212112364
- type: nauc_ndcg_at_3_max
value: 32.572844460953334
- type: nauc_ndcg_at_3_std
value: -9.063015396458791
- type: nauc_ndcg_at_5_diff1
value: 41.37039652522888
- type: nauc_ndcg_at_5_max
value: 34.67416011393571
- type: nauc_ndcg_at_5_std
value: -7.106845569862319
- type: nauc_precision_at_1000_diff1
value: -9.571769961090155
- type: nauc_precision_at_1000_max
value: 5.574782583417188
- type: nauc_precision_at_1000_std
value: 7.28333847923847
- type: nauc_precision_at_100_diff1
value: -7.7405012003383735
- type: nauc_precision_at_100_max
value: 9.67745355070353
- type: nauc_precision_at_100_std
value: 9.327890294080992
- type: nauc_precision_at_10_diff1
value: -1.006879647532931
- type: nauc_precision_at_10_max
value: 15.899825481231064
- type: nauc_precision_at_10_std
value: 4.2284084852153105
- type: nauc_precision_at_1_diff1
value: 51.74612915209495
- type: nauc_precision_at_1_max
value: 37.71907067970078
- type: nauc_precision_at_1_std
value: -9.064124266098696
- type: nauc_precision_at_20_diff1
value: -4.982301544401409
- type: nauc_precision_at_20_max
value: 13.241674471380568
- type: nauc_precision_at_20_std
value: 7.052280133821539
- type: nauc_precision_at_3_diff1
value: 15.442614376387374
- type: nauc_precision_at_3_max
value: 25.12695418083
- type: nauc_precision_at_3_std
value: -3.1150066697920638
- type: nauc_precision_at_5_diff1
value: 8.381026072692444
- type: nauc_precision_at_5_max
value: 22.839056540604822
- type: nauc_precision_at_5_std
value: 1.5126905486524331
- type: nauc_recall_at_1000_diff1
value: -0.8869709920433502
- type: nauc_recall_at_1000_max
value: 45.092324433377264
- type: nauc_recall_at_1000_std
value: 62.21264093315108
- type: nauc_recall_at_100_diff1
value: 16.036715011075714
- type: nauc_recall_at_100_max
value: 39.79963411771158
- type: nauc_recall_at_100_std
value: 28.41850069503361
- type: nauc_recall_at_10_diff1
value: 25.189622794479998
- type: nauc_recall_at_10_max
value: 30.82355277039427
- type: nauc_recall_at_10_std
value: 0.0964544736531047
- type: nauc_recall_at_1_diff1
value: 44.36306892906905
- type: nauc_recall_at_1_max
value: 25.61348630699028
- type: nauc_recall_at_1_std
value: -8.713074613333902
- type: nauc_recall_at_20_diff1
value: 20.43424504746087
- type: nauc_recall_at_20_max
value: 33.96010554649377
- type: nauc_recall_at_20_std
value: 6.900984030301936
- type: nauc_recall_at_3_diff1
value: 33.86531858793492
- type: nauc_recall_at_3_max
value: 27.725692256711188
- type: nauc_recall_at_3_std
value: -8.533124289305709
- type: nauc_recall_at_5_diff1
value: 32.006964557701686
- type: nauc_recall_at_5_max
value: 31.493370659289806
- type: nauc_recall_at_5_std
value: -4.8639793547793255
- type: ndcg_at_1
value: 60.461
- type: ndcg_at_10
value: 68.529
- type: ndcg_at_100
value: 71.664
- type: ndcg_at_1000
value: 72.396
- type: ndcg_at_20
value: 70.344
- type: ndcg_at_3
value: 61.550000000000004
- type: ndcg_at_5
value: 64.948
- type: precision_at_1
value: 60.461
- type: precision_at_10
value: 13.28
- type: precision_at_100
value: 1.555
- type: precision_at_1000
value: 0.164
- type: precision_at_20
value: 7.216
- type: precision_at_3
value: 33.077
- type: precision_at_5
value: 23.014000000000003
- type: recall_at_1
value: 42.529
- type: recall_at_10
value: 81.169
- type: recall_at_100
value: 93.154
- type: recall_at_1000
value: 98.18299999999999
- type: recall_at_20
value: 87.132
- type: recall_at_3
value: 63.905
- type: recall_at_5
value: 71.967
- task:
type: Classification
dataset:
name: MTEB RuReviewsClassification (default)
type: ai-forever/ru-reviews-classification
config: default
split: test
revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a
metrics:
- type: accuracy
value: 61.17675781250001
- type: f1
value: 60.354535346041374
- type: f1_weighted
value: 60.35437313166116
- type: main_score
value: 61.17675781250001
- task:
type: STS
dataset:
name: MTEB RuSTSBenchmarkSTS (default)
type: ai-forever/ru-stsbenchmark-sts
config: default
split: test
revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018
metrics:
- type: cosine_pearson
value: 78.1301041727274
- type: cosine_spearman
value: 78.08238025421747
- type: euclidean_pearson
value: 77.35224254583635
- type: euclidean_spearman
value: 78.08235336582496
- type: main_score
value: 78.08238025421747
- type: manhattan_pearson
value: 77.24138550052075
- type: manhattan_spearman
value: 77.98199107904142
- type: pearson
value: 78.1301041727274
- type: spearman
value: 78.08238025421747
- task:
type: Classification
dataset:
name: MTEB RuSciBenchGRNTIClassification (default)
type: ai-forever/ru-scibench-grnti-classification
config: default
split: test
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
metrics:
- type: accuracy
value: 54.990234375
- type: f1
value: 53.537019057131374
- type: f1_weighted
value: 53.552745354520766
- type: main_score
value: 54.990234375
- task:
type: Clustering
dataset:
name: MTEB RuSciBenchGRNTIClusteringP2P (default)
type: ai-forever/ru-scibench-grnti-classification
config: default
split: test
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
metrics:
- type: main_score
value: 50.775228895355106
- type: v_measure
value: 50.775228895355106
- type: v_measure_std
value: 0.9533571150165796
- task:
type: Classification
dataset:
name: MTEB RuSciBenchOECDClassification (default)
type: ai-forever/ru-scibench-oecd-classification
config: default
split: test
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
metrics:
- type: accuracy
value: 41.71875
- type: f1
value: 39.289100975858304
- type: f1_weighted
value: 39.29257829217775
- type: main_score
value: 41.71875
- task:
type: Clustering
dataset:
name: MTEB RuSciBenchOECDClusteringP2P (default)
type: ai-forever/ru-scibench-oecd-classification
config: default
split: test
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
metrics:
- type: main_score
value: 45.10904808834516
- type: v_measure
value: 45.10904808834516
- type: v_measure_std
value: 1.0572643410157534
- task:
type: Classification
dataset:
name: MTEB SIB200Classification (rus_Cyrl)
type: mteb/sib200
config: rus_Cyrl
split: test
revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b
metrics:
- type: accuracy
value: 66.36363636363637
- type: f1
value: 64.6940336621617
- type: f1_weighted
value: 66.43317771876966
- type: main_score
value: 66.36363636363637
- task:
type: Clustering
dataset:
name: MTEB SIB200ClusteringS2S (rus_Cyrl)
type: mteb/sib200
config: rus_Cyrl
split: test
revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b
metrics:
- type: main_score
value: 33.99178497314711
- type: v_measure
value: 33.99178497314711
- type: v_measure_std
value: 4.036337464043786
- task:
type: STS
dataset:
name: MTEB STS22.v2 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: d31f33a128469b20e357535c39b82fb3c3f6f2bd
metrics:
- type: cosine_pearson
value: 50.724322379215934
- type: cosine_spearman
value: 59.90449732164651
- type: euclidean_pearson
value: 50.227545226784024
- type: euclidean_spearman
value: 59.898906527601085
- type: main_score
value: 59.90449732164651
- type: manhattan_pearson
value: 50.21762139819405
- type: manhattan_spearman
value: 59.761039813759
- type: pearson
value: 50.724322379215934
- type: spearman
value: 59.90449732164651
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (ru)
type: mteb/stsb_multi_mt
config: ru
split: dev
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cosine_pearson
value: 78.43928769569945
- type: cosine_spearman
value: 78.23961768018884
- type: euclidean_pearson
value: 77.4718694027985
- type: euclidean_spearman
value: 78.23887044760475
- type: main_score
value: 78.23961768018884
- type: manhattan_pearson
value: 77.34517128089547
- type: manhattan_spearman
value: 78.1146477340426
- type: pearson
value: 78.43928769569945
- type: spearman
value: 78.23961768018884
- task:
type: MultilabelClassification
dataset:
name: MTEB SensitiveTopicsClassification (default)
type: ai-forever/sensitive-topics-classification
config: default
split: test
revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2
metrics:
- type: accuracy
value: 22.8125
- type: f1
value: 17.31969589593409
- type: lrap
value: 33.82412380642287
- type: main_score
value: 22.8125
- task:
type: PairClassification
dataset:
name: MTEB TERRa (default)
type: ai-forever/terra-pairclassification
config: default
split: dev
revision: 7b58f24536063837d644aab9a023c62199b2a612
metrics:
- type: cosine_accuracy
value: 57.32899022801303
- type: cosine_accuracy_threshold
value: 85.32201051712036
- type: cosine_ap
value: 55.14264553720072
- type: cosine_f1
value: 66.83544303797468
- type: cosine_f1_threshold
value: 85.32201051712036
- type: cosine_precision
value: 54.54545454545454
- type: cosine_recall
value: 86.27450980392157
- type: dot_accuracy
value: 57.32899022801303
- type: dot_accuracy_threshold
value: 85.32201051712036
- type: dot_ap
value: 55.14264553720072
- type: dot_f1
value: 66.83544303797468
- type: dot_f1_threshold
value: 85.32201051712036
- type: dot_precision
value: 54.54545454545454
- type: dot_recall
value: 86.27450980392157
- type: euclidean_accuracy
value: 57.32899022801303
- type: euclidean_accuracy_threshold
value: 54.18117046356201
- type: euclidean_ap
value: 55.14264553720072
- type: euclidean_f1
value: 66.83544303797468
- type: euclidean_f1_threshold
value: 54.18117046356201
- type: euclidean_precision
value: 54.54545454545454
- type: euclidean_recall
value: 86.27450980392157
- type: main_score
value: 55.14264553720072
- type: manhattan_accuracy
value: 57.32899022801303
- type: manhattan_accuracy_threshold
value: 828.8480758666992
- type: manhattan_ap
value: 55.077974053622555
- type: manhattan_f1
value: 66.82352941176471
- type: manhattan_f1_threshold
value: 885.6784820556641
- type: manhattan_precision
value: 52.20588235294118
- type: manhattan_recall
value: 92.81045751633987
- type: max_ap
value: 55.14264553720072
- type: max_f1
value: 66.83544303797468
- type: max_precision
value: 54.54545454545454
- type: max_recall
value: 92.81045751633987
- type: similarity_accuracy
value: 57.32899022801303
- type: similarity_accuracy_threshold
value: 85.32201051712036
- type: similarity_ap
value: 55.14264553720072
- type: similarity_f1
value: 66.83544303797468
- type: similarity_f1_threshold
value: 85.32201051712036
- type: similarity_precision
value: 54.54545454545454
- type: similarity_recall
value: 86.27450980392157
- task:
type: PairClassification
dataset:
name: MTEB XNLI (ru)
type: mteb/xnli
config: ru
split: test
revision: 09698e0180d87dc247ca447d3a1248b931ac0cdb
metrics:
- type: cosine_accuracy
value: 67.6923076923077
- type: cosine_accuracy_threshold
value: 87.6681923866272
- type: cosine_ap
value: 73.18693800863593
- type: cosine_f1
value: 70.40641099026904
- type: cosine_f1_threshold
value: 85.09706258773804
- type: cosine_precision
value: 57.74647887323944
- type: cosine_recall
value: 90.17595307917888
- type: dot_accuracy
value: 67.6923076923077
- type: dot_accuracy_threshold
value: 87.66818642616272
- type: dot_ap
value: 73.18693800863593
- type: dot_f1
value: 70.40641099026904
- type: dot_f1_threshold
value: 85.09706258773804
- type: dot_precision
value: 57.74647887323944
- type: dot_recall
value: 90.17595307917888
- type: euclidean_accuracy
value: 67.6923076923077
- type: euclidean_accuracy_threshold
value: 49.662476778030396
- type: euclidean_ap
value: 73.18693800863593
- type: euclidean_f1
value: 70.40641099026904
- type: euclidean_f1_threshold
value: 54.59475517272949
- type: euclidean_precision
value: 57.74647887323944
- type: euclidean_recall
value: 90.17595307917888
- type: main_score
value: 73.18693800863593
- type: manhattan_accuracy
value: 67.54578754578755
- type: manhattan_accuracy_threshold
value: 777.1001815795898
- type: manhattan_ap
value: 72.98861474758783
- type: manhattan_f1
value: 70.6842435655995
- type: manhattan_f1_threshold
value: 810.3782653808594
- type: manhattan_precision
value: 61.80021953896817
- type: manhattan_recall
value: 82.55131964809385
- type: max_ap
value: 73.18693800863593
- type: max_f1
value: 70.6842435655995
- type: max_precision
value: 61.80021953896817
- type: max_recall
value: 90.17595307917888
- type: similarity_accuracy
value: 67.6923076923077
- type: similarity_accuracy_threshold
value: 87.6681923866272
- type: similarity_ap
value: 73.18693800863593
- type: similarity_f1
value: 70.40641099026904
- type: similarity_f1_threshold
value: 85.09706258773804
- type: similarity_precision
value: 57.74647887323944
- type: similarity_recall
value: 90.17595307917888
- task:
type: PairClassification
dataset:
name: MTEB XNLIV2 (russian)
type: mteb/xnli2.0-multi-pair
config: russian
split: test
revision: 5b7d477a8c62cdd18e2fed7e015497c20b4371ad
metrics:
- type: cosine_accuracy
value: 68.35164835164835
- type: cosine_accuracy_threshold
value: 88.48621845245361
- type: cosine_ap
value: 73.10205506215699
- type: cosine_f1
value: 71.28712871287128
- type: cosine_f1_threshold
value: 87.00399398803711
- type: cosine_precision
value: 61.67023554603854
- type: cosine_recall
value: 84.4574780058651
- type: dot_accuracy
value: 68.35164835164835
- type: dot_accuracy_threshold
value: 88.48622441291809
- type: dot_ap
value: 73.10191110714706
- type: dot_f1
value: 71.28712871287128
- type: dot_f1_threshold
value: 87.00399398803711
- type: dot_precision
value: 61.67023554603854
- type: dot_recall
value: 84.4574780058651
- type: euclidean_accuracy
value: 68.35164835164835
- type: euclidean_accuracy_threshold
value: 47.98704385757446
- type: euclidean_ap
value: 73.10205506215699
- type: euclidean_f1
value: 71.28712871287128
- type: euclidean_f1_threshold
value: 50.982362031936646
- type: euclidean_precision
value: 61.67023554603854
- type: euclidean_recall
value: 84.4574780058651
- type: main_score
value: 73.10205506215699
- type: manhattan_accuracy
value: 67.91208791208791
- type: manhattan_accuracy_threshold
value: 746.1360931396484
- type: manhattan_ap
value: 72.8954736175069
- type: manhattan_f1
value: 71.1297071129707
- type: manhattan_f1_threshold
value: 808.0789566040039
- type: manhattan_precision
value: 60.04036326942482
- type: manhattan_recall
value: 87.2434017595308
- type: max_ap
value: 73.10205506215699
- type: max_f1
value: 71.28712871287128
- type: max_precision
value: 61.67023554603854
- type: max_recall
value: 87.2434017595308
- type: similarity_accuracy
value: 68.35164835164835
- type: similarity_accuracy_threshold
value: 88.48621845245361
- type: similarity_ap
value: 73.10205506215699
- type: similarity_f1
value: 71.28712871287128
- type: similarity_f1_threshold
value: 87.00399398803711
- type: similarity_precision
value: 61.67023554603854
- type: similarity_recall
value: 84.4574780058651
- task:
type: Retrieval
dataset:
name: MTEB XQuADRetrieval (ru)
type: google/xquad
config: ru
split: validation
revision: 51adfef1c1287aab1d2d91b5bead9bcfb9c68583
metrics:
- type: main_score
value: 95.705
- type: map_at_1
value: 90.802
- type: map_at_10
value: 94.427
- type: map_at_100
value: 94.451
- type: map_at_1000
value: 94.451
- type: map_at_20
value: 94.446
- type: map_at_3
value: 94.121
- type: map_at_5
value: 94.34
- type: mrr_at_1
value: 90.80168776371308
- type: mrr_at_10
value: 94.42659567343111
- type: mrr_at_100
value: 94.45099347521871
- type: mrr_at_1000
value: 94.45099347521871
- type: mrr_at_20
value: 94.44574530017569
- type: mrr_at_3
value: 94.12095639943743
- type: mrr_at_5
value: 94.34036568213786
- type: nauc_map_at_1000_diff1
value: 87.40573202946949
- type: nauc_map_at_1000_max
value: 65.56220344468791
- type: nauc_map_at_1000_std
value: 8.865583291735863
- type: nauc_map_at_100_diff1
value: 87.40573202946949
- type: nauc_map_at_100_max
value: 65.56220344468791
- type: nauc_map_at_100_std
value: 8.865583291735863
- type: nauc_map_at_10_diff1
value: 87.43657080570291
- type: nauc_map_at_10_max
value: 65.71295628534446
- type: nauc_map_at_10_std
value: 9.055399339099655
- type: nauc_map_at_1_diff1
value: 88.08395824560428
- type: nauc_map_at_1_max
value: 62.92813192908893
- type: nauc_map_at_1_std
value: 6.738987385482432
- type: nauc_map_at_20_diff1
value: 87.40979818966589
- type: nauc_map_at_20_max
value: 65.59474346926105
- type: nauc_map_at_20_std
value: 8.944420599300914
- type: nauc_map_at_3_diff1
value: 86.97771892161035
- type: nauc_map_at_3_max
value: 66.14330030122467
- type: nauc_map_at_3_std
value: 8.62516327793521
- type: nauc_map_at_5_diff1
value: 87.30273362211798
- type: nauc_map_at_5_max
value: 66.1522476584607
- type: nauc_map_at_5_std
value: 9.780940862679724
- type: nauc_mrr_at_1000_diff1
value: 87.40573202946949
- type: nauc_mrr_at_1000_max
value: 65.56220344468791
- type: nauc_mrr_at_1000_std
value: 8.865583291735863
- type: nauc_mrr_at_100_diff1
value: 87.40573202946949
- type: nauc_mrr_at_100_max
value: 65.56220344468791
- type: nauc_mrr_at_100_std
value: 8.865583291735863
- type: nauc_mrr_at_10_diff1
value: 87.43657080570291
- type: nauc_mrr_at_10_max
value: 65.71295628534446
- type: nauc_mrr_at_10_std
value: 9.055399339099655
- type: nauc_mrr_at_1_diff1
value: 88.08395824560428
- type: nauc_mrr_at_1_max
value: 62.92813192908893
- type: nauc_mrr_at_1_std
value: 6.738987385482432
- type: nauc_mrr_at_20_diff1
value: 87.40979818966589
- type: nauc_mrr_at_20_max
value: 65.59474346926105
- type: nauc_mrr_at_20_std
value: 8.944420599300914
- type: nauc_mrr_at_3_diff1
value: 86.97771892161035
- type: nauc_mrr_at_3_max
value: 66.14330030122467
- type: nauc_mrr_at_3_std
value: 8.62516327793521
- type: nauc_mrr_at_5_diff1
value: 87.30273362211798
- type: nauc_mrr_at_5_max
value: 66.1522476584607
- type: nauc_mrr_at_5_std
value: 9.780940862679724
- type: nauc_ndcg_at_1000_diff1
value: 87.37823158814116
- type: nauc_ndcg_at_1000_max
value: 66.00874244792789
- type: nauc_ndcg_at_1000_std
value: 9.479929342875067
- type: nauc_ndcg_at_100_diff1
value: 87.37823158814116
- type: nauc_ndcg_at_100_max
value: 66.00874244792789
- type: nauc_ndcg_at_100_std
value: 9.479929342875067
- type: nauc_ndcg_at_10_diff1
value: 87.54508467181488
- type: nauc_ndcg_at_10_max
value: 66.88756470312894
- type: nauc_ndcg_at_10_std
value: 10.812624405397022
- type: nauc_ndcg_at_1_diff1
value: 88.08395824560428
- type: nauc_ndcg_at_1_max
value: 62.92813192908893
- type: nauc_ndcg_at_1_std
value: 6.738987385482432
- type: nauc_ndcg_at_20_diff1
value: 87.42097894104597
- type: nauc_ndcg_at_20_max
value: 66.37031898778943
- type: nauc_ndcg_at_20_std
value: 10.34862538094813
- type: nauc_ndcg_at_3_diff1
value: 86.50039907157999
- type: nauc_ndcg_at_3_max
value: 67.97798288917929
- type: nauc_ndcg_at_3_std
value: 10.162410286746852
- type: nauc_ndcg_at_5_diff1
value: 87.13322094568531
- type: nauc_ndcg_at_5_max
value: 68.08576118683821
- type: nauc_ndcg_at_5_std
value: 12.639637379592855
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 100.0
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_10_diff1
value: 93.46711505595813
- type: nauc_precision_at_10_max
value: 100.0
- type: nauc_precision_at_10_std
value: 65.42573557179935
- type: nauc_precision_at_1_diff1
value: 88.08395824560428
- type: nauc_precision_at_1_max
value: 62.92813192908893
- type: nauc_precision_at_1_std
value: 6.738987385482432
- type: nauc_precision_at_20_diff1
value: 91.28948674127133
- type: nauc_precision_at_20_max
value: 100.0
- type: nauc_precision_at_20_std
value: 90.74278258632364
- type: nauc_precision_at_3_diff1
value: 82.64606115071832
- type: nauc_precision_at_3_max
value: 83.26201582412921
- type: nauc_precision_at_3_std
value: 23.334013491433762
- type: nauc_precision_at_5_diff1
value: 85.0867539350284
- type: nauc_precision_at_5_max
value: 96.57011448655484
- type: nauc_precision_at_5_std
value: 56.46869543426768
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_10_diff1
value: 93.46711505595623
- type: nauc_recall_at_10_max
value: 100.0
- type: nauc_recall_at_10_std
value: 65.42573557180279
- type: nauc_recall_at_1_diff1
value: 88.08395824560428
- type: nauc_recall_at_1_max
value: 62.92813192908893
- type: nauc_recall_at_1_std
value: 6.738987385482432
- type: nauc_recall_at_20_diff1
value: 91.28948674127474
- type: nauc_recall_at_20_max
value: 100.0
- type: nauc_recall_at_20_std
value: 90.74278258632704
- type: nauc_recall_at_3_diff1
value: 82.64606115071967
- type: nauc_recall_at_3_max
value: 83.26201582413023
- type: nauc_recall_at_3_std
value: 23.334013491434007
- type: nauc_recall_at_5_diff1
value: 85.08675393502854
- type: nauc_recall_at_5_max
value: 96.57011448655487
- type: nauc_recall_at_5_std
value: 56.46869543426658
- type: ndcg_at_1
value: 90.802
- type: ndcg_at_10
value: 95.705
- type: ndcg_at_100
value: 95.816
- type: ndcg_at_1000
value: 95.816
- type: ndcg_at_20
value: 95.771
- type: ndcg_at_3
value: 95.11699999999999
- type: ndcg_at_5
value: 95.506
- type: precision_at_1
value: 90.802
- type: precision_at_10
value: 9.949
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.987
- type: precision_at_3
value: 32.658
- type: precision_at_5
value: 19.781000000000002
- type: recall_at_1
value: 90.802
- type: recall_at_10
value: 99.494
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 99.747
- type: recall_at_3
value: 97.975
- type: recall_at_5
value: 98.90299999999999
---
## Multilingual-E5-small
**Disclaimer**: This model is cloned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). The only difference from the original model is `pad_token_id` in `config.json` which is corrected to `1`.
[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 12 layers and the embedding size is 384.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
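# Mean-pool the last hidden states, masking out padding positions before averaging.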
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-small')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-small')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Supported Languages
This model is initialized from [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384)
and continually trained on a mixture of multilingual datasets.
It supports 100 languages from xlm-roberta,
but low-resource languages may see performance degradation.
## Training Details
**Initialization**: [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384)
**First stage**: contrastive pre-training with weak supervision
| Dataset | Weak supervision | # of text pairs |
|--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------|
| Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B |
| [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M |
| [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B |
| [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M |
| Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M |
| [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M |
| [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M |
| [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M |
| [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M |
**Second stage**: supervised fine-tuning
| Dataset | Language | # of text pairs |
|----------------------------------------------------------------------------------------|--------------|-----------------|
| [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k |
| [NQ](https://github.com/facebookresearch/DPR) | English | 70k |
| [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k |
| [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k |
| [ELI5](https://huggingface.co/datasets/eli5) | English | 500k |
| [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k |
| [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [SQuAD](https://huggingface.co/datasets/squad) | English | 87k |
| [Quora](https://huggingface.co/datasets/quora) | English | 150k |
| [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k |
| [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k |
For all labeled datasets, we only use the training set for fine-tuning.
For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672).
## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)
| Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th |
|-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- |
| BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 |
| mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 |
| BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 |
| | |
| multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 |
| multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 |
| multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 |
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/multilingual-e5-small')
input_texts = [
'query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
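The embeddings returned by `encode` are already L2-normalized, so query-passage scores can be computed with a plain dot product, mirroring the Transformers example above (a minimal sketch reusing the `embeddings` array from the previous block):
```python
# embeddings[:2] are the two queries, embeddings[2:] the two passages.
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```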
Package requirements
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model is trained; otherwise you will see a performance degradation.
Here are some rules of thumb (a short sketch follows the list):
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores distribute around 0.7 to 1.0?**
This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
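As a small illustration (with made-up scores), only the ordering is used for retrieval:
```python
import numpy as np

# Hypothetical query-passage scores; values cluster in the 0.7-1.0 band,
# but ranking the passages only depends on their relative order.
scores = np.array([0.92, 0.74, 0.81])
ranking = np.argsort(-scores)   # indices from most to least relevant
print(ranking.tolist())         # [0, 2, 1]
```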
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens.
| [
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-10-31T18:17:56 | 2024-10-31T18:29:48 | 79 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1b-deduped-v0 - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1b-deduped-v0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-1b-deduped-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q2_K.gguf) | Q2_K | 0.39GB |
| [pythia-1b-deduped-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q3_K_S.gguf) | Q3_K_S | 0.45GB |
| [pythia-1b-deduped-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q3_K.gguf) | Q3_K | 0.51GB |
| [pythia-1b-deduped-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [pythia-1b-deduped-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [pythia-1b-deduped-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.IQ4_XS.gguf) | IQ4_XS | 0.54GB |
| [pythia-1b-deduped-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q4_0.gguf) | Q4_0 | 0.56GB |
| [pythia-1b-deduped-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.IQ4_NL.gguf) | IQ4_NL | 0.56GB |
| [pythia-1b-deduped-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q4_K_S.gguf) | Q4_K_S | 0.56GB |
| [pythia-1b-deduped-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q4_K.gguf) | Q4_K | 0.61GB |
| [pythia-1b-deduped-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q4_K_M.gguf) | Q4_K_M | 0.61GB |
| [pythia-1b-deduped-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q4_1.gguf) | Q4_1 | 0.61GB |
| [pythia-1b-deduped-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q5_0.gguf) | Q5_0 | 0.66GB |
| [pythia-1b-deduped-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q5_K_S.gguf) | Q5_K_S | 0.66GB |
| [pythia-1b-deduped-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q5_K.gguf) | Q5_K | 0.71GB |
| [pythia-1b-deduped-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q5_K_M.gguf) | Q5_K_M | 0.71GB |
| [pythia-1b-deduped-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q5_1.gguf) | Q5_1 | 0.72GB |
| [pythia-1b-deduped-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q6_K.gguf) | Q6_K | 0.78GB |
| [pythia-1b-deduped-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-deduped-v0-gguf/blob/main/pythia-1b-deduped-v0.Q8_0.gguf) | Q8_0 | 1.0GB |
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
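# Load the step-3000 checkpoint of the 70M deduped model and cache it locally.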
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
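By the same pattern, the final (main-branch) checkpoint of this model can be loaded as follows; this is a minimal sketch, and passing `revision="step143000"` would select the same weights explicitly:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the final checkpoint of Pythia-1B-deduped from the Hugging Face Hub.
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-1b-deduped")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b-deduped")

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(tokens[0]))
```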
### Training
#### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for the equivalent of 143,000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a batch
size of 4M tokens were originally trained for 71,500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
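As a quick sanity check of the numbers above (illustrative arithmetic only):
```python
tokens_per_step = 2_097_152                    # 2M-token batch size
total_steps = 143_000
tokens_seen = tokens_per_step * total_steps
print(tokens_seen)                             # 299,892,736,000 tokens

checkpoint_interval = 2_097_152_000            # tokens between saved checkpoints
print(tokens_seen // checkpoint_interval)      # 143 checkpoints
print(checkpoint_interval // tokens_per_step)  # 1,000 steps between checkpoints
```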
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
Tejasw1/votum-acts-v1 | Tejasw1 | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:22370",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-07T16:04:33 | 2024-12-07T16:04:46 | 79 | 0 | ---
base_model: BAAI/bge-base-en-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:22370
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Represent this sentence for searching relevant passages: What
are the conditions a natural person must fulfill to own a media institution or
media outlet according to Article (4) of the Federal Decree by Law Concerning
Media Regulation?'
sentences:
- "Document: Article (4) Ownership of Media Institutions and Media Outlets \
\ \
\ 1. A natural person may own any media institution or media\
\ outlet, after fulfilling the following conditions:\r A. He must have full legal\
\ capacity.\r B. He must be of good reputation and conduct, and has not been\
\ previously sentenced to imprisonment penalty in a felony or misdemeanour breaching\
\ honour or trust, unless he has been rehabilitated.\r C. He must obtain the\
\ required approvals from the Concerned Authorities. D. Any other conditions determined\
\ by the Executive Regulation of this Law by Decree.\r2. A legal person may own\
\ any media institution or media outlet, after fulfilling the following conditions:\r\
A. Taking the form of a sole proprietorship or any form of company stipulated\
\ in the Commercial Companies Law in force in the State.\r B. The activity of\
\ the legal person shall be limited to media activities.\r C. Obtaining the required\
\ approvals from the Concerned Authorities.\r D. Any other conditions determined\
\ by the Executive Regulation of this Law by Decree."
- "Document: Article (30) General Interest Deduction Limitation Rule \
\ \
\ 1. A Taxable Person's Net Interest Expense shall be deductible\
\ up to (30%) (thirty percent) of the Taxable Person's accounting earnings before\
\ interest, taxes, depreciation and amortization (EBITDA) for the relevant Tax\
\ Period, excluding any Exempt Income under Article (22) of this Decree-Law.\r\
2. A Taxable Person's Net Interest Expense for a Tax Period is the amount of the\
\ Interest Expense incurred during the Tax Period in addition to the amount of\
\ any Net Interest Expense carried forward under Clause (4) of this Article, which\
\ exceeds the taxable Interest income derived during that same period.\r3. The\
\ limitation under Clause (1) of this Article shall not apply where the Net Interest\
\ Expense of the Taxable Person for the relevant Tax Period does not exceed an\
\ amount specified by the Minister.\r4. The amount of Net Interest Expense that\
\ is not deductible under Clause (1) of this Article may be carried forward and\
\ deducted in the subsequent (10) ten Tax Periods in the order in which the amount\
\ was incurred, subject to Clauses (1) and (2) of this Article.\r5. Interest Expense\
\ that is not deductible under any other provision of this Decree-Law shall be\
\ excluded from the calculation of Net Interest Expense under Clause (2) of this\
\ Article.\r6. Clauses (1) to (5) of this Article shall not apply to the following\
\ Persons:\r a. A Bank.\r b. An Insurer.\r c. A natural person undertaking\
\ a Business or Business Activity in the State.\r d. Any other Person as may\
\ be determined by the Minister.\r7. The Minister may issue a decision to specify\
\ the application of Clauses (1) and (2) of this Article to a Taxable Person that\
\ is related to one or more Persons through ownership or control and they are\
\ obligated under applicable accounting standards to have consolidated financial\
\ statements."
- "Document: Article (502) \
\ 1. Unless otherwise agreed,\
\ when a safe deposit box is rented out to several renters, any one of them may\
\ use it separately.\r2. Where a renter dies, the bank may only, after becoming\
\ aware of the death, give permission for the safe deposit box to be opened with\
\ the approval of all parties concerned or based on a court decision."
- source_sentence: 'Represent this sentence for searching relevant passages: What
actions can be taken against a person who wilfully furnishes false or incorrect
information in a declaration under Section 90(12) of the Companies Act, 2013?'
sentences:
- 'Document: (1) The Registrar shill have power at all times to rectify any mistakein
order to bring the entry in the Register of Firms relating to any firm into conformity
with thedocuments relating to that firm filed under this Chapter.(2) On application
made by all the parties who have signed any document relating to a firm filedunder
this Chapter, the Registrar may rectify any mistake in such document or in the
record or notethereof made in the Register of Firms.'
- "Document: (1) The appropriate Government may, subject to the condition ofprevious\
\ publication, make rules for carrying out the purposes of this Act.(2) In particular,\
\ and without prejudice to the generality of the foregoing power, such rules mayprovide\
\ for all or any of the following matters, namely:--(a) the number of persons\
\ to be appointed as members representing various interests on theCentral Board\
\ and the State Board, the term of their office and other conditions of service,\
\ theprocedure to be followed in the discharge of their functions and the manner\
\ of filling vacancies;(b) the times and places of the meetings of any committee\
\ constituted under this Act, theprocedure to be followed at such meetings including\
\ the quorum necessary for the transaction ofbusiness, and the fees and allowances\
\ that may be paid to the members of a committee;(c) the manner in which establishments\
\ may be registered under section 7, the levy of a feetherefor and the form of\
\ certificate of registration;(d) the form of application for the grant or renewal\
\ of a licence under section 13 and theparticulars it may contain;(e) the manner\
\ in which an investigation is to be made in respect of an application for the\
\ grant ofa licence and the matters to be taken into account in granting or refusing\
\ a licence;(f) the form of a licence which may be granted or renewed under section\
\ 12 and the conditionssubject to which the licence may be granted or renewed,\
\ the fees to be levied for the grant or renewalof a licence and the deposit of\
\ any sum as security for the performance of such conditions;(g) the circumstances\
\ under which licences may be varied or amended under section 14;(h) the form\
\ and manner in which appeals may be filed under section 15 and the procedure\
\ to befollowed by appellate officers in disposing of the appeals;(i) the time\
\ within which facilities required by this Act to be provided and maintained may\
\ be soprovided by the contractor and in case of default on the part of the contractor,\
\ by the principalemployer;(j) the number and types of canteens, rest-rooms, latrines\
\ and urinals that should be provided andmaintained;(k) the type of equipment\
\ that should be provided in the first-aid boxes;(l) the period within which wages\
\ payable to contract labour should be paid by the contractorunder sub-section\
\ (1) of section 21;(m) the form of registers and records to be maintained by\
\ principal employers and contractors;(n) the submission of returns, forms in\
\ which, and the authorities to which, such returns may besubmitted;(o) the collection\
\ of any information or statistics in relation to contract labour; and(p) any\
\ other matter which has to be, or may be, prescribed under this Act.(3) Every\
\ rule made by the Central Government under this Act shall be laid as soon as\
\ may be after itis made, before each House of Parliament while it is in session\
\ for a total period of thirty days which maybe comprised in one session or in\
\ two successive sessions, and if before the expiry of the session in whichit\
\ is so laid or the session immediately following, both Houses agree in making\
\ any modification in therule or both Houses agree that the rule should not be\
\ made, the rule shall thereafter have effect only insuch modified form or be\
\ of no effect, as the case may be; so, however, that any such modification orannulment\
\ shall be without prejudice to the validity of anything previously done under\
\ that rule.1[(4) Every rule made by the State Government under this Act shall\
\ be laid, as soon as may be after itis made, before the State Legislature.]\t\
\t\t\t\t\t\t\t\t1. Ins. by Act 4 of 2005, s. 2 and the Schedule (w.e.f. 11-1-2005)."
- "Document: \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\
\t\t\t1[90. Register of significant beneficial owners in a company.--(1) Every\
\ individual, who actingalone or together, or through one or more persons or trust,\
\ including a trust and persons resident outsideIndia, holds beneficial interests,\
\ of not less than twenty-five per cent. or such other percentage as may beprescribed,\
\ in shares of a company or the right to exercise, or the actual exercising of\
\ significant influenceor control as defined in clause (27) of section 2, over\
\ the company (herein referred to as \"significantbeneficial owner\"), shall make\
\ a declaration to the company, specifying the nature of his interest andother\
\ particulars, in such manner and within such period of acquisition of the beneficial\
\ interest or rightsand any change thereof, as may be prescribed:Provided that\
\ the Central Government may prescribe a class or classes of persons who shall\
\ not berequired to make declaration under this sub-section.(2) Every company\
\ shall maintain a register of the interest declared by individuals undersub-section\
\ (1) and changes therein which shall include the name of individual, his date\
\ of birth, address,details of ownership in the company and such other details\
\ as may be prescribed.(3) The register maintained under sub-section (2) shall\
\ be open to inspection by any member of thecompany on payment of such fees as\
\ may be prescribed.(4) Every company shall file a return of significant beneficial\
\ owners of the company and changestherein with the Registrar containing names,\
\ addresses and other details as may be prescribed within suchtime, in such form\
\ and manner as may be prescribed.2[(4A) Every company shall take necessary steps\
\ to identify an individual who is a significantbeneficial owner in relation to\
\ the company and require him to comply with the provisions of thissection.](5)\
\ A company shall give notice, in the prescribed manner, to any person (whether\
\ or not a member ofthe company) whom the company knows or has reasonable cause\
\ to believe--(a) to be a significant beneficial owner of the company;(b) to be\
\ having knowledge of the identity of a significant beneficial owner or another\
\ person likelyto have such knowledge; or(c) to have been a significant beneficial\
\ owner of the company at any time during the three yearsimmediately preceding\
\ the date on which the notice is issued,and who is not registered as a significant\
\ beneficial owner with the company as required under thissection.(6) The information\
\ required by the notice under sub-section (5) shall be given by the concernedperson\
\ within a period not exceeding thirty days of the date of the notice.(7) The\
\ company shall,--(a) where that person fails to give the company the information\
\ required by the notice within thetime specified therein; or(b) where the information\
\ given is not satisfactory,apply to the Tribunal within a period of fifteen days\
\ of the expiry of the period specified in the notice, foran order directing that\
\ the shares in question be subject to restrictions with regard to transfer of\
\ interest,suspension of all rights attached to the shares and such other matters\
\ as may be prescribed.(8) On any application made under sub-section (7), the\
\ Tribunal may, after giving an opportunity ofbeing heard to the parties concerned,\
\ make such order restricting the rights attached with the shares withina period\
\ of sixty days of receipt of application or such other period as may be prescribed.3[(9)\
\ The company or the person aggrieved by the order of the Tribunal may make an\
\ application tothe Tribunal for relaxation or lifting of the restrictions placed\
\ under sub-section (8), within a period of oneyear from the date of such order:Provided\
\ that if no such application has been filed within a period of one year from\
\ the date of theorder under sub-section (8), such shares shall be transferred,\
\ without any restrictions, to the authorityconstituted under sub-section (5)\
\ of section 125, in such manner as may be prescribed;]4[(9A) The Central Government\
\ may make rules for the purposes of this section.]5[(10) If any person fails\
\ to make a declaration as required under sub-section (1), he shall be liable\
\ toa penalty of fifty thousand rupees and in case of continuing failure, with\
\ a further penalty of one thousandrupees for each day after the first during\
\ which such failure continues, subject to a maximum of two lakhrupees.]6[(11)\
\ If a company, required to maintain register under sub-section (2) and file the\
\ information undersub-section (4) or required to take necessary steps under sub-section\
\ (4A), fails to do so or deniesinspection as provided therein, the company shall\
\ be liable to a penalty of one lakh rupees and in case ofcontinuing failure,\
\ with a further penalty of five hundred rupees for each day, after the first\
\ during whichsuch failure continues, subject to a maximum of five lakh rupees\
\ and every officer of the company who isin default shall be liable to a penalty\
\ of twenty-five thousand rupees and in case of continuing failure,with a further\
\ penalty of two hundred rupees for each day, after the first during which such\
\ failurecontinues, subject to a maximum of one lakh rupees.](12) If any person\
\ wilfully furnishes any false or incorrect information or suppresses any materialinformation\
\ of which he is aware in the declaration made under this section, he shall be\
\ liable to actionunder section 447.] \t\t\t\t\t\t\t\t\t1. Subs. by s. 22, ibid.,\
\ for section 90 (w.e.f. 13-6-2018).2. Ins. by Act 22 of 2019, s. 14 (w.e.f. 15-8-2019).3.\
\ Subs. by Act 22 of 2019, s. 14, for sub-section (9) (w.e.f. 2-11-2018).4. Ins.\
\ by s. 14, ibid. (w.e.f. 15-8-2019).5. Subs.by Act 29 of 2020, s. 19, for sub-section\
\ (10) (w.e.f. 21-12-2020).6. Subs. by s. 19, ibid., for sub-section (11) (w.e.f.\
\ 21-12-2020).\t\t\t\t\t\t\t\t\tRules YearDescriptionHindi DescriptionFiles(Eng)Files(Hindi)13-06-2018Companies\
\ (Significant Beneficial Owners) Rules, 2018."
- source_sentence: 'Represent this sentence for searching relevant passages: What
are "Technical Provisions" as per the Federal Decree-Law Regulating Insurance
Activities?'
sentences:
- 'Document: Article (1) Definitions For
the purpose of applying the provisions of this Decree-Law, the following words
and expressions shall bear the meanings assigned thereto respectively, unless
the context requires otherwise:The State: The United Arab Emirates.The CBUAE:
The Central Bank of UAE.Board: The CBUAE''s Board of Directors.Chairman: The Chairman
of the Board.Governor: The CBUAE''s Governor.Free Zone: Any financial free zone
established in the State under the provisions of Federal Law No. (8) of 2004,
on Financial Free Zones, or any other superseding law.Insurance Company (Insurer):
An insurance company incorporated in the State and a foreign insurance company
licensed to engage in insurance business in the State, either through a branch
or through an Insurance Agent.Reinsurance Company: A reinsurance company licensed
to engage in reinsurance business, either in the State or abroad.Companies: Insurance
and reinsurance companies.The Insured: A Person that enters into an insurance
policy with the Insurance Company for their benefit or the benefit of the named
Insured or the Beneficiary.Beneficiary: A Person who initially acquires the Insurance
Policy rights or to whom such rights are legally transferred.Insurance Policy:
A contract between the Insurer and the Insured setting out the insurance terms,
rights and obligations of both parties or the rights of the insurance Beneficiary,
and the annexes attached to the policy constitute an integral part thereof.Insurance
Agent: A Person licensed or authorized the CBUAE, and is approved by the Insurance
Company and authorized to carry out insurance activities on its own behalf or
on behalf of a branch thereof.Insurance Broker: A legal person licensed by the
CBUAE and acts as independent intermediary in insurance and reinsurance operations
between an insurance or re-insurance applicant on the one side and any Company
on the other side, and receives, in consideration of its efforts, a commission
from the Company with which insurance or reinsurance is concluded.Surveyor and
Loss Adjuster: A Person licensed or authorized by the CBUAE to detect and assess
the damage incurred as a result of the insured risk.Insurance Consultant: A Person
licensed or authorized by the CBUAE to examine insurance requirements for their
clients and give advice in respect of the suitable insurance coverage, assists
in preparing insurance requirements and receives their fees from their clients.Actuary:
A Person licensed or authorized the CBUAE to set the value and price of Insurance
Policies, and to asses the technical provisions, accounts and all matter related
thereto.Health Insurance Claims Management Company: A legal Person licensed the
CBUAE to engage in health insurance claims management business.Insurance-Related
Professionals: Any Person licensed or authorized the CBUAE to operate as an Insurance
Agent, Insurance Broker, Surveyor and Loss Adjuster, Insurance Consultant, Actuary
or health insurance claims manager, or any other profession related to insurance
as determined and regulated by a resolution of the Board.Branch: A branch of the
Company that carries out insurance activities in its own name.Premium: An amount
of money paid or payable by the Insured under the Insurance Policy and is called
"Contribution" in Takaful insurance.Authorized Manager: A natural Person appointed
by a foreign insurance Company to manage its branch in the State.Senior Employee:
Any Person who occupies an executive position equivalent to the functions of a
director-general, Authorized Manager or the deputy or assistant of either one,
or any department director, internal audit director or branch manager.Technical
Provisions: Provisions which the Insurer must deduct and retain to cover the Insured''s
accrued financial obligations vis-a-vis the Insured, pursuant to the provisions
of this Decree-Law.Solvency Margin: A surplus in the value of the Company''s existing
assets over its liabilities to such an extent that enables it to fulfil all its
obligations and pay the required insurance payouts once they become due without
impeding the Company''s business or weakening its financial position.Minimum Guarantee
Fund: An amount equal to one third of the required Solvency Margin or the amount
determined by the Board, whichever is greater.Auditor: A Person authorized to
carry out accounting and audit functions in the State.Takaful Insurance: A collective
contractual scheme intended to achieve solidarity and cooperation among a group
of contributors to address certain risks, where each one pays an amount of money
called "contribution" to be deposited in a Takaful insurance fund through which
compensation is to be paid to eligible persons when a risk is sustained.Higher
Sharia Authority [HAS]: The authority established under Federal Decree-Law No.
(14) of 2018, referred to hereinabove.Person: A natural and legal Person.Commercial
Register: The Register established with the competent authority under Federal
Decree-Law No. (37) of 2021, on the Commercial Register, or any other superseding
law.'
- "Document: \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\
\t\t\t(1) Every company belonging to such class orclasses of companies as may\
\ be prescribed shall have the following whole-time key managerialpersonnel,--(i)\
\ managing director, or Chief Executive Officer or manager and in their absence,\
\ a whole-timedirector;(ii) company secretary; and(iii) Chief Financial Officer\
\ :Provided that an individual shall not be appointed or reappointed as the chairperson\
\ of thecompany, in pursuance of the articles of the company, as well as the managing\
\ director or ChiefExecutive Officer of the company at the same time after the\
\ date of commencement of this Actunless,-- (a) the articles of such a company\
\ provide otherwise; or(b) the company does not carry multiple businesses:Provided\
\ further that nothing contained in the first proviso shall apply to such class\
\ ofcompanies engaged in multiple businesses and which has appointed one or more\
\ Chief ExecutiveOfficers for each such business as may be notified by the Central\
\ Government.(2) Every whole-time key managerial personnel of a company shall\
\ be appointed by means of aresolution of the Board containing the terms and conditions\
\ of the appointment including theremuneration.(3) A whole-time key managerial\
\ personnel shall not hold office in more than one company except inits subsidiary\
\ company at the same time:Provided that nothing contained in this sub-section\
\ shall disentitle a key managerial personnel frombeing a director of any company\
\ with the permission of the Board:Provided further that whole-time key managerial\
\ personnel holding office in more than one companyat the same time on the date\
\ of commencement of this Act, shall, within a period of six months from suchcommencement,\
\ choose one company, in which he wishes to continue to hold the office of keymanagerial\
\ personnel:Provided also that a company may appoint or employ a person as its\
\ managing director, if he is themanaging director or manager of one, and of not\
\ more than one, other company and such appointment oremployment is made or approved\
\ by a resolution passed at a meeting of the Board with the consent of allthe\
\ directors present at the meeting and of which meeting, and of the resolution\
\ to be moved thereat,specific notice has been given to all the directors then\
\ in India.(4) If the office of any whole-time key managerial personnel is vacated,\
\ the resulting vacancy shall befilled-up by the Board at a meeting of the Board\
\ within a period of six months from the date of suchvacancy.1[(5) If any company\
\ makes any default in complying with the provisions of this section, suchcompany\
\ shall be liable to a penalty of five lakh rupees and every director and key\
\ managerial personnelof the company who is in default shall be liable to a penalty\
\ of fifty thousand rupees and where the defaultis a continuing one, with a further\
\ penalty of one thousand rupees for each day after the first during whichsuch\
\ default continues but not exceeding five lakh rupees.]\t\t\t\t\t\t\t\t\t1. Subs.\
\ by Act 22 of 2019, s. 30, for sub-section (5) (w.e.f. 2-11-2018).\t\t\t\t\t\t\
\t\t\tRulesNotifications YearDescriptionHindi DescriptionFiles(Eng)Files(Hindi)31-03-2014Chapter\
\ XIII- The Companies (Appointment and Remuneration of Managerial Personnel) Rules,\
\ 2014.YearDescriptionHindi DescriptionFiles(Eng)Files(Hindi)25-07-2014Second\
\ Proviso to sub-section (1) of Section 203 of Companies Act, 2013"
- "Document: Article (75) Endorsement of Preventive Settlement Proposal \
\ \
\ Within (10) ten days following the Bankruptcy Department's\
\ receipt of the notification on approval by the creditors of the preventive settlement\
\ proposal and attachments thereof, the Bankruptcy Court shall endorse the proposal\
\ after verifying the fulfillment of the following conditions:\r1. The preventive\
\ settlement proposal is approved by the required majority.\r2. The preventive\
\ settlement proposal meets the standards of fairness, set hereinbelow:\r a.\
\ The creditors are provided with complete information and granted sufficient\
\ time to examine the preventive settlement proposal.\r b. Never prejudice the\
\ procedures set for the meeting of creditors and voting mentioned in the preventive\
\ settlement proposal submitted to the Bankruptcy Department before the initiation\
\ of the preventive settlement proceedings.\r c. Subject to the existing rights\
\ of creditors, especially the rights of creditors who hold mortgage and lien\
\ rights, and equality between rights holders with equal positions, especially\
\ with regard to sharing losses and distributing new rights."
- source_sentence: 'Represent this sentence for searching relevant passages: What
are the potential legal consequences for a public servant who corruptly makes
a report or decision in a judicial proceeding that is contrary to law under Section
257 of the Bharatiya Nyaya Sanhita, 2023?'
sentences:
- 'Document: Article (1) Definitions For
the purpose of applying the provisions of the present Decree-Law, the following
words and expressions shall bear the meanings assigned thereto respectively, unless
the context requires otherwise:The State: The United Arab Emirates.The Ministry:
The Ministry of Climate Change and Environment.The Minister: The Minister of Climate
Change and Environment.Entity Concerned: Any federal or local government entity
concerned with the application of the provisions of the present Decree-Law.Competent
Authority: The local authorities concerned in each emirate, including free zones.
Climate Change: A change of climate which is attributed directly or indirectly
to human activity that alters the composition of the global atmosphere and which
is in addition to natural climate variability observed over comparable time periods.Impacts
of Climate Change: Effects of climate change on natural and human systems. Impacts
generally refer to effects on lives, livelihoods, health, ecosystems, economies,
societies, cultures, services, and infrastructure due to the interaction of climate
changes or hazardous climate events occurring within a specific time period and
the vulnerability of an exposed society or system. Greenhouse Gases (GHGs): Gases
that contribute to the greenhouse effect and that absorb and re-emit infrared
radiation, which contribute to the greenhouse effect, the most important of which
are: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), Nitrogen trifluoride
(NF3), Hydrofluorocarbons (HFCs), Perfluorocarbons (PFCs) and Sulphur hexafluoride
(SF6).Emissions: Greenhouse gases released into the atmosphere as a result of
human activities, altering the atmospheric chemical composition and contributing
to air pollution and climate change.Sources: Public and private legal persons,
as well as individual enterprises, whose operations or activities result in the
release of greenhouse gases into the atmosphere.Sinks: Any process, activity or
mechanism which removes a greenhouse gas, an aerosol or a precursor of a greenhouse
gas from the atmosphere.Climate Change Mitigation: A human intervention that reduces
the sources of GHG emissions and/or enhances the sinks.Adaptation: The process
of adjustment to actual or expected climate and its effects to moderate or avoid
harm or to exploit beneficial opportunities.Emissions Inventory: A database of
emissions emitted in the State and the measures being taken or expected to be
taken to mitigate such emissions as well as the expected results, depending on
sources and sinks. National Carbon Credit Registry: A national record that keeps
track of the amount of carbon emissions. It also includes information on carbon
credits and carbon credit retirement.Carbon Offsetting: Climate change mitigation
actions undertaken to compensate for emissions by contributing to the release
of clean gases. Such compensation can be mandatory or voluntary and may include
contributing to or investing in projects or activities related to renewable energy,
improving energy efficiency, afforestation, or other projects that would reduce
greenhouse gas emissions in the atmosphere or eliminate and avoid emissions from
other sources.Carbon Capture, Use and Storage (CCUS): A process of separating
and capturing a relatively pure stream of carbon dioxide (CO2) from industrial
and energy-related sources to be used for other useful purposes (i.e. production
of chemical material or components), or stored underground or in deep geological
formations, usually at depths of 1 km or more, in order to be separated from the
atmosphere for a long period.Climate Neutrality: The idea of reducing greenhouse
gas emissions and achieving a balance between emissions by sources and removals
by sinks, in accordance with the periods and plans approved by the State.Shadow
Price of Carbon: A theoretical or assumed price or cost per ton of carbon emissions.
It is used by entities or enterprises to better understand the potential impact
of external carbon pricing on the profitability of a project, or an investmentNationally
Determined Contributions (NDCs): Measures that Parties to the Paris Agreement
undertake in the areas of mitigation and adaptation, taking into account the different
circumstances and capabilities of countries.Long-term Low Greenhouse Gas Emission
Development Strategies: A national strategy for development and reduction of long-term
emissions, as a voluntary requirement for all Parties, under the Paris Agreement.'
- 'Document: The Registrar or inspector shall, after the inspection of thebooks
of account or an inquiry under section 206 and other books and papers of the company
undersection 207, submit a report in writing to the Central Government along with
such documents, if any, andsuch report may, if necessary, include a recommendation
that further investigation into the affairs of thecompany is necessary giving
his reasons in support.'
- 'Document: Whoever, being a public servant, corruptly or maliciously makes or
pronounces in any stage of a judicialproceeding, any report, order, verdict, or
decision which he knows to be contrary to law, shall bepunished with imprisonment
of either description for a term which may extend to seven years, or withfine,
or with both.'
- source_sentence: 'Represent this sentence for searching relevant passages: According
to **Section 13(2)(a) of the Central Goods and Services Tax Act, 2017**, what
is the time of supply of services if the invoice is issued within the prescribed
period?'
sentences:
- "Document: (1) The liability to pay tax on services shall arise at the time ofsupply,\
\ as determined in accordance with the provisions of this section.(2) The time\
\ of supply of services shall be the earliest of the following dates, namely:--(a)\
\ the date of issue of invoice by the supplier, if the invoice is issued within\
\ the periodprescribed under 1*** sub-section (2) of section 31 or the date of\
\ receipt of payment, whichever is earlier; or(b) the date of provision of service,\
\ if the invoice is not issued within the period prescribed under sub-section\
\ (2) of 1*** section 31 or the date of receipt of payment, whichever is earlier;\
\ or(c) the date on which the recipient shows the receipt of services in his books\
\ of account, in a case where the provisions of clause (a) or clause (b) do not\
\ apply:Provided that where the supplier of taxable service receives an amount\
\ up to one thousand rupees inexcess of the amount indicated in the tax invoice,\
\ the time of supply to the extent of such excess amountshall, at the option of\
\ the said supplier, be the date of issue of invoice relating to such excess amount.Explanation.--For\
\ the purposes of clauses (a) and (b)--(i) the supply shall be deemed to have\
\ been made to the extent it is covered by the invoice or, asthe case may be,\
\ the payment;(ii) \"the date of receipt of payment\" shall be the date on which\
\ the payment is entered in thebooks of account of the supplier or the date on\
\ which the payment is credited to his bank account,whichever is earlier.(3) In\
\ case of supplies in respect of which tax is paid or liable to be paid on reverse\
\ charge basis, thetime of supply shall be the earlier of the following dates,\
\ namely:--(a) the date of payment as entered in the books of account of the recipient\
\ or the date on which thepayment is debited in his bank account, whichever is\
\ earlier; or(b) the date immediately following sixty days from the date of issue\
\ of invoice or any otherdocument, by whatever name called, in lieu thereof by\
\ the supplier:Provided that where it is not possible to determine the time of\
\ supply under clause (a) or clause (b),the time of supply shall be the date of\
\ entry in the books of account of the recipient of supply:Provided further that\
\ in case of supply by associated enterprises, where the supplier of service islocated\
\ outside India, the time of supply shall be the date of entry in the books of\
\ account of the recipientof supply or the date of payment, whichever is earlier.(4)\
\ In case of supply of vouchers by a supplier, the time of supply shall be--(a)\
\ the date of issue of voucher, if the supply is identifiable at that point; or(b)\
\ the date of redemption of voucher, in all other cases.(5) Where it is not possible\
\ to determine the time of supply under the provisions of sub-section (2) orsub-section\
\ (3) or sub-section (4), the time of supply shall--(a) in a case where a periodical\
\ return has to be filed, be the date on which such return is to befiled; or(b)\
\ in any other case, be the date on which the tax is paid.(6) The time of supply\
\ to the extent it relates to an addition in the value of supply by way of interest,late\
\ fee or penalty for delayed payment of any consideration shall be the date on\
\ which the supplierreceives such addition in value.\t\t\t\t\t\t\t\t\t1. The words,\
\ brackets and figure “sub-section (2) of” omitted by Act 31 of 2018, s. 7 (w.e.f.\
\ 1-2-2019)."
- "Document: Article (26) Partnership Projects Guide Content \
\ \
\ The Partnership Projects Guide shall specify the detailed provisions\
\ regulating partnership projects, in particular:\r1. Governance and procedures\
\ for offering partnership projects, including project proposals, value-for-money\
\ assessments, market studies, project structuring and management, bidding procedures,\
\ mechanisms for requesting clarifications, conferences, and criteria for offering\
\ projects.\r2. Special requirements related to the content of the bidding documents\
\ and the project agreement.\r3. Special requirements related to any dates and\
\ time frames that shall be followed for the bidding procedures.\r4. Special requirements\
\ related to the criteria for selecting a partner and evaluating bids, as well\
\ as the qualifications required for the project team.\r5. The rules governing\
\ requesting the best and last offer and negotiating with a potential partner.\r\
6. Requirements for disclosure and publishing the basic information related to\
\ presenting partnership projects on websites and the media.\r7. The provisions\
\ regulating the management of contracts and the supervision of partnership projects\
\ during the implementation phase and the procedures for managing and implementing\
\ the project agreement, including the provisions for making payments and the\
\ Powers Matrix determined by the federal agency.\r8. The rules governing the\
\ change of control, the composition of partners, or the partner structure.\r\
9. The rules governing force majeure and exceptional circumstances that may occur\
\ during project implementation.\r10. The rules governing the termination of the\
\ project agreement and the compensation mechanism upon termination.\r11. The\
\ rules governing the arrangement of mortgages on assets related to the project\
\ and any agreement that may grant the financing parties the right to replace\
\ the partner in implementing the project or to control or acquire it.\r12. Any\
\ other provisions that the law has regulated in the Partnership Projects Guide\
\ or that the Cabinet decided to add."
- "Document: Article (51) Prohibitions \
\ Public Welfare\
\ Associations and their Members may not do the following:\r1. Practice any Public\
\ Welfare activity other than those stipulated in its By-laws.\r2. Practice any\
\ political or partisan activity, collecting information, interfering in politics\
\ or matters affecting the security of the State and its law of government, or\
\ using its Office for that purpose, or provoking sectarian, racial, or religious\
\ disputes.\r3. Affiliate, join, participate in, or deal with any illegal Associations\
\ or entities, or any natural or Legal Person belonging to it, whether inside\
\ or outside the State, or financing or providing support to them in any way.\r\
4. Deal with, financing, or providing support to any illegal Association, terrorist\
\ Association, or entity, or any natural or Legal Person belonging to any of them.\r\
5. Form secret societies, companies, or formations of a secret, military, or paramilitary\
\ nature, or calling for favouring, supporting, or financing violence or terrorist\
\ organisations.\r6. Practice activities that would disturb public order, public\
\ morals, Emirati customs and traditions, or threaten the national security of\
\ the State.\r7. Call for discrimination between citizens or residents of the\
\ State on the basis of gender, origin, colour, language, religion or belief,\
\ or any activity that calls for racism, incitement to hatred, or other reasons\
\ that are contrary to the Constitution and the legislation in force in the State.\r\
8. Participate in supporting or financing the electoral campaigns of any candidate\
\ in elections and referendums, or presenting a candidate in those elections on\
\ behalf of the Association.\r9. Grant any professional or applied certificates\
\ without authorisation from the Competent Authorities in the State, or without\
\ an official partnership with one of the specialised universities or the Competent\
\ Authorities, and in accordance with the rules regulating this in the State.\r\
10. Practice any Public Welfare Activities outside the spatial scope of the licence\
\ issued to him by the Competent Authority.\r11. Practice any Activities that\
\ require a licence or approval from a governmental entity, before obtaining a\
\ licence or approval from that entity and the Competent Authority.\r12. Aim to\
\ make a profit for the Members of a Public Welfare Association, or engaging in\
\ an activity aimed at that, or distributing the Funds of a Public Welfare Association\
\ to its Members, employees, or those responsible for its management.\r13. Conduct\
\ opinion polls, publishing or making their results available, or conducting field\
\ research or presenting their results, without obtaining prior approval from\
\ the Ministry and the relevant authorities in the State.\r14. Conclude agreement\
\ in any form with a foreign party outside the State before the Ministry approval,\
\ as well as any amendment to it.\r15. Deal in any way with embassies, consulates\
\ and diplomatic missions without obtaining permission from the Competent Authority,\
\ and without the approval of the Ministry of Foreign Affairs in accordance with\
\ the procedures followed in this regard.\r16. Open branches or Offices outside\
\ the State.\r17. Interfere in the work of any State or Local Government Authority.\r\
18. Represent any individual or group before the Court in any lawsuits related\
\ to the interests of these individuals or groups.\r19. Raise and disseminate\
\ information that urges non-respect for the Constitution, laws and legislation\
\ in force in the State, non-respect for judicial rulings, or prevention of their\
\ implementation.\r20. Publish information, news, or propaganda that would prejudice\
\ public order or harm the public interest, public security, or public morals.\r\
21. Hold courses, workshops, Meetings or seminars, whether inside or outside the\
\ State, that would harm public order, harm the public interest or public security,\
\ or harm public morals.\r22. Work in any way under political cover.\r23. Any\
\ other prohibitions in implementation of the legislation in force in the State.\r\
24. Any other prohibitions determined by the Competent Authority, pursuant to\
\ the Resolutions issued by it in this regard."
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.017826825127334467
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.2606112054329372
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.49575551782682514
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7156196943972836
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.017826825127334467
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.08687040181097905
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09915110356536504
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07156196943972837
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.017826825127334467
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2606112054329372
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.49575551782682514
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7156196943972836
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.32016226781648804
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.19823112889751218
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.21081337448975018
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.015280135823429542
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.2606112054329372
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4940577249575552
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7113752122241087
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.015280135823429542
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.08687040181097905
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09881154499151104
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07113752122241086
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.015280135823429542
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.2606112054329372
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.4940577249575552
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7113752122241087
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.31720238942668855
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.1956585010914379
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2081944514790379
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
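The module stack above amounts to CLS-token pooling over the BERT encoder followed by L2 normalisation. For reference, a minimal sketch of the equivalent computation with plain `transformers` (it is assumed, as for most Sentence Transformers checkpoints, that the encoder weights can be loaded directly from this repository with `AutoModel`):
```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "Tejasw1/votum-acts-v1" # same repository as in the usage example below
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo)

texts = ["Represent this sentence for searching relevant passages: time of supply of services"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
output = encoder(**batch)

# CLS-token pooling (pooling_mode_cls_token=True) followed by the Normalize module.
embeddings = output.last_hidden_state[:, 0]
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)
print(embeddings.shape) # torch.Size([1, 768])
```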
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Tejasw1/votum-acts-v1")
# Run inference
sentences = [
'Represent this sentence for searching relevant passages: According to **Section 13(2)(a) of the Central Goods and Services Tax Act, 2017**, what is the time of supply of services if the invoice is issued within the prescribed period?',
'Document: (1) The liability to pay tax on services shall arise at the time ofsupply, as determined in accordance with the provisions of this section.(2) The time of supply of services shall be the earliest of the following dates, namely:--(a) the date of issue of invoice by the supplier, if the invoice is issued within the periodprescribed under 1*** sub-section (2) of section 31 or the date of receipt of payment, whichever is earlier; or(b) the date of provision of service, if the invoice is not issued within the period prescribed under sub-section (2) of 1*** section 31 or the date of receipt of payment, whichever is earlier; or(c) the date on which the recipient shows the receipt of services in his books of account, in a case where the provisions of clause (a) or clause (b) do not apply:Provided that where the supplier of taxable service receives an amount up to one thousand rupees inexcess of the amount indicated in the tax invoice, the time of supply to the extent of such excess amountshall, at the option of the said supplier, be the date of issue of invoice relating to such excess amount.Explanation.--For the purposes of clauses (a) and (b)--(i) the supply shall be deemed to have been made to the extent it is covered by the invoice or, asthe case may be, the payment;(ii) "the date of receipt of payment" shall be the date on which the payment is entered in thebooks of account of the supplier or the date on which the payment is credited to his bank account,whichever is earlier.(3) In case of supplies in respect of which tax is paid or liable to be paid on reverse charge basis, thetime of supply shall be the earlier of the following dates, namely:--(a) the date of payment as entered in the books of account of the recipient or the date on which thepayment is debited in his bank account, whichever is earlier; or(b) the date immediately following sixty days from the date of issue of invoice or any otherdocument, by whatever name called, in lieu thereof by the supplier:Provided that where it is not possible to determine the time of supply under clause (a) or clause (b),the time of supply shall be the date of entry in the books of account of the recipient of supply:Provided further that in case of supply by associated enterprises, where the supplier of service islocated outside India, the time of supply shall be the date of entry in the books of account of the recipientof supply or the date of payment, whichever is earlier.(4) In case of supply of vouchers by a supplier, the time of supply shall be--(a) the date of issue of voucher, if the supply is identifiable at that point; or(b) the date of redemption of voucher, in all other cases.(5) Where it is not possible to determine the time of supply under the provisions of sub-section (2) orsub-section (3) or sub-section (4), the time of supply shall--(a) in a case where a periodical return has to be filed, be the date on which such return is to befiled; or(b) in any other case, be the date on which the tax is paid.(6) The time of supply to the extent it relates to an addition in the value of supply by way of interest,late fee or penalty for delayed payment of any consideration shall be the date on which the supplierreceives such addition in value.\t\t\t\t\t\t\t\t\t1. The words, brackets and figure “sub-section (2) of” omitted by Act 31 of 2018, s. 7 (w.e.f. 1-2-2019).',
    'Document: Article (51) Prohibitions Public Welfare Associations and their Members may not do the following:\r1. Practice any Public Welfare activity other than those stipulated in its By-laws.\r2. Practice any political or partisan activity, collecting information, interfering in politics or matters affecting the security of the State and its law of government, or using its Office for that purpose, or provoking sectarian, racial, or religious disputes.\r3. Affiliate, join, participate in, or deal with any illegal Associations or entities, or any natural or Legal Person belonging to it, whether inside or outside the State, or financing or providing support to them in any way.\r4. Deal with, financing, or providing support to any illegal Association, terrorist Association, or entity, or any natural or Legal Person belonging to any of them.\r5. Form secret societies, companies, or formations of a secret, military, or paramilitary nature, or calling for favouring, supporting, or financing violence or terrorist organisations.\r6. Practice activities that would disturb public order, public morals, Emirati customs and traditions, or threaten the national security of the State.\r7. Call for discrimination between citizens or residents of the State on the basis of gender, origin, colour, language, religion or belief, or any activity that calls for racism, incitement to hatred, or other reasons that are contrary to the Constitution and the legislation in force in the State.\r8. Participate in supporting or financing the electoral campaigns of any candidate in elections and referendums, or presenting a candidate in those elections on behalf of the Association.\r9. Grant any professional or applied certificates without authorisation from the Competent Authorities in the State, or without an official partnership with one of the specialised universities or the Competent Authorities, and in accordance with the rules regulating this in the State.\r10. Practice any Public Welfare Activities outside the spatial scope of the licence issued to him by the Competent Authority.\r11. Practice any Activities that require a licence or approval from a governmental entity, before obtaining a licence or approval from that entity and the Competent Authority.\r12. Aim to make a profit for the Members of a Public Welfare Association, or engaging in an activity aimed at that, or distributing the Funds of a Public Welfare Association to its Members, employees, or those responsible for its management.\r13. Conduct opinion polls, publishing or making their results available, or conducting field research or presenting their results, without obtaining prior approval from the Ministry and the relevant authorities in the State.\r14. Conclude agreement in any form with a foreign party outside the State before the Ministry approval, as well as any amendment to it.\r15. Deal in any way with embassies, consulates and diplomatic missions without obtaining permission from the Competent Authority, and without the approval of the Ministry of Foreign Affairs in accordance with the procedures followed in this regard.\r16. Open branches or Offices outside the State.\r17. Interfere in the work of any State or Local Government Authority.\r18. Represent any individual or group before the Court in any lawsuits related to the interests of these individuals or groups.\r19. Raise and disseminate information that urges non-respect for the Constitution, laws and legislation in force in the State, non-respect for judicial rulings, or prevention of their implementation.\r20. Publish information, news, or propaganda that would prejudice public order or harm the public interest, public security, or public morals.\r21. Hold courses, workshops, Meetings or seminars, whether inside or outside the State, that would harm public order, harm the public interest or public security, or harm public morals.\r22. Work in any way under political cover.\r23. Any other prohibitions in implementation of the legislation in force in the State.\r24. Any other prohibitions determined by the Competent Authority, pursuant to the Resolutions issued by it in this regard.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768` and `dim_512` (the same retrieval evaluation data scored at 768- and 512-dimensional embedding truncations)
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 |
|:--------------------|:-----------|:-----------|
| cosine_accuracy@1 | 0.0178 | 0.0153 |
| cosine_accuracy@3 | 0.2606 | 0.2606 |
| cosine_accuracy@5 | 0.4958 | 0.4941 |
| cosine_accuracy@10 | 0.7156 | 0.7114 |
| cosine_precision@1 | 0.0178 | 0.0153 |
| cosine_precision@3 | 0.0869 | 0.0869 |
| cosine_precision@5 | 0.0992 | 0.0988 |
| cosine_precision@10 | 0.0716 | 0.0711 |
| cosine_recall@1 | 0.0178 | 0.0153 |
| cosine_recall@3 | 0.2606 | 0.2606 |
| cosine_recall@5 | 0.4958 | 0.4941 |
| cosine_recall@10 | 0.7156 | 0.7114 |
| **cosine_ndcg@10** | **0.3202** | **0.3172** |
| cosine_mrr@10 | 0.1982 | 0.1957 |
| cosine_map@100 | 0.2108 | 0.2082 |
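The two configurations differ only in the dimensionality to which embeddings are truncated before scoring. A minimal sketch of reproducing this evaluation is shown below; the query, corpus and relevance dictionaries are hypothetical placeholders, with the real ones coming from the held-out portion of the json dataset:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator, SequentialEvaluator

model = SentenceTransformer("Tejasw1/votum-acts-v1")

# Placeholder evaluation data: ids mapped to texts, plus query -> relevant document ids.
queries = {"q1": "Represent this sentence for searching relevant passages: time of supply of services"}
corpus = {"d1": "Document: (1) The liability to pay tax on services shall arise at the time of supply..."}
relevant_docs = {"q1": {"d1"}}

evaluators = [
InformationRetrievalEvaluator(queries, corpus, relevant_docs, name=f"dim_{dim}", truncate_dim=dim)
for dim in (768, 512)
]
results = SequentialEvaluator(evaluators)(model)
print(results["dim_768_cosine_ndcg@10"], results["dim_512_cosine_ndcg@10"])
```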
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 22,370 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 41.92 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 264.59 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Represent this sentence for searching relevant passages: Under Section 52(1)(d) of the Bharatiya Sakshya Adhiniyam, 2023, are courts required to take judicial notice of the seals of all courts and tribunals?</code> | <code>Document: (1) The Court shall take judicial notice of thefollowing facts, namely:--(a) all laws in force in the territory of India including laws having extra-territorial operation;(b) international treaty, agreement or convention with country or countries by India, or decisionsmade by India at international associations or other bodies;(c) the course of proceeding of the Constituent Assembly of India, of Parliament of India and ofthe State Legislatures;(d) the seals of all Courts and Tribunals;(e) the seals of Courts of Admiralty and Maritime Jurisdiction, Notaries Public, and all sealswhich any person is authorised to use by the Constitution, or by an Act of Parliament or StateLegislatures, or Regulations having the force of law in India;(f) the accession to office, names, titles, functions, and signatures of the persons filling for thetime being any public office in any State, if the fact of their appointment to such office is notified inany Official Gazette;(g) the existence, title...</code> |
| <code>Represent this sentence for searching relevant passages: Is it permissible for a bankruptcy trustee to appoint the bankrupt to supervise the management of the estate, carry on his business, or assist in administering the estate under the Insolvency and Bankruptcy Code, 2016, Section 153?</code> | <code>Document: The bankruptcy trustee for the purposes of thisChapter may after procuring the approval of the committee of creditors,— (a) carry on any business of the bankrupt as far as may be necessary for winding it upbeneficially;(b) bring, institute or defend any legal action or proceedings relating to the property comprised inthe estate of the bankrupt;(c) accept as consideration for the sale of any property a sum of money due at a future timesubject to certain stipulations such as security;(d) mortgage or pledge any property for the purpose of raising money for the payment of thedebts of the bankrupt;(e) where any right, option or other power forms part of the estate of the bankrupt, makepayments or incur liabilities with a view to obtaining, for the benefit of the creditors, any propertywhich is the subject of such right, option or power;(f) refer to arbitration or compromise on such terms as may be agreed, any debts subsisting orsupposed to subsist between the bankrupt and any pers...</code> |
| <code>Represent this sentence for searching relevant passages: What insurance requirements are imposed on Federal Agencies occupying Union Owned Properties under Article (23) of the Federal Decree Concerning the Union Owned Properties?</code> | <code>Document: Article (23) Obligations of the Federal Authorities that occupy any of the Union Owned Properties 1. In addition to the obligations stipulated herein, every Federal Agency that occupies, manages, or supervises the management of any of the Union Owned Properties shall comply, as follows: a. Provide a report showing the legal and surveying status of that property, estimating its value, and indicating its architectural and constructional condition, along with attaching its construction plan and any data or any facts, documents or papers related in any way to the sources of its ownership or occupancy, within a period not exceeding (6) six months from the effective date herein. His authority shall provide the Ministry with a copy of this report immediately upon completion of its preparation, and it shall renew this data and provide the Ministry with a copy of it whenever necess...</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
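That is, the in-batch-negatives ranking loss is applied to embeddings truncated to 768 and 512 dimensions with equal weight. A minimal sketch of constructing this loss with the base checkpoint named above:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Wrap the in-batch-negatives loss so it is applied at 768 and 512 dimensions with equal weight.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
model,
inner_loss,
matryoshka_dims=[768, 512],
matryoshka_weights=[1, 1],
n_dims_per_step=-1, # use every listed dimension at each training step
)
```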
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `prompts`: {'anchor': 'Represent this sentence for searching relevant passages: ', 'positive': 'Document: '}
- `batch_sampler`: no_duplicates
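Taken together with the loss above, these settings correspond roughly to the following trainer configuration. This is a sketch rather than the exact training script: the output directory, data file name and train/eval split are assumptions, `save_strategy` is set to match `eval_strategy` so the best checkpoint can be restored, and `model`, `loss` and `evaluators` are reused from the sketches earlier in this card.
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.evaluation import SequentialEvaluator
from sentence_transformers.training_args import BatchSamplers

# Assumed local file with "anchor" and "positive" columns (22,370 pairs), split for evaluation.
dataset = load_dataset("json", data_files="votum_acts_pairs.json", split="train").train_test_split(test_size=0.1, seed=42)

args = SentenceTransformerTrainingArguments(
output_dir="bge-base-votum-acts", # assumed output path
num_train_epochs=4,
per_device_train_batch_size=8,
gradient_accumulation_steps=8,
learning_rate=2e-5,
lr_scheduler_type="cosine",
warmup_ratio=0.1,
bf16=True,
tf32=True,
optim="adamw_torch_fused",
eval_strategy="epoch",
save_strategy="epoch", # assumed; load_best_model_at_end needs matching strategies
load_best_model_at_end=True,
batch_sampler=BatchSamplers.NO_DUPLICATES,
prompts={
"anchor": "Represent this sentence for searching relevant passages: ",
"positive": "Document: ",
},
)

trainer = SentenceTransformerTrainer(
model=model, # the SentenceTransformer from the loss sketch above
args=args,
train_dataset=dataset["train"],
eval_dataset=dataset["test"],
loss=loss, # the MatryoshkaLoss from the sketch above
evaluator=SequentialEvaluator(evaluators), # the per-dimension IR evaluators from the evaluation sketch
)
trainer.train()
```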
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: {'anchor': 'Represent this sentence for searching relevant passages: ', 'positive': 'Document: '}
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 |
|:----------:|:--------:|:-------------:|:----------------------:|:----------------------:|
| 0.0286 | 10 | 0.5889 | - | - |
| 0.0572 | 20 | 0.4858 | - | - |
| 0.0858 | 30 | 0.4432 | - | - |
| 0.1144 | 40 | 0.3437 | - | - |
| 0.1430 | 50 | 0.2103 | - | - |
| 0.1716 | 60 | 0.1903 | - | - |
| 0.2002 | 70 | 0.1414 | - | - |
| 0.2288 | 80 | 0.1627 | - | - |
| 0.2574 | 90 | 0.1609 | - | - |
| 0.2860 | 100 | 0.0968 | - | - |
| 0.3146 | 110 | 0.1367 | - | - |
| 0.3432 | 120 | 0.1228 | - | - |
| 0.3718 | 130 | 0.0891 | - | - |
| 0.4004 | 140 | 0.1116 | - | - |
| 0.4290 | 150 | 0.1173 | - | - |
| 0.4576 | 160 | 0.1162 | - | - |
| 0.4862 | 170 | 0.1124 | - | - |
| 0.5148 | 180 | 0.1014 | - | - |
| 0.5434 | 190 | 0.0767 | - | - |
| 0.5720 | 200 | 0.0745 | - | - |
| 0.6006 | 210 | 0.0691 | - | - |
| 0.6292 | 220 | 0.094 | - | - |
| 0.6578 | 230 | 0.0692 | - | - |
| 0.6864 | 240 | 0.0471 | - | - |
| 0.7151 | 250 | 0.0647 | - | - |
| 0.7437 | 260 | 0.077 | - | - |
| 0.7723 | 270 | 0.0551 | - | - |
| 0.8009 | 280 | 0.0538 | - | - |
| 0.8295 | 290 | 0.0863 | - | - |
| 0.8581 | 300 | 0.0698 | - | - |
| 0.8867 | 310 | 0.0599 | - | - |
| 0.9153 | 320 | 0.0494 | - | - |
| 0.9439 | 330 | 0.0746 | - | - |
| 0.9725 | 340 | 0.0544 | - | - |
| 0.9982 | 349 | - | 0.3143 | 0.3102 |
| 1.0021 | 350 | 0.06 | - | - |
| 1.0307 | 360 | 0.09 | - | - |
| 1.0593 | 370 | 0.0597 | - | - |
| 1.0880 | 380 | 0.0613 | - | - |
| 1.1166 | 390 | 0.0589 | - | - |
| 1.1452 | 400 | 0.0309 | - | - |
| 1.1738 | 410 | 0.0378 | - | - |
| 1.2024 | 420 | 0.0417 | - | - |
| 1.2310 | 430 | 0.0417 | - | - |
| 1.2596 | 440 | 0.0412 | - | - |
| 1.2882 | 450 | 0.0214 | - | - |
| 1.3168 | 460 | 0.0374 | - | - |
| 1.3454 | 470 | 0.0388 | - | - |
| 1.3740 | 480 | 0.0188 | - | - |
| 1.4026 | 490 | 0.0247 | - | - |
| 1.4312 | 500 | 0.0275 | - | - |
| 1.4598 | 510 | 0.0336 | - | - |
| 1.4884 | 520 | 0.017 | - | - |
| 1.5170 | 530 | 0.0234 | - | - |
| 1.5456 | 540 | 0.0163 | - | - |
| 1.5742 | 550 | 0.0193 | - | - |
| 1.6028 | 560 | 0.0209 | - | - |
| 1.6314 | 570 | 0.0252 | - | - |
| 1.6600 | 580 | 0.02 | - | - |
| 1.6886 | 590 | 0.0199 | - | - |
| 1.7172 | 600 | 0.0162 | - | - |
| 1.7458 | 610 | 0.0246 | - | - |
| 1.7744 | 620 | 0.0133 | - | - |
| 1.8030 | 630 | 0.017 | - | - |
| 1.8316 | 640 | 0.0241 | - | - |
| 1.8602 | 650 | 0.018 | - | - |
| 1.8888 | 660 | 0.0186 | - | - |
| 1.9174 | 670 | 0.0121 | - | - |
| 1.9460 | 680 | 0.0264 | - | - |
| 1.9746 | 690 | 0.0112 | - | - |
| 1.9975 | 698 | - | 0.3174 | 0.3161 |
| 2.0043 | 700 | 0.0159 | - | - |
| 2.0329 | 710 | 0.0295 | - | - |
| 2.0615 | 720 | 0.0197 | - | - |
| 2.0901 | 730 | 0.0252 | - | - |
| 2.1187 | 740 | 0.019 | - | - |
| 2.1473 | 750 | 0.0074 | - | - |
| 2.1759 | 760 | 0.0122 | - | - |
| 2.2045 | 770 | 0.0116 | - | - |
| 2.2331 | 780 | 0.0113 | - | - |
| 2.2617 | 790 | 0.0132 | - | - |
| 2.2903 | 800 | 0.0112 | - | - |
| 2.3189 | 810 | 0.0167 | - | - |
| 2.3475 | 820 | 0.0078 | - | - |
| 2.3761 | 830 | 0.0079 | - | - |
| 2.4047 | 840 | 0.0072 | - | - |
| 2.4333 | 850 | 0.008 | - | - |
| 2.4619 | 860 | 0.0135 | - | - |
| 2.4905 | 870 | 0.0087 | - | - |
| 2.5191 | 880 | 0.0066 | - | - |
| 2.5477 | 890 | 0.0052 | - | - |
| 2.5763 | 900 | 0.0077 | - | - |
| 2.6049 | 910 | 0.0084 | - | - |
| 2.6335 | 920 | 0.0096 | - | - |
| 2.6621 | 930 | 0.0067 | - | - |
| 2.6907 | 940 | 0.0072 | - | - |
| 2.7193 | 950 | 0.0061 | - | - |
| 2.7479 | 960 | 0.0132 | - | - |
| 2.7765 | 970 | 0.0061 | - | - |
| 2.8051 | 980 | 0.0058 | - | - |
| 2.8338 | 990 | 0.01 | - | - |
| 2.8624 | 1000 | 0.0084 | - | - |
| 2.8910 | 1010 | 0.0082 | - | - |
| 2.9196 | 1020 | 0.0055 | - | - |
| 2.9482 | 1030 | 0.0073 | - | - |
| 2.9768 | 1040 | 0.0074 | - | - |
| **2.9968** | **1047** | **-** | **0.323** | **0.3161** |
| 3.0064 | 1050 | 0.0086 | - | - |
| 3.0350 | 1060 | 0.0127 | - | - |
| 3.0636 | 1070 | 0.0083 | - | - |
| 3.0922 | 1080 | 0.0111 | - | - |
| 3.1208 | 1090 | 0.0091 | - | - |
| 3.1494 | 1100 | 0.0037 | - | - |
| 3.1780 | 1110 | 0.0074 | - | - |
| 3.2066 | 1120 | 0.005 | - | - |
| 3.2353 | 1130 | 0.006 | - | - |
| 3.2639 | 1140 | 0.0071 | - | - |
| 3.2925 | 1150 | 0.0062 | - | - |
| 3.3211 | 1160 | 0.008 | - | - |
| 3.3497 | 1170 | 0.0042 | - | - |
| 3.3783 | 1180 | 0.003 | - | - |
| 3.4069 | 1190 | 0.0049 | - | - |
| 3.4355 | 1200 | 0.004 | - | - |
| 3.4641 | 1210 | 0.0062 | - | - |
| 3.4927 | 1220 | 0.0056 | - | - |
| 3.5213 | 1230 | 0.0048 | - | - |
| 3.5499 | 1240 | 0.0034 | - | - |
| 3.5785 | 1250 | 0.0045 | - | - |
| 3.6071 | 1260 | 0.0041 | - | - |
| 3.6357 | 1270 | 0.0048 | - | - |
| 3.6643 | 1280 | 0.0045 | - | - |
| 3.6929 | 1290 | 0.0044 | - | - |
| 3.7215 | 1300 | 0.0047 | - | - |
| 3.7501 | 1310 | 0.0061 | - | - |
| 3.7787 | 1320 | 0.0037 | - | - |
| 3.8073 | 1330 | 0.0045 | - | - |
| 3.8359 | 1340 | 0.0068 | - | - |
| 3.8645 | 1350 | 0.0048 | - | - |
| 3.8931 | 1360 | 0.0056 | - | - |
| 3.9217 | 1370 | 0.0049 | - | - |
| 3.9503 | 1380 | 0.0055 | - | - |
| 3.9789 | 1390 | 0.004 | - | - |
| 3.9961 | 1396 | - | 0.3202 | 0.3172 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.3.1
- Transformers: 4.46.3
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] |
Nashhz/SBERT_KFOLD_User_Portfolio_to_Job_Descriptions | Nashhz | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:16682",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-23T12:15:34 | 2024-12-23T12:16:38 | 79 | 1 | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:16682
- loss:CosineSimilarityLoss
widget:
- source_sentence: Hello, I am Redoan Ahmad I'm a professional Graphic Designer who
finds great joy in creating assets that not only meet the expectations of my clients,
but exceed them and add to what has become a delightful portfolio of my work.
I am an expert in the field, and specialize in many different aspects of design
work, including but not limited to + Logos + Flyers + Brochures + Banners + Icons
+ Business card + Branding As you can see, I take on projects involving a plethora
of different visual assets. I use the Adobe Suite Programs to create and perfect
everything I make, both for my clients and on my own time, so I'm incredibly adept
at
sentences:
- I'm in search of a designer who can help craft a unique and engaging digital portfolio
for my company. The desired style of the portfolio is creative and artistic, so
I'm looking for someone who can think outside the box and design a portfolio that
truly stands out. Key components of the portfolio will include - Client testimonials
These will need to be presented in an appealing way that showcases our strong
relationships and positive feedback from our clients. - Project case studies I
want to highlight some of our best work. This will require a designer who can
help distill complex projects into easy-to-understand and visually appealing presentations.
Ideal candidates for this project should be experienced in creating digital portfolios
and have a strong design background. They should be able to demonstrate a flexible
and creative design approach, with a portfolio that reflects a 'creative and artistic'
style. Good communication skills are a must, as we will need to collaborate closely
to ensure the final product meets our expectations.
- I need a proficient developer who can replicate a Forex trading software for me.
The software needs to include - Real-time data feed The software should provide
up-to-the-minute information about the forex market. - Automated trading I want
the software to have a feature that allows for trading without human intervention,
based on pre-set parameters or algorithms. The final product needs to be compatible
with Windows. Ideal candidates for this project should have substantial experience
in creating or replicating trading software, particularly in the Forex sector.
Knowledge of real-time data processing and automated trading systems is crucial.
Please ensure your bid reflects your expertise in this field.
- I'm seeking a talented graphic designer to assist with a short project. The tasks
will include designing a logo, banners, and screenshots, as well as a favicon
for our website, app stores, and social media platforms.
- source_sentence: Hello I am a skilled graphic designer, my designs are creative
and based on modern strategies. The ones I create express the customer's brand
language and make multiple connections with the audience. I am interested in engineering
and through my work I try to meet customer requirements and expectations.. I am
an experienced graphic designer who loves to create modern and unique designs.
I specialize in personal calling and branding projects.!!
sentences:
- I'm seeking a talented graphic designer who can create engaging and visually appealing
designs for my marketing materials, specifically for flyers and business cards.
Ideally, the freelancer should have a keen understanding of design principles
and be able to create designs that will capture attention and convey my brand
message effectively. Skills and experience needed - Proficient in graphic design
software such as Adobe Illustrator, Photoshop, etc. - Creative and innovative
thinker - Strong understanding of design principles - Experience in designing
marketing materials - Excellent communication skills
- I'm looking for a skilled web application developer proficient in NodeJSTypescriptVue
3 to help me build an interactive web application. The main features of this project
would include - Utilizing the Vue 3 Framework Prior experience in Vue.js is a
must. Understanding of its core concepts and features is essential to deliver
a high-quality application. - Payment Gateway Integration The application will
require integration with a payment gateway such as Stripe or PayPal. Experience
with these platforms is highly desirable. - User Authentication Clerk - Flexible
Design The application should be able to accommodate future expansions or modifications,
so a flexible design and coding approach is key. The main technologies that application
will use are - NodeJSExpressTypescriptPrisma - Vue 3ShadCNTailwind CSS I have
a detailed specification which I will share with those selected to be shortlisted.
To be considered for this project 1. A brief summary of your experience in the
core technologies I want to use for the App. 2. Please provide links for any projects
which use Node JSExpressPrisma and Vue 3 If you have any further questions please
reach out.
- I'm in need of a talented graphic designer to create website graphics for my project.
This includes designing banner images, icons, and infographics. Ideal Skills -
Proficiency in graphic design software Adobe Illustrator, Photoshop, etc. - Strong
portfolio of website graphics - Experience with designing for social media and
ad campaigns Please note, the banner images will be used on the homepage, social
media, and ad campaigns. A deep understanding of how to create engaging and impactful
designs for these platforms is crucial.
- source_sentence: PHP Codeigniter Laravel Google Ads API - PHPPython Google AppsAds
Script Bing Ads API Twitter API TikTok API FB API Google APIs GitHub login to
view URL LinkedIn Profile login to view URL
sentences:
- I need a structural engineer to provide detailed engineering plans for a residential
building. Specific Requirements - Foundation plans - Framing plans - Roof structure
details Additionally, I need - Copies of the structural engineering details, including
piers and footings. - A reference site classification report with a copy of the
report provided. Ideal candidates should have - Extensive experience in structural
engineering for residential buildings. - Ability to interpret and work from existing
architectural plans. - Strong communication skills to provide necessary documentation
clearly.
- I'm looking for a talented web developer with a strong background in Shopify to
create a robust e-commerce website for selling electronics and gadgets. Key Requirements
- Expertise in Shopify You should have a deep understanding of the platform to
build an effective, secure and user-friendly online store. - E-commerce Development
Experience in creating e-commerce websites is essential. You will need to implement
features that facilitate seamless shopping experiences. - Understanding of Electronics
A knowledge of the electronics industry will be a plus, as it will help in designing
the website Please note, this project does not include the add-on features such
as product reviews, discount codes or customer account creation, but these may
be discussed further down the line.
- I'm looking for a professional with experience in WebSocket and Laravel to integrate
Twilio and login to view URL into my Laravel Blade website. The primary function
of Twilio will be enabling voice calls on the website. Key Tasks - Implement Twilio
for voice call functionality on the website. - Integrate login to view URL's Natural
Language Processing NLP capabilities into the site. Ideal Candidate - Proficient
in Laravel and Blade. - Extensive experience with Twilio and Vapi.ai. - Strong
knowledge of WebSocket. - Ability to implement NLP features effectively.
- source_sentence: I have 6-year experience as a Web Designer and WordPress Designer.
100+ completed projects. My Top Skills - HTML, CSS, Bootstrap 3 4 5 - Admin Dashboard
- Email Template within 2 to 3 hours - Web Design - HTML5, CSS3 Canvas, SVG -
PSD, FIGMA, ZEPLIN, XD, image, pdf to HTML, CSS Conversion - PSD, FIGMA, ZEPLIN,
XD, image, pdf to Bootstrap Conversion - Animation, Slider - Fix Tailwind CSS
- Photoshop intermediate - Adobe XD Mobile App any changes intermediate
sentences:
- I'm seeking a talented web developer with a keen eye for 3D design to revamp our
current website. The job involves a complete overhaul of the website's layout,
user interface, and 3D images. Key Requirements - Proficiency in 3D design You
should be adept at enhancing textures, improving lighting, and updating models
for a more engaging and visually striking website. - WordPress Expertise The new
design should be compatible with WordPress, so prior experience with this platform
is a must. Responsibilities - Redesign the website layout and user interface to
improve overall user experience. - Update all existing 3D images, enhancing them
with improved textures and lighting. - Ensure the website is fully functional
on the WordPress platform. Ideal Candidate - A creative thinker with a strong
background in both web development and 3D design. - Prior experience with WordPress
and a portfolio that showcases your skills in revamping websites. - Excellent
communication skills to ensure smooth collaboration and understanding of my vision
for the project. I'd love to hear from you if you're confident in your ability
to take on this project. Please include relevant samples of your past work in
your application. Experience with Fancy Product Designer for customisations must
be on time samples of what I want login to view URL login to view URL login to
view URL
- I'm looking for a skilled web developer experienced in web scraping to create
a web scraper for me. Key Requirements - The scraper should be able to extract
product prices from Amazon. Ideal Skills and Experience - Proficiency in Python
and libraries like BeautifulSoup and Scrapy. - Previous experience scraping data
from Amazon is a plus. - Strong understanding of web scraping ethics and legal
considerations. Please include in your proposal examples of similar projects you've
completed.
- I'm looking for an expert mobile app developer who can create a comprehensive
e-commerce app for both iOS and Android platforms. Key Features - User-friendly
interface - Secure payment gateway - Real-time inventory updates - Customer review
and rating system - Push notifications for sales and offers Ideal Skills - Proficiency
in cross-platform mobile app development - Experience in e-commerce app development
- Knowledge of UIUX design principles - Understanding of secure payment integration
- Familiarity with inventory management systems Your expertise will help me reach
my goal of launching a top-tier e-commerce app. Please provide your portfolio
showcasing similar projects you've completed in the past.
- source_sentence: I have 15+ years experiences with web development, machine learning
engineering and product development. I also have 5+ years experiences with team
management for developing new product and maintaining old products.
sentences:
- I'm starting a web development company and need a senior WordPress developer who
is proficient in PHP, JavaScript, HTML, and CSS. This role will require working
closely with my designer to customize websites. Key Responsibilities - Custom
theme development - Communicating with the Designer - Optimising websites for
performance - Ongoing website maintenance The ideal candidate should - Have expert-level
experience with custom theme development - Be eager to learn and adapt - Have
a solid track record with WordPress - Know the pain points of WordPress and how
to solve them - Benefit Experience with SEO Collaboration - We will be using TrelloWhatsappTeams
for project management and collaboration tasks. Your ability to work as part of
a team and communicate effectively will be crucial for our success. A passion
for web development and a desire to be part of a growing company will make this
a rewarding opportunity.
- Job Title Freelance Graphic Designer Monthly Deliverables Minimum 30 Creative
Designs Budget 10,000 Month Job Description We are seeking a Freelance Graphic
Designer to create high-quality and creative visuals for our projects monthly.
The ideal candidate will have experience designing a wide range of materials,
including images for digital platforms, brochures, banners, PDFs, and other print-ready
files. This remote freelance role is expected to deliver 30 designs per month.
If you're passionate about visual design and can consistently meet deadlines with
high-quality work, we'd love to hear from you! Key Responsibilities Create 30+
designs per month, including - Social media graphics - Flyers, brochures, and
pamphlets - PDF print files - Flex banners and large-scale designs Design for
multiple formats Digital websocial media and print brochures, banners, etc.. -
Collaborate with stakeholders to ensure designs align with the brand and project
goals. - Make revisions and adjustments based on feedback. - Prepare print-ready
files with accurate specifications. --- Required Skills - Proficiency in Adobe
Creative Suite Photoshop, Illustrator, InDesign or equivalent tools. - Strong
understanding of layout, typography, and color theory, - Experience in designing
for both digital and print mediums. - Knowledge of print specifications and formats
CMYK, DPI, bleed, etc.. - Ability to work independently and deliver within deadlines.
--- Preferred Qualifications - Prior experience as a freelance designer or working
in an agency setting. - Experience with branding projects - Strong portfolio showcasing
past work. --- Compensation - 10,000 per month for a minimum of 30 imagesdesigns
- Additional designs or complex projects may be compensated separately based on
agreement. --- How to Apply Interested candidates should submit their portfolios
and CVs this platform Please include samples of - Social media posts or marketing
graphics - Print designs like brochures or banners - Any other relevant design
work --- Additional Information - This is a remote freelance opportunity. - Payments
will be made monthly upon submission and approval of deliverables. - Long-term
collaboration opportunities available based on performance.
- Seeking a talented content writer to create engaging and SEO-friendly articles
across diverse markets. The candidate should possess strong expertise in producing
content that not only resonates with readers but also performs well in search
engine rankings. Please submit samples of your past work where you have successfully
balanced keyword integration with compelling content.
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
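As a rough sketch of what these three modules do, the same embedding can be reproduced with plain `transformers`: encode with the BERT backbone, mean-pool the token embeddings using the attention mask, then L2-normalize. This is illustrative only (shown here with the base checkpoint) and is not needed when using the `sentence-transformers` API below.
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Module (0): BERT encoder with max_seq_length 256
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
bert = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

def encode(texts):
    batch = tokenizer(texts, padding=True, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = bert(**batch).last_hidden_state        # (batch, seq_len, 384)
    # Module (1): mean pooling over non-padding tokens
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
    # Module (2): L2 normalization, so a dot product equals cosine similarity
    return F.normalize(pooled, p=2, dim=1)

print(encode(["WordPress developer portfolio"]).shape)  # torch.Size([1, 384])
```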
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Nashhz/SBERT_KFOLD_User_Portfolio_to_Job_Descriptions")
# Run inference
sentences = [
'I have 15+ years experiences with web development, machine learning engineering and product development. I also have 5+ years experiences with team management for developing new product and maintaining old products.',
"I'm starting a web development company and need a senior WordPress developer who is proficient in PHP, JavaScript, HTML, and CSS. This role will require working closely with my designer to customize websites. Key Responsibilities - Custom theme development - Communicating with the Designer - Optimising websites for performance - Ongoing website maintenance The ideal candidate should - Have expert-level experience with custom theme development - Be eager to learn and adapt - Have a solid track record with WordPress - Know the pain points of WordPress and how to solve them - Benefit Experience with SEO Collaboration - We will be using TrelloWhatsappTeams for project management and collaboration tasks. Your ability to work as part of a team and communicate effectively will be crucial for our success. A passion for web development and a desire to be part of a growing company will make this a rewarding opportunity.",
"Job Title Freelance Graphic Designer Monthly Deliverables Minimum 30 Creative Designs Budget 10,000 Month Job Description We are seeking a Freelance Graphic Designer to create high-quality and creative visuals for our projects monthly. The ideal candidate will have experience designing a wide range of materials, including images for digital platforms, brochures, banners, PDFs, and other print-ready files. This remote freelance role is expected to deliver 30 designs per month. If you're passionate about visual design and can consistently meet deadlines with high-quality work, we'd love to hear from you! Key Responsibilities Create 30+ designs per month, including - Social media graphics - Flyers, brochures, and pamphlets - PDF print files - Flex banners and large-scale designs Design for multiple formats Digital websocial media and print brochures, banners, etc.. - Collaborate with stakeholders to ensure designs align with the brand and project goals. - Make revisions and adjustments based on feedback. - Prepare print-ready files with accurate specifications. --- Required Skills - Proficiency in Adobe Creative Suite Photoshop, Illustrator, InDesign or equivalent tools. - Strong understanding of layout, typography, and color theory, - Experience in designing for both digital and print mediums. - Knowledge of print specifications and formats CMYK, DPI, bleed, etc.. - Ability to work independently and deliver within deadlines. --- Preferred Qualifications - Prior experience as a freelance designer or working in an agency setting. - Experience with branding projects - Strong portfolio showcasing past work. --- Compensation - 10,000 per month for a minimum of 30 imagesdesigns - Additional designs or complex projects may be compensated separately based on agreement. --- How to Apply Interested candidates should submit their portfolios and CVs this platform Please include samples of - Social media posts or marketing graphics - Print designs like brochures or banners - Any other relevant design work --- Additional Information - This is a remote freelance opportunity. - Payments will be made monthly upon submission and approval of deliverables. - Long-term collaboration opportunities available based on performance.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
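Since the training pairs match freelancer portfolios to job descriptions, a natural downstream use is ranking job posts against a portfolio. A minimal sketch (the portfolio and job texts below are placeholders):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Nashhz/SBERT_KFOLD_User_Portfolio_to_Job_Descriptions")

portfolio = "6 years of experience as a Web Designer and WordPress developer, 100+ completed projects."
jobs = [
    "Looking for a WordPress expert to redesign our company website.",
    "Need a Python developer to build a web scraper for Amazon product prices.",
    "Seeking a mobile app developer for a cross-platform e-commerce app.",
]

portfolio_emb = model.encode(portfolio, convert_to_tensor=True)
job_embs = model.encode(jobs, convert_to_tensor=True)

# Cosine similarity between the portfolio and every job description, highest first
scores = util.cos_sim(portfolio_emb, job_embs)[0]
for score, job in sorted(zip(scores.tolist(), jobs), reverse=True):
    print(f"{score:.3f}  {job}")
```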
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 16,682 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 160.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 163.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.27</li><li>mean: 0.72</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------|
| <code>Amazon eBay Tiktok Shop Amazon Services Amazon Seller Central Management A to Z Store Management A to Z Inventory Management Winning Product Sourcing Product Listing with SEO Listing With Variations Listing Optimization Title, Bullet Points & Description Optimization Images Optimization Product Launching FBA Shipment Creation more Amazon eBay Tiktok Shop Amazon Services Amazon Seller Central Management A to Z Store Management A to Z Inventory Management Winning Product Sourcing Product Listing with SEO Listing With Variations Listing Optimization Title, Bullet Points & Description Optimization Images Optimization Product Launching FBA Shipment Creation Sales Generation Dropshipping Store Design A+ Content Creation Amazon PPC Campaigns Brand Registry Trademark Registration Customer Services Management eBay Services eBay Store Management A to Z A to Z eBay Dropshipping Services Winning Products Sourcing Products listing with SEO Products listing With Variations Listings Optimization Title , Bullet Point & Description Optimization Images Optimization Keywords Optimization Sales Boost Products Ranking Hot selling product with 30 to 50 profit Competitor Analysis Orders Fulfillment Customer Services Management eBay Account Defect Removal Tax Exemption Management Setting Up Promotions Listing Templates Creation Tiktok Shop Services TikTok Shop Account Setup Product Listing Listing Optimization Keyword Research Product Hunting Competitor Analysis Campaign Management Influencer Collaboration TikTok Live Shopping Order Management Promotion Management TikTok Ads for Shop Content Creation for Shop Sales Analytics & Reporting Problem Solving & Issue Resolution Ongoing Shop Optimization</code> | <code>I'm seeking a skilled professional to assist with a variety of tasks including selling products from Amazon UAE to eBay UK via dropshipping, product sourcing, and full virtual assistance. Key Responsibilities - Product Searching & Listing Identify profitable products, create and optimize listings, and conduct market trend analysis. - SEO Management Oversee the search engine optimization for our listed products. - Selling & Listing Management List products on Amazon, eBay, and our website, while managing sales. Ideal Candidate - Previous dropshipping experience, particularly between Amazon and eBay, is a plus. - Strong skills in SEO, product sourcing, and virtual assistance. - Excellent understanding of market trends and product profitability. - Able to create and optimize product listings for maximum visibility and sales. This is a full-time position which requires dedication and a proactive approach. Please only apply if you have the necessary skills and experience.</code> | <code>0.7151671051979065</code> |
| <code>We are a group of young, energetic, creative & professional website developer, graphic designer and IT-Administrator who are devoted to implement your requirement with modern technology. Website Design - Development-Modification - Wordpress - Ecommerce - DynamicCustomized site Development Graphic Design - logo design - Brochure - Flyer - Leaflet - PDF Profile - Catalog - Greetings Card - PackageLabel Design - Business Card - Image RetouchEnhancementEditingManipulation IT-Admin Virtual Assistant - Product Listing - Site Content Management - Product Image Enhance - Data Processing - PDF conversion to WordExcel - Web Research - Data Scraping Why Choose Us o Quality Support for everyday 365 days even after project completion o We understand your requirements precisely to deliver Creative designs o 100 client satisfaction guaranteed</code> | <code>We are looking for a skilled and dedicated full-time web developer to join our team. The ideal candidate should have extensive experience working with WordPress, Divi, and Elementor, as well as the ability to create custom WordPress themes. Key Responsibilities Develop, maintain, and optimize WordPress websites. Customize and configure Divi and Elementor page builders to meet client needs. Create custom WordPress themes from scratch, ensuring they are optimized for performance and usability. Troubleshoot and resolve any website issues as they arise. Ensure websites are responsive and work seamlessly across all devices. Collaborate with our design and content teams to bring creative ideas to life. Stay up to date with the latest web development trends and best practices. Requirements Proven experience with WordPress, including custom theme development. Proficiency in Divi and Elementor page builders. Strong understanding of HTML, CSS, JavaScript, and PHP. Experience in responsive design and cross-browser compatibility. Ability to work independently and meet deadlines. Strong problem-solving skills and attention to detail. Excellent communication skills in English. Preferred Qualifications Experience with WooCommerce or other WordPress plugins. Familiarity with SEO best practices. Knowledge of version control systems like Git. If you are passionate about web development and want to be part of a growing team, we'd love to hear from you! Please submit your portfolio and CV for consideration.</code> | <code>0.7487468719482422</code> |
| <code>Hi there, I'm Priyanshu Agarwal I'm a Python expert with a diverse skillset that includes web scraping, Zoho and Tally Prime accounting, automation, and Python application building. With my strong foundation in Python, I can build and automate applications that meet your business needs, saving you time and resources. As a web scraping expert, I specialize in using Python, Selenium, BeautifulSoup4, and Python Requests to extract data from websites and web applications. I have experience in projects of varying scales, from small-scale data collection to large-scale data mining for enterprise-level clients. In addition to my technical expertise in web scraping, I have a strong background in accounting software such as Zoho and Tally Prime. I have experience in managing financial data, generating reports, and automating financial processes using these tools. I understand the importance of accurate and timely financial data in business decision-making, and I strive to ensure that my clients' financial data is organized, up-to-date, and easily accessible. With my experience in automation and Python application building, I can create custom solutions to</code> | <code>I'm in need of a data scraping expert to assist in gathering market research data from various retail websites. The ideal freelancer for this project should have a robust experience with Python and Java, as well as proficiency in Odoo and Airtable. Experience in building microservices would be a significant advantage. Key Responsibilities - Scraping data from designated retail websites for market research purposes - Organizing and managing the gathered data in Airtable - Potential development of microservices for data handling, 8n8 Skills and Experience Required - Extensive experience in data scraping, particularly from retail websites - Proficiency in Python and Java - Experience with Odoo and Airtable - Prior experience in building microservices - Understanding of market research techniques and requirements</code> | <code>0.747043251991272</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
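A minimal sketch of how samples with this shape (two texts plus a similarity label) can be fed to `CosineSimilarityLoss` with the Sentence Transformers trainer. The texts and labels below are placeholders, and the hyperparameters mirror the non-default values listed in the next section.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder rows with the same columns as the training dataset above
train_dataset = Dataset.from_dict({
    "sentence_0": ["Amazon store management and product listing ...", "Python web scraping expert ..."],
    "sentence_1": ["Dropshipping assistant needed for Amazon/eBay ...", "Scraper for market research data ..."],
    "label": [0.72, 0.75],
})

# CosineSimilarityLoss regresses cos(u, v) onto the label with an MSE objective
loss = CosineSimilarityLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="sbert-portfolio-job",
    num_train_epochs=4,
    per_device_train_batch_size=16,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```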
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.4794 | 500 | 0.001 |
| 0.9588 | 1000 | 0.0004 |
| 1.4382 | 1500 | 0.0003 |
| 1.9175 | 2000 | 0.0003 |
| 2.3969 | 2500 | 0.0003 |
| 2.8763 | 3000 | 0.0002 |
| 3.3557 | 3500 | 0.0002 |
| 3.8351 | 4000 | 0.0002 |
| 0.4794 | 500 | 0.0003 |
| 0.9588 | 1000 | 0.0003 |
| 1.4382 | 1500 | 0.0003 |
| 1.9175 | 2000 | 0.0003 |
| 2.3969 | 2500 | 0.0002 |
| 2.8763 | 3000 | 0.0002 |
| 3.3557 | 3500 | 0.0002 |
| 3.8351 | 4000 | 0.0002 |
| 0.4794 | 500 | 0.0002 |
| 0.9588 | 1000 | 0.0003 |
| 1.4382 | 1500 | 0.0003 |
| 1.9175 | 2000 | 0.0002 |
| 2.3969 | 2500 | 0.0002 |
| 2.8763 | 3000 | 0.0002 |
| 3.3557 | 3500 | 0.0002 |
| 3.8351 | 4000 | 0.0001 |
| 0.4794 | 500 | 0.0002 |
| 0.9588 | 1000 | 0.0002 |
| 1.4382 | 1500 | 0.0003 |
| 1.9175 | 2000 | 0.0002 |
| 2.3969 | 2500 | 0.0002 |
| 2.8763 | 3000 | 0.0002 |
| 3.3557 | 3500 | 0.0001 |
| 3.8351 | 4000 | 0.0001 |
| 0.4794 | 500 | 0.0002 |
| 0.9588 | 1000 | 0.0002 |
| 1.4382 | 1500 | 0.0002 |
| 1.9175 | 2000 | 0.0002 |
| 2.3969 | 2500 | 0.0002 |
| 2.8763 | 3000 | 0.0001 |
| 3.3557 | 3500 | 0.0001 |
| 3.8351 | 4000 | 0.0001 |
### Framework Versions
- Python: 3.12.6
- Sentence Transformers: 3.2.0
- Transformers: 4.45.2
- PyTorch: 2.4.1+cpu
- Accelerate: 1.0.1
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CRAFT"
] |
EIRTHAIMED/Llama-3.1-EIRAI-8B | EIRTHAIMED | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"text-generation-inference",
"llama-3.1",
"finetuning",
"conversational",
"th",
"en",
"arxiv:2409.08523",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-09-09T03:47:29 | 2024-09-16T10:09:33 | 78 | 7 | ---
base_model: meta-llama/Meta-Llama-3.1-8B
language:
- th
- en
library_name: transformers
license: llama3.1
tags:
- medical
- text-generation-inference
- llama-3.1
- finetuning
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66bf1cd096583c59b024a3c5/oG16EyLMfyiqvXrbNPGZd.png" alt="Logo_Website" width="400"/>
</p>
# **Thai Medical Large Language Model**
**Github** : [Github Evaluate](https://github.com/EIRAI-Thaimedical/EIRAI)<br>
**Paper** : <br>
## **Llama-3.1-EIRAI-8B-instruct**
**Llama-3.1-EIRAI-8B-instruct** is an **8-billion parameter model** developed specifically for **Thai medical applications**, with expertise in both **Thai medical language** and **English medical terminology**. The model has demonstrated its capabilities on key benchmarks such as **MMLU**, **MedQA**, **PubMedQA**, and **MedMCQA**, as well as Thai language assessments like **ThaiExam**, **M3Exam**, **XNLI**, and **XCOPA**. Additionally, we have created a **Clinically Adapted Model Enhanced test** in the **Thai language** to support **clinical use in hospitals** and to further improve the performance of **Thai medical Retrieval-Augmented Generation (RAG)**.
## Notice
While **Eir AI Thai Medical LLM** is designed to encode high-quality medical knowledge, it is **not yet optimized for safe, practical use** in real-world medical settings. The model is still in the research phase and should **not be used for clinical decision-making** without further validation, including randomized controlled trials. It is available for researchers to explore the potential of LLMs in medical contexts, but **real-world deployment is not recommended** in its current version.
## Safety and Future Work
The current version of **Eir AI Thai Medical LLM** is under active development. We advise against using it for medical applications until further testing is completed. Our goal is to continue enhancing the model through **rigorous testing** and **real-world evaluation**, ensuring that it can be safely integrated into healthcare systems in the future.
## Model Overview
- **Model Architecture:** Meta-Llama-3.1-8B-Instruct
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
### Evaluations
| Medical Model | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | PubMedQA | MedMCQA | Avg. |
|--------------------------|---------------------|---------------------|--------------------|--------------------|--------------------|--------------------|-------------------|-------------------|-------------------|-------------------|
| **GPT-3.5 Turbo 1106** | 74.7 | 60.2 | 65.9 | 72.0 | 64.73 | 64.73 | 57.71 | 72.66 | 66.0 | 66.6 |
|Thai LLMs | | | | | | | | | | |
| **Eir AI-8B** | 75.1 | 80.0 | 69.6 | 76.8 | 77.1 | 66.5 | 64.5 | **79.0** | 58.6 | 71.9 |
| **Eir AI-8B + Prob** | **83.8** | **89.0** | **83.0** | **84.9** | **89.6** | **75.7** | **69.6** | 78.8 | **67.1** | **80.2** |
| **Typhoon-v1.5x-8B** | 75.9 | 79.0 | 63.7 | 70.6 | 77.1 | 63.6 | 59.7 | 74.4 | 58.0 | 69.1 |
| **OpenThaiGPT-beta-7B** | 37.4 | 38.0 | 4.5 | 32.7 | 36.1 | 32.4 | 32.4 | 62.0 | 31.8 | 34.1 |
## Translation Performance Metrics
| **Model** | **BLEU Score** | **N-gram Precisions (%)** | **BP** | **Ratio** |
|-------------------------------|----------------|---------------------------------|---------|-----------|
| Typhoon-v1.5x-8B-Instruct | 34.42 | 71.3/50.6/38.6/29.6 | 0.764 | 0.788 |
| Meta Llama 3.1-8B Instruct | 35.74 | 62.8/42.3/31.7/24.1 | 0.946 | 0.948 |
| **Eir AI-8B** | **61.10** | **76.1/64.6/56.6/50.1** | **1.000**| **1.006** |
| Eir AI-8B-prob | 47.91 | 74.0/58.0/48.2/40.6 | 0.890 | 0.896 |
## Clinically Adapted Thai Medical Task Performance
| Task | GPT-3.5 | Typhoon-v1.5x-8B-instruct | GPT-4o | Eir AI-8B |
|----------------------------------------|---------|----------------------------|--------|-----------|
| Named Entity Recognition | 3.26 | 5.55 | 6.34 | **7.08** |
| Temporal Information Extraction | 3.83 | 5.46 | 6.15 | **7.05** |
| Paraphrasing | 2.36 | 4.68 | 6.35 | **7.06** |
| Natural Language Generation | 2.63 | 4.87 | 6.91 | **7.66** |
| Keyword Extraction | 2.60 | 5.15 | 7.01 | **7.35** |
| Text Classification | 2.92 | 6.21 | 5.36 | **6.75** |
| Relation Extraction | 3.29 | 5.94 | 4.37 | **6.92** |
| Question Answering | 3.70 | 4.92 | 6.11 | **6.82** |
| Text Summarization | 2.98 | 5.44 | **7.51**| **7.51** |
| Abbreviation Expansion | 3.99 | 5.96 | 6.24 | **7.82** |
| Clinical Concept Normalization | 2.67 | 5.63 | 5.82 | **6.55** |
| Open-ended Question | 3.32 | 5.55 | 6.77 | **7.27** |
| Multiple-Choice Question | 3.90 | 5.00 | 5.40 | **6.40** |
| Coreference Resolution | 3.48 | 4.55 | 4.88 | **6.43** |
| Yes/No Question | 2.71 | 5.86 | 4.86 | **7.38** |
| Medical Translation | 3.00 | 4.00 | **7.79**| 7.65 |
| Medical Thai Extraction | 2.81 | 7.16 | **8.62**| 8.16 |
| Medical ICD Prediction | 2.08 | 3.16 | **8.12**| 6.41 |
| **Average Score** | 3.05 | 5.33 | 6.38 | **7.11** |
# Prompt Template
This model uses the Llama 3.1 chat prompt template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
````
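As a rough illustration only (the helper below is hypothetical and simply mirrors the template shown above), a prompt can be assembled by filling the two slots; in practice, `tokenizer.apply_chat_template` in the usage example further down produces this format automatically.
```python
# Hypothetical helper that fills the template shown above
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )

print(build_prompt("You are a helpful Thai medical assistant.",
                   "What are the common symptoms of influenza?"))
```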
# Example Clinical Adapted ICD 10 Prediction
````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are responsible for accurately assigning ICD-10 codes and to diagnose and document medical records.
Your expertise ensures that healthcare providers are properly reimbursed and that patient care is well-documented.
In this scenario, you will be presented with a series of medical records and your task is to provide the correct ICD-10 code(s)
and ICD-9 CM in procedures based on the information provided.
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
"Chief Complaint :5วันก่อนมารพ.มีไข้ ไอ มีเสมหะ มีน้ำมูก เหนื่อย ปวดเมื่อยตามตัว \r\n
Present illness : 5วันก่อนมารพ.มีไข้ ไอ มีเสมหะ มีน้ำมูก เหนื่อย ปวดเมื่อยตามตัว มีน้ำมูก เลือดกำเดาจาากข้างขวา
ปฏิการกระทบกระแทก ไม่มีเจ็บคอ ไม่มีอาการอ่อนเพลีย มีอาการอ่อนเพลีย ไอมาก ไอตลอด มีอาการระคายคอ ปัสสาวะปกติ ไม่มีถ่ายเหลว
\r\n\r\nAllergy : |\r\n\r\nOther : no underlying disease\r\n\r\nPlan Treatment Day 1 of hospitalization : admit ward
\r\n\r\nReview of System { \r\n\r\n General :a thai adult female ,look sickness fatigue dry lip moderate dehydration
\r\n Skin :no MP rash \r\n Eyes :not pale ,no icteric sclera \r\n Chest :secretion sound in both lung ,no crepitation , no wheezing \r
\n }
VitalSign First : {\n
BP : 117.0/63.0 mmHg\n
Pulse : 62.0 BPm\n
Temperature : 37.0 Celsius\n
Respiratory rate : 20.0\n
Weight : 50.000 kgs.\n
Height : 165.0 cm.\n
Painscore: N/A\n
O2SAT : 100\n}\n
Lab Results: \n
Electrolyte:Sodium (Na), Result : 143 mmol/L\r\n
Electrolyte:Potassium (K),Result : 3.8 mmol/L\r\n
Electrolyte:Chloride (Cl), Result : 108 mmol/L\r\n
Electrolyte:Bicarbonate (CO2),Result : 27.0 mmol/L\r\n
Creatinine (Serum):Creatinine, Result : 0.69 mg/dL\r\n
Creatinine (Serum):eGFR,Result : 100.41 ml/min/1.73 m^2\r\n
AST/SGOT:AST/SGOT, Result : 48 U/L\r\n
ALT/SGPT:ALT/SGPT, Result : 42 U/L\r\n
CBC:WBC Count,Result : 3.2 10^3/uL\r\n
CBC:RBC Count, Result : 3.57 10^6/uL\r\n
CBC:Hemoglobin (Hb), Result : 10.7 g/dL\r\n
CBC:Hematocrit (HCT),Result : 32.4 %\r\n
CBC:MCV, Result : 91 fL\r\n
CBC:MCH, Result : 30.0 pg\r\n
CBC:MCHC, Result : 33.0 g/dL\r\n
CBC:RDW-CV,Result : 12.9 %\r\n
CBC:Platelet Count, Result : 226 10^3/uL\r\n
CBC:Platelet Estimates, Result : Adequate \r\n
CBC:Neutrophils,Result : 41 %\r\n
CBC:Lymphocytes, Result : 46 %\r\n
CBC:Monocytes, Result : 9 %\r\n
CBC:Eosinophils, Result : 4 %\r\n
CBC:Basophils,Result : 0 %\r\n
COVID-19 Antigen (Rapid Test) For Admit:COVID-19 Antigen (Rapid Test), Result : Positive \r\n
Influenza A/B screening:Flu A,Result : Negative \r\n
Influenza A/B screening:Flu B, Result : Negative \n
X-ray Results: 15-05-2567 12:56 CHEST (PA, UPRIGHT)\r\n
\r\nHISTORY: Fever with productive cough and dyspnea for 5 days. \r\nCOMPARISON: None.\r\n
\r\n
FINDINGS:\r\n
TUBES/LINES: None.\r\nLUNGS: Normal lung volumes. Band-like opacity at right apical lung. Clear left lung.\r\n
PLEURA: No pneumothorax or effusion. Mild bilateral apical pleural thickening. \r\n
HEART AND MEDIASTINUM: Normal cardiac silhouette. Normal mediastinal contour.\r\n
BONY STRUCTURES: Mild scoliosis of the thoracic spine. \r\n\r\n
IMPRESSION:\r\n
- Band-like opacity at right apical lung, probably plate atelectasis or fibrosis\r\n- Mild bilateral apical pleural thickening"
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
Let's analyze the provided information and document the appropriate ICD-10 codes based on the patient's symptoms and diagnosis.
### Step-by-Step Analysis:
1. **Understand the Symptoms and Diagnosis**:
- The patient presents with a range of symptoms including fever, cough with sputum, fatigue, nasal discharge (including blood),
and dyspnea. The lab tests returned a positive COVID-19 antigen test, while the influenza test was negative.
2. **Identify Key Terms**:
- Primary key terms include 'COVID-19 positive', 'fever', 'productive cough', 'dyspnea', and pulmonary findings from the X-ray.
3. **Locate the Main Category**:
- The main categories for the conditions noted are:
- COVID-19: U07.1
- Possible acute bronchitis: J20 (based on the productive cough)
- Other symptoms such as dizziness: R42
- Evidence of acute respiratory infection: J06.9
4. **Identify Specific Codes**:
- Based on the diagnosis:
- COVID-19 confirmed case: U07.1
- Acute bronchitis: J20.9 (considering unspecified acute bronchitis)
- Dizziness (if noted as significant and problematic): R42
- Unspecified acute upper respiratory infection: J06.9
5. **Check for Additional Codes or Modifiers**:
- Considering the details provided, the primary diagnosis is the confirmed COVID-19 infection, while bronchiolitis
and dizziness might be secondary diagnoses.
6. **Validate the Code Selection**:
- Confirm that the selected codes reflect the patient's symptoms accurately, including acute bronchitis
and any other significant symptoms presented.
7. **Document the ICD-10 Code**:
- The ICD-10 codes are documented for all noted conditions accordingly."
```json
Answer: {
"Primary_Diagnosis": {
"Code": "U071",
"Description": "COVID-19, virus identified"
},
"Secondary_Diagnosis": [
{
"Code": "J20",
"Description": "Acute bronchitis due to unspecified organism"
},
{
"Code": "R42",
"Description": "Dizziness and giddiness"
},
{
"Code": "J069",
"Description": "Acute upper respiratory infection, unspecified"
}
],
"ICD9_Procedure": []
}
````
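Because the assistant ends its answer with a JSON object (as in the example above), downstream systems may want to extract and parse that object. A minimal sketch, assuming the generated text keeps the `Answer: { ... }` format shown here; `model_output` is a placeholder for the generated string.
```python
import json

def extract_icd_answer(generated_text: str) -> dict:
    """Parse the JSON object that follows 'Answer:' in the model output."""
    start = generated_text.index("Answer:")
    brace = generated_text.index("{", start)
    obj, _ = json.JSONDecoder().raw_decode(generated_text[brace:])
    return obj

codes = extract_icd_answer(model_output)   # model_output: the generated string (placeholder)
print(codes["Primary_Diagnosis"]["Code"])  # -> "U071" for the example above
print([d["Code"] for d in codes["Secondary_Diagnosis"]])
```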
# Example Clinical Adapted Thai Medical Extraction
````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Task : Extract input the following patient information into output format Tone: the following medical text into
Thai in a fluent and elegant style.
Output Format.1.Age: \n2.Gender: \n3.Weight :\n4.Height : \n5.Chief Complaint: \n6.Symptoms and Signs: \n7.Medical History: \n
8.Current Medications: \n9.Laboratory Results: \n10.Imaging Findings: \n11.Allergy: \n12.Drug Allergy:
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
ผู้ป่วยของเราเป็นชายถนัดทั้งสองมือ อายุ 43 ปี มีประวัติการชักที่ไม่สามารถควบคุมได้มาเป็นเวลา 20 ปี ลักษณะการชักของเขามักจะรวมถึงการรู้สึกร้อนวูบวาบและอาการทางประสาทสัมผัสอื่น ๆ
ที่พัฒนาไปสู่การเคลื่อนไหวของกล้ามเนื้อที่มีจุดศูนย์กลางส่วนใหญ่ทางด้านขวา การตรวจหาสาเหตุของการชักรวมถึงการถ่ายภาพด้วยคลื่นแม่เหล็กไฟฟ้า (MRI) ซึ่งเผยให้เห็นเนื้องอกไขมันขนาดใหญ่ที่เส้นกลางสมอง
การพัฒนาไม่สมบูรณ์ของคอร์ปัสคาโลซัมบางส่วน และรอยโรคที่อยู่ใกล้เคียงในสมองส่วนหน้าซ้ายที่คาดว่าจะเป็นเนื้องอกกลีอาล (glial neoplasm) ตามลักษณะภาพถ่ายทางรังสี
รอยโรคในสมองส่วนหน้าซ้ายด้านหน้าและตรงกลางประกอบด้วยการกลายเป็นหินปูนแบบเป็นก้อนพร้อมการเพิ่มขึ้นของสัญญาณ FLAIR ที่กว้างขวางซึ่งเกี่ยวข้องกับไจรัสซิงกูเลตทั้งสองข้างและสมองส่วนหน้าซ้าย
(รูปที่ ).\n\nการจัดการทางการแพทย์ล้มเหลวในการควบคุมการชักของเขาและเขาถูกส่งต่อเพื่อหาทางเลือกในการรักษาด้วยการผ่าตัด รอยโรคที่เพิ่มขึ้นถูกสังเกตด้วยการถ่ายภาพเพิ่มเติมและขอบเขตของอาการบวมน้ำก็เพิ่มขึ้นด้วย
ความกังวลเกี่ยวกับการพัฒนาเนื้องอกกลีอาลที่เพิ่มขึ้นและการควบคุมการชักที่ไม่ดีทำให้มีการแนะนำให้ทำการผ่าตัด
การตัดสินใจถูกทำขึ้นเพื่อดำเนินการผ่าตัดนำทางด้วยระบบประสาทเพื่อตัดมวลที่เพิ่มขึ้นในสมองส่วนหน้าซ้ายและการตัดสมองส่วนหน้าบางส่วนโดยใช้การตรวจคลื่นไฟฟ้าสมองระหว่างการผ่าตัด
(intraoperative electroencephalogram - EEG), การทำแผนที่คอร์ติคอล (cortical mapping) และการตรวจวัดศักย์ไฟฟ้าที่เกิดจากการกระตุ้นประสาทรับความรู้สึก
(somatosensory evoked potentials - SSEP)\n\nตัวอย่างที่ส่งไปตรวจทางพยาธิวิทยาแบบแช่แข็งในระหว่างการผ่าตัดพบว่ามีเส้นใยโรเซนธาล (Rosenthal fibers)
และการกลายเป็นหินปูนแบบเป็นจุดซึ่งคาดว่าจะเป็นเนื้องอกกลีอาล การประเมินทางพยาธิวิทยาแบบถาวรเผยให้เห็นเนื้องอกไขมัน (lipoma) และความผิดปกติของคอร์ติคอลแบบเฉพาะจุด
(focal cortical dysplasia) แบบ Palmini Type IA ในสมองที่อยู่ใกล้เคียง ความผิดปกติเล็กน้อยของโครงสร้างคอร์ติคอลและการเกิดกลีโอซิส (gliosis)
ในเนื้อสมองขาวที่เกี่ยวข้องสามารถเห็นได้ในคราบสีฮีมาโทซิลินและอีโอซิน (hematoxylin and eosin - H&E) และคราบสีโปรตีนกรดกลีอาลไฟบริลลารี (glial fibrillary acidic protein - GFAP)
และการย้อมสีโปรตีนเส้นประสาท (neurofilament protein - NFP) ในบริเวณที่เกิดกลีโอซิสไม่แสดงหลักฐานของเซลล์ประสาทที่ผิดรูป เซลล์ประสาทขนาดใหญ่ หรือเซลล์ลูกโป่ง (รูปที่ ).\n\n
การถ่ายภาพหลังการผ่าตัด (รูปที่ ) แสดงให้เห็นการตัดรอยโรคที่เพิ่มขึ้นใกล้เคียงทั้งหมดในสมองส่วนหน้าซ้ายและไจรัสซิงกูเลต การลดลงอย่างมากของอาการบวมน้ำที่เกิดจากหลอดเลือดถูกสังเกตในภาพ
FLAIR เนื้องอกไขมันที่เส้นกลางสมองไม่เปลี่ยนแปลง\n\nการฟื้นตัวหลังการตัดรวมถึงอาการเสริมในบริเวณมอเตอร์ซินโดรมระดับปานกลางรวมถึงอาการอ่อนแรงครึ่งซีก การพูดล่าช้า และการเปลี่ยนแปลงทางอารมณ์
การเปลี่ยนแปลงเหล่านี้ค่อยๆ หายไปและพบว่าผู้ป่วยมีความสมบูรณ์ทางระบบประสาทในการติดตามผลที่คลินิก การควบคุมการชักเกิดขึ้นทันทีหลังการตัด ยาต้านการชักถูกใช้อย่างต่อเนื่องและค่อยๆ
ถูกถอดออกในเดือนต่อมาโดยไม่มีการกลับมาเป็นซ้ำ ผู้ป่วยไม่มีการชักเป็นเวลา 3 ปีหลังการตัด การถ่ายภาพติดตามผล 18 เดือนพบการเปลี่ยนแปลง T2/FLAIR ที่คงที่พร้อมกับบริเวณเล็ก ๆ
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
1. อายุ: 43 ปี\n
2. เพศ: ชาย\n
3. น้ำหนัก: ไม่ระบุ\n
4. ส่วนสูง: ไม่ระบุ\n
5. อาการหลัก: การชักที่ไม่สามารถควบคุมได้\n
6. อาการและสัญญาณ: รู้สึกร้อนวูบวาบ, อาการทางประสาทสัมผัส, การเคลื่อนไหวของกล้ามเนื้อที่มีจุดศูนย์กลางส่วนใหญ่ทางด้านขวา\n
7. ประวัติการรักษา: การจัดการทางการแพทย์ล้มเหลวในการควบคุมการชัก\n
8. ยาที่ใช้ในปัจจุบัน: ยาต้านการชัก\n9. ผลการตรวจทางห้องปฏิบัติการ: ไม่ระบุ\n
10. ผลการตรวจภาพ: MRI แสดงเนื้องอกไขมันขนาดใหญ่ที่เส้นกลางสมอง, การพัฒนาไม่สมบูรณ์ของคอร์ปัสคาโลซัมบางส่วน, รอยโรคที่อยู่ใกล้เคียงในสมองส่วนหน้าซ้ายที่คาดว่าจะเป็นเนื้องอกกลีอาล\n
11. อาการแพ้: ไม่ระบุ\n
12. อาการแพ้ยา: ไม่ระบุ\n
การผ่าตัดนำทางด้วยระบบประสาทเพื่อตัดมวลที่เพิ่มขึ้นในสมองส่วนหน้าซ้ายและการตัดสมองส่วนหน้าบางส่วนได้ถูกดำเนินการ
และผลการตรวจทางพยาธิวิทยาเผยให้เห็นว่าเป็นเนื้องอกไขมันและความผิดปกติของคอร์ติคอลแบบเฉพาะจุด (focal cortical dysplasia) แบบ Palmini Type IA
ในสมองที่อยู่ใกล้เคียง การฟื้นตัวหลังการผ่าตัดรวมถึงอาการเสริมในบริเวณมอเตอร์ซินโดรมระดับปานกลาง แต่การควบคุมการชักเกิดขึ้นทันทีหลังการผ่าตัดและผู้ป่วยไม่มีการชักเป็นเวลา
3 ปีหลังการผ่าตัด การถ่ายภาพติดตามผล 18 เดือนพบการเปลี่ยนแปลง T2/FLAIR ที่คงที่พร้อมกับบริเวณเล็ก ๆ ที่เพิ่มขึ้นของการเพิ่มขึ้นใกล้เคียงที่ไม่เปลี่ยนแปลง.
````
# How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
model_id = "EIRTHAIMED/Llama-3.1-EIRAI-8B"
nf4_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
# Load the base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
# quantization_config=nf4_config, # uncomment this line for 4 bit loading
device_map="auto",
attn_implementation="flash_attention_2"
)
messages = [
{"role": "system", "content": "You are an expert medical assistant named EIR , developed by EIR Thai Medical LLM. You are to be a helpful, respectful, and honest assistant."},
{"role": "user", "content": "การใช้ clinical tracer มีบทบาทอย่างไรในการพัฒนาคุณภาพการดูแลผู้ป่วย?"}
]
# Avoid shadowing Python's built-in `input`
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)  # stream tokens as they are generated
_ = model.generate(input_ids, streamer = text_streamer, max_new_tokens = 1500, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
```
```
@article{EirAI,
title={Eir: Thai Medical Large Language Models},
author={Yutthakorn Thiprak and Rungtam Ngodngamthaweesuk and Songtam Ngodngamtaweesuk, MD},
year={2024},
journal={arXiv preprint arXiv:2409.08523},
url={https://arxiv.org/abs/2409.08523}
}
```
---
**Thank you very much**
Asst. Prof. Dr. Ekapol Chuangsuwanich and Praj Bhargava, Research Engineer at Meta, for your valuable endorsement of our preprint paper on arXiv.
**Thank you**
Draft Reviewer Report
[Kullawat Chaowanawatee](https://www.computing.psu.ac.th/profile/index.php?staffid=coc0051) and [Dr. Jakapan Suaboot](https://www.computing.psu.ac.th/profile/index.php?staffid=coc0056) from Prince of Songkla University, Phuket Campus
<br>
Draft Industry Reviewer Report
[Mr. Piyawat Maneenual](https://ieeexplore.ieee.org/author/37086452350) ,Assistant IT Manager ,Thonburi Rajyindee Hospital<br>
| [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"TEXT_CLASSIFICATION",
"COREFERENCE_RESOLUTION",
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION",
"PARAPHRASING"
] | [
"MEDQA",
"PUBMEDQA"
] |
Sci-fi-vy/Meditron-7b-finetuned | Sci-fi-vy | image-text-to-text | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"image-text-to-text",
"en",
"dataset:epfl-llm/guidelines",
"arxiv:2311.16079",
"base_model:meta-llama/Llama-2-7b",
"base_model:finetune:meta-llama/Llama-2-7b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-22T14:32:25 | 2025-01-25T11:11:08 | 78 | 1 | ---
base_model: meta-llama/Llama-2-7b
datasets:
- epfl-llm/guidelines
language:
- en
library_name: transformers
license: llama2
metrics:
- accuracy
- perplexity
pipeline_tag: image-text-to-text
---
# Model Card for Meditron-7B-finetuned
Meditron is a suite of open-source medical Large Language Models (LLMs).
Meditron-7B is a 7 billion parameters model adapted to the medical domain from Llama-2-7B through continued pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, a [new dataset](https://huggingface.co/datasets/epfl-llm/guidelines) of internationally-recognized medical guidelines, and general domain data from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
Meditron-7B-finetuned is finetuned on relevant training data and outperforms Llama-2-7B and PMC-Llama on multiple medical reasoning tasks.
<details open>
<summary><strong>Advisory Notice</strong></summary>
<blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;">
While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints.
We recommend against deploying Meditron in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings.
</blockquote>
</details>
## Model Details
- **Finetuned by:** [Vignesh](https://huggingface.co/Sci-fi-vy)
- **Developed by:** [EPFL LLM Team](https://huggingface.co/epfl-llm)
- **Model type:** Causal decoder-only transformer language model
- **Language(s):** English (mainly)
- **Model License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Code License:** [APACHE 2.0 LICENSE](LICENSE)
- **Continue-pretrained from model:** [Llama-2-7B](https://huggingface.co/meta-llama/Llama-2-7b)
- **Context length:** 2K tokens
- **Input:** Text-only data
- **Output:** Model generates text only
- **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
- **Knowledge Cutoff:** August 2023
### Model Sources
- **Repository:** [epflLLM/meditron](https://github.com/epfLLM/meditron)
- **Trainer:** [epflLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM)
- **Reference Paper:** *[MediTron-70B: Scaling Medical Pretraining for Large Language Models](https://arxiv.org/abs/2311.16079)*
## Uses
Meditron-7B-finetuned is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and enhance access to an LLM for healthcare use. Potential use cases may include but are not limited to:
- Medical exam question answering
- Supporting differential diagnosis
- Disease information (symptoms, cause, treatment) query
- General health information query
- Personalized results
### Direct Use
It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities.
It should not be used directly for production or work that may impact people.
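A minimal sketch of such experimentation with the standard `transformers` text-generation pipeline (the prompt and generation settings below are placeholders, not a recommended clinical configuration):
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Sci-fi-vy/Meditron-7b-finetuned",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Question: What are the common symptoms of iron-deficiency anemia?\nAnswer:"
output = generator(prompt, max_new_tokens=128, do_sample=False)
print(output[0]["generated_text"])
```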
### Downstream Use
Meditron-70B and Meditron-7B are both foundation models without finetuning or instruction-tuning. They can be finetuned, instruction-tuned, or RLHF-tuned for specific downstream tasks and applications.
There are two ways we have used this model for downstream question-answering tasks.
1. We apply in-context learning with k demonstrations (3 or 5 in our paper) added to the prompt.
2. We finetuned the models for downstream question-answering tasks using specific training sets.
We encourage and look forward to the adaptation of the base model for more diverse applications.
If you want a more interactive way to prompt the model, we recommend using a high-throughput and memory-efficient inference engine with a UI that supports chat and text generation.
You can check out our deployment [guide](https://github.com/epfLLM/meditron/blob/main/deployment/README.md), where we used [FastChat](https://github.com/lm-sys/FastChat) with [vLLM](https://github.com/vllm-project/vllm). We collected generations for our qualitative analysis through an interactive UI platform, [BetterChatGPT](https://github.com/ztjhz/BetterChatGPT). Here is the prompt format we used as an example:
<img width=70% src="prompt_example.png" alt="qualitative-analysis-prompt" title="Qualitative Analysis Prompt">
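For reference, a minimal sketch of offline batched inference with vLLM's Python API (illustrative only; this is not the FastChat deployment described in the guide above, and the prompt and sampling settings are placeholders):
```python
from vllm import LLM, SamplingParams

llm = LLM(model="Sci-fi-vy/Meditron-7b-finetuned", dtype="bfloat16")
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

prompts = ["What is the first-line management of uncomplicated community-acquired pneumonia?"]
for request_output in llm.generate(prompts, sampling):
    print(request_output.outputs[0].text)
```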
### Out-of-Scope Use
We do not recommend using this model for natural language generation in a production environment, finetuned or otherwise.
## Truthfulness, Helpfulness, Risk, and Bias
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
We did an initial assessment of Meditron models' **Truthfulness** against baseline models and consumer-level medical models.
We use TruthfulQA (multiple choice) as the main evaluation benchmark.
We only focus on the categories that are relevant to the medical domain, including Health, Nutrition, Psychology, and Science.
For 7B models, we perform one-shot evaluations for consistent answer generation.
For 70B models, the evaluations are under the zero-shot setting.
Below, we report the detailed truthfulness performance of each category.
| Category | meditron-70b | llama-2-70b | med42-70b* | meditron-7b | llama-2-7b | PMC-llama-7b |
| --- | ------ |----- |----- |----- |----- |----- |
| Health | 81.8 | 69.1 | 83.6 | 27.3 | 16.4 | 3.6 |
| Nutrition | 77.9 | 68.8 | 62.5 | 31.1 | 12.5 | 6.3 |
| Psychology | 47.4 | 36.8 | 52.6 | 21.1 | 10.5 | 0.0 |
| Science | 77.8 | 44.4 | 33.3 | 33.3 | 11.1 | 0.0 |
| Avg | 71.2 | 54.8 | 58.0 | 28.3 | 12.6 | 2.5 |
For a more detailed performance analysis, please see our paper.
Significant research is still required to fully explore potential bias, fairness, and safety issues with this language model.
Please recognize that our evaluation of Meditron-7B's helpfulness, risk, and bias is highly limited.
Thus, as we noted in the safety notice, we strongly advise against any deployment in medical applications without a further alignment process and rigorous evaluation!
### Recommendations
**IMPORTANT!**
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
While this model is capable of generating natural language text, we have only begun to explore this capability and its limitations.
Understanding these limitations is especially important in a domain like medicine.
Therefore, we strongly recommend against using this model in production for natural language generation or for professional purposes related to health and medicine.
## Training Details
### Training Data
Meditron's domain-adaptive pre-training corpus GAP-Replay combines 48.1B tokens from four corpora:
- [**Clinical Guidelines**](https://huggingface.co/datasets/epfl-llm/guidelines): a new dataset of 46K internationally-recognized clinical practice guidelines from various healthcare-related sources, including hospitals and international organizations.
- **Medical Paper Abstracts**: 16.1M abstracts extracted from closed-access PubMed and PubMed Central papers.
- **Medical Papers**: full-text articles extracted from 5M publicly available PubMed and PubMed Central papers.
- **Replay Data**: 400M tokens of general domain pretraining data sampled from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
<img width=75% src="gap-replay.png" alt="Alt text">
#### Data Preprocessing
Please see the detailed preprocessing procedure in our paper.
### Training Procedure
We used the [Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) distributed training library, a derivative of Nvidia's Megatron LM project, to optimize training efficiency.
Hardware consists of 1 node of 8x NVIDIA A100 (80GB) SXM GPUs connected by NVLink and NVSwitch with a single Nvidia ConnectX-6 DX network card and equipped with 2 x AMD EPYC 7543 32-Core Processors and 512 GB of RAM.
Our three-way parallelism scheme uses:
- Data Parallelism (DP -- different GPUs process different subsets of the batches) of 2,
- Pipeline Parallelism (PP -- different GPUs process different layers) of 4,
- Tensor Parallelism (TP -- different GPUs process different subtensors for matrix multiplication) of 1.
#### Training Hyperparameters
| Hyperparameter | Value |
| --- | ------ |
| bf16 | true |
| lr | 3e-4 |
| eps | 1e-5 |
| betas | \[0.9, 0.95\] |
| clip_grad | 1 |
| weight decay | 0.1 |
| DP size | 16 |
| TP size | 4 |
| PP size | 1 |
| seq length | 2048 |
| lr scheduler | cosine |
| min lr | 1e-6 |
| warmup iteration | 2000 |
| micro batch size | 10 |
| global batch size | 1600 |
#### Sizes
The model was trained in September 2023.
The model architecture is exactly Llama 2, meaning
| Parameter | Value |
| --- | ------ |
| Model size | 7B |
| Hidden dimension | 4096 |
| Num. attention heads | 32 |
| Num. layers | 32 |
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data & Metrics
#### Testing Data
- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
#### Metrics
- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.
### Results
We finetune meditron-7b, llama-2-7b, and pmc-llama-7b individually on the training data of each benchmark (pubmedqa, medmcqa, medqa).
We report the finetuned models' performance with top token selection as the inference mode.
For MMLU-Medical, models finetuned on MedMCQA are used for inference.
For MedQA-4-Option, models finetuned on MedQA are used for inference.
For a more detailed performance analysis, please see our paper.
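As a rough sketch of the top-token selection inference mode mentioned above (hypothetical code, not the actual evaluation harness): the predicted answer is the option letter whose token receives the highest next-token score.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Sci-fi-vy/Meditron-7b-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Placeholder multiple-choice prompt ending right before the answer letter
prompt = "Question: ...\nOptions:\nA. ...\nB. ...\nC. ...\nD. ...\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]

options = ["A", "B", "C", "D"]
option_ids = [tokenizer.encode(f" {o}", add_special_tokens=False)[0] for o in options]
scores = torch.stack([next_token_logits[i] for i in option_ids])
print("Predicted answer:", options[int(scores.argmax())])
```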
| Dataset | meditron-7b | llama-2-7b | pmc-llama-7b | Zephyr-7B-beta* | Mistral-7B-instruct* |
| --- | ------ |----- |----- |----- |----- |
| MMLU-Medical | 54.2 | 53.7 | 56.4 | 63.3 | 60.0 |
| PubMedQA | 74.4 | 61.8 | 59.2 | 46.0 | 17.8 |
| MedMCQA | 59.2 | 54.4 | 57.6 | 43.0 | 40.2 |
| MedQA | 47.9 | 44.0 | 42.4 | 42.8 | 32.4 |
| MedQA-4-Option | 52.0 | 49.6 | 49.2 | 48.5 | 41.1 |
| Avg | 57.5 | 52.7 | 53.0 | 48.7 | 38.3 |
**Note**: models with * are already instruction-tuned, so we exclude them from further finetuning on any training data. | [
"QUESTION_ANSWERING"
] | [
"MEDQA",
"PUBMEDQA"
] |
ggml-org/bge-small-en-v1.5-Q8_0-GGUF | ggml-org | feature-extraction | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:quantized:BAAI/bge-small-en-v1.5",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-06T09:26:01 | 2025-02-06T09:40:16 | 78 | 0 | ---
base_model: BAAI/bge-small-en-v1.5
language:
- en
license: mit
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
- llama-cpp
- gguf-my-repo
model-index:
- name: bge-small-en-v1.5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.79104477611939
- type: ap
value: 37.21923821573361
- type: f1
value: 68.0914945617093
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.75377499999999
- type: ap
value: 89.46766124546022
- type: f1
value: 92.73884001331487
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.986
- type: f1
value: 46.55936786727896
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.846000000000004
- type: map_at_10
value: 51.388
- type: map_at_100
value: 52.132999999999996
- type: map_at_1000
value: 52.141000000000005
- type: map_at_3
value: 47.037
- type: map_at_5
value: 49.579
- type: mrr_at_1
value: 36.558
- type: mrr_at_10
value: 51.658
- type: mrr_at_100
value: 52.402
- type: mrr_at_1000
value: 52.410000000000004
- type: mrr_at_3
value: 47.345
- type: mrr_at_5
value: 49.797999999999995
- type: ndcg_at_1
value: 35.846000000000004
- type: ndcg_at_10
value: 59.550000000000004
- type: ndcg_at_100
value: 62.596
- type: ndcg_at_1000
value: 62.759
- type: ndcg_at_3
value: 50.666999999999994
- type: ndcg_at_5
value: 55.228
- type: precision_at_1
value: 35.846000000000004
- type: precision_at_10
value: 8.542
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.389
- type: precision_at_5
value: 14.438
- type: recall_at_1
value: 35.846000000000004
- type: recall_at_10
value: 85.42
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 61.166
- type: recall_at_5
value: 72.191
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.402770198163594
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.01545436974177
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.586465273207196
- type: mrr
value: 74.42169019038825
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.1891186537969
- type: cos_sim_spearman
value: 83.75492046087288
- type: euclidean_pearson
value: 84.11766204805357
- type: euclidean_spearman
value: 84.01456493126516
- type: manhattan_pearson
value: 84.2132950502772
- type: manhattan_spearman
value: 83.89227298813377
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.74025974025975
- type: f1
value: 85.71493566466381
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.467181385006434
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.719496037339056
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.587000000000003
- type: map_at_10
value: 41.114
- type: map_at_100
value: 42.532
- type: map_at_1000
value: 42.661
- type: map_at_3
value: 37.483
- type: map_at_5
value: 39.652
- type: mrr_at_1
value: 36.338
- type: mrr_at_10
value: 46.763
- type: mrr_at_100
value: 47.393
- type: mrr_at_1000
value: 47.445
- type: mrr_at_3
value: 43.538
- type: mrr_at_5
value: 45.556000000000004
- type: ndcg_at_1
value: 36.338
- type: ndcg_at_10
value: 47.658
- type: ndcg_at_100
value: 52.824000000000005
- type: ndcg_at_1000
value: 54.913999999999994
- type: ndcg_at_3
value: 41.989
- type: ndcg_at_5
value: 44.944
- type: precision_at_1
value: 36.338
- type: precision_at_10
value: 9.156
- type: precision_at_100
value: 1.4789999999999999
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 20.076
- type: precision_at_5
value: 14.85
- type: recall_at_1
value: 29.587000000000003
- type: recall_at_10
value: 60.746
- type: recall_at_100
value: 82.157
- type: recall_at_1000
value: 95.645
- type: recall_at_3
value: 44.821
- type: recall_at_5
value: 52.819
- type: map_at_1
value: 30.239
- type: map_at_10
value: 39.989000000000004
- type: map_at_100
value: 41.196
- type: map_at_1000
value: 41.325
- type: map_at_3
value: 37.261
- type: map_at_5
value: 38.833
- type: mrr_at_1
value: 37.516
- type: mrr_at_10
value: 46.177
- type: mrr_at_100
value: 46.806
- type: mrr_at_1000
value: 46.849000000000004
- type: mrr_at_3
value: 44.002
- type: mrr_at_5
value: 45.34
- type: ndcg_at_1
value: 37.516
- type: ndcg_at_10
value: 45.586
- type: ndcg_at_100
value: 49.897000000000006
- type: ndcg_at_1000
value: 51.955
- type: ndcg_at_3
value: 41.684
- type: ndcg_at_5
value: 43.617
- type: precision_at_1
value: 37.516
- type: precision_at_10
value: 8.522
- type: precision_at_100
value: 1.374
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 20.105999999999998
- type: precision_at_5
value: 14.152999999999999
- type: recall_at_1
value: 30.239
- type: recall_at_10
value: 55.03
- type: recall_at_100
value: 73.375
- type: recall_at_1000
value: 86.29599999999999
- type: recall_at_3
value: 43.269000000000005
- type: recall_at_5
value: 48.878
- type: map_at_1
value: 38.338
- type: map_at_10
value: 50.468999999999994
- type: map_at_100
value: 51.553000000000004
- type: map_at_1000
value: 51.608
- type: map_at_3
value: 47.107
- type: map_at_5
value: 49.101
- type: mrr_at_1
value: 44.201
- type: mrr_at_10
value: 54.057
- type: mrr_at_100
value: 54.764
- type: mrr_at_1000
value: 54.791000000000004
- type: mrr_at_3
value: 51.56699999999999
- type: mrr_at_5
value: 53.05
- type: ndcg_at_1
value: 44.201
- type: ndcg_at_10
value: 56.379000000000005
- type: ndcg_at_100
value: 60.645
- type: ndcg_at_1000
value: 61.73499999999999
- type: ndcg_at_3
value: 50.726000000000006
- type: ndcg_at_5
value: 53.58500000000001
- type: precision_at_1
value: 44.201
- type: precision_at_10
value: 9.141
- type: precision_at_100
value: 1.216
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.654
- type: precision_at_5
value: 15.723999999999998
- type: recall_at_1
value: 38.338
- type: recall_at_10
value: 70.30499999999999
- type: recall_at_100
value: 88.77199999999999
- type: recall_at_1000
value: 96.49799999999999
- type: recall_at_3
value: 55.218
- type: recall_at_5
value: 62.104000000000006
- type: map_at_1
value: 25.682
- type: map_at_10
value: 33.498
- type: map_at_100
value: 34.461000000000006
- type: map_at_1000
value: 34.544000000000004
- type: map_at_3
value: 30.503999999999998
- type: map_at_5
value: 32.216
- type: mrr_at_1
value: 27.683999999999997
- type: mrr_at_10
value: 35.467999999999996
- type: mrr_at_100
value: 36.32
- type: mrr_at_1000
value: 36.386
- type: mrr_at_3
value: 32.618
- type: mrr_at_5
value: 34.262
- type: ndcg_at_1
value: 27.683999999999997
- type: ndcg_at_10
value: 38.378
- type: ndcg_at_100
value: 43.288
- type: ndcg_at_1000
value: 45.413
- type: ndcg_at_3
value: 32.586
- type: ndcg_at_5
value: 35.499
- type: precision_at_1
value: 27.683999999999997
- type: precision_at_10
value: 5.864
- type: precision_at_100
value: 0.882
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 13.446
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 25.682
- type: recall_at_10
value: 51.712
- type: recall_at_100
value: 74.446
- type: recall_at_1000
value: 90.472
- type: recall_at_3
value: 36.236000000000004
- type: recall_at_5
value: 43.234
- type: map_at_1
value: 16.073999999999998
- type: map_at_10
value: 24.352999999999998
- type: map_at_100
value: 25.438
- type: map_at_1000
value: 25.545
- type: map_at_3
value: 21.614
- type: map_at_5
value: 23.104
- type: mrr_at_1
value: 19.776
- type: mrr_at_10
value: 28.837000000000003
- type: mrr_at_100
value: 29.755
- type: mrr_at_1000
value: 29.817
- type: mrr_at_3
value: 26.201999999999998
- type: mrr_at_5
value: 27.714
- type: ndcg_at_1
value: 19.776
- type: ndcg_at_10
value: 29.701
- type: ndcg_at_100
value: 35.307
- type: ndcg_at_1000
value: 37.942
- type: ndcg_at_3
value: 24.764
- type: ndcg_at_5
value: 27.025
- type: precision_at_1
value: 19.776
- type: precision_at_10
value: 5.659
- type: precision_at_100
value: 0.971
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 12.065
- type: precision_at_5
value: 8.905000000000001
- type: recall_at_1
value: 16.073999999999998
- type: recall_at_10
value: 41.647
- type: recall_at_100
value: 66.884
- type: recall_at_1000
value: 85.91499999999999
- type: recall_at_3
value: 27.916
- type: recall_at_5
value: 33.729
- type: map_at_1
value: 28.444999999999997
- type: map_at_10
value: 38.218999999999994
- type: map_at_100
value: 39.595
- type: map_at_1000
value: 39.709
- type: map_at_3
value: 35.586
- type: map_at_5
value: 36.895
- type: mrr_at_1
value: 34.841
- type: mrr_at_10
value: 44.106
- type: mrr_at_100
value: 44.98
- type: mrr_at_1000
value: 45.03
- type: mrr_at_3
value: 41.979
- type: mrr_at_5
value: 43.047999999999995
- type: ndcg_at_1
value: 34.841
- type: ndcg_at_10
value: 43.922
- type: ndcg_at_100
value: 49.504999999999995
- type: ndcg_at_1000
value: 51.675000000000004
- type: ndcg_at_3
value: 39.858
- type: ndcg_at_5
value: 41.408
- type: precision_at_1
value: 34.841
- type: precision_at_10
value: 7.872999999999999
- type: precision_at_100
value: 1.2449999999999999
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 18.993
- type: precision_at_5
value: 13.032
- type: recall_at_1
value: 28.444999999999997
- type: recall_at_10
value: 54.984
- type: recall_at_100
value: 78.342
- type: recall_at_1000
value: 92.77
- type: recall_at_3
value: 42.842999999999996
- type: recall_at_5
value: 47.247
- type: map_at_1
value: 23.072
- type: map_at_10
value: 32.354
- type: map_at_100
value: 33.800000000000004
- type: map_at_1000
value: 33.908
- type: map_at_3
value: 29.232000000000003
- type: map_at_5
value: 31.049
- type: mrr_at_1
value: 29.110000000000003
- type: mrr_at_10
value: 38.03
- type: mrr_at_100
value: 39.032
- type: mrr_at_1000
value: 39.086999999999996
- type: mrr_at_3
value: 35.407
- type: mrr_at_5
value: 36.76
- type: ndcg_at_1
value: 29.110000000000003
- type: ndcg_at_10
value: 38.231
- type: ndcg_at_100
value: 44.425
- type: ndcg_at_1000
value: 46.771
- type: ndcg_at_3
value: 33.095
- type: ndcg_at_5
value: 35.459
- type: precision_at_1
value: 29.110000000000003
- type: precision_at_10
value: 7.215000000000001
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 16.058
- type: precision_at_5
value: 11.644
- type: recall_at_1
value: 23.072
- type: recall_at_10
value: 50.285999999999994
- type: recall_at_100
value: 76.596
- type: recall_at_1000
value: 92.861
- type: recall_at_3
value: 35.702
- type: recall_at_5
value: 42.152
- type: map_at_1
value: 24.937916666666666
- type: map_at_10
value: 33.755250000000004
- type: map_at_100
value: 34.955999999999996
- type: map_at_1000
value: 35.070499999999996
- type: map_at_3
value: 30.98708333333333
- type: map_at_5
value: 32.51491666666666
- type: mrr_at_1
value: 29.48708333333333
- type: mrr_at_10
value: 37.92183333333334
- type: mrr_at_100
value: 38.76583333333333
- type: mrr_at_1000
value: 38.82466666666667
- type: mrr_at_3
value: 35.45125
- type: mrr_at_5
value: 36.827000000000005
- type: ndcg_at_1
value: 29.48708333333333
- type: ndcg_at_10
value: 39.05225
- type: ndcg_at_100
value: 44.25983333333334
- type: ndcg_at_1000
value: 46.568333333333335
- type: ndcg_at_3
value: 34.271583333333325
- type: ndcg_at_5
value: 36.483916666666666
- type: precision_at_1
value: 29.48708333333333
- type: precision_at_10
value: 6.865749999999999
- type: precision_at_100
value: 1.1195833333333332
- type: precision_at_1000
value: 0.15058333333333335
- type: precision_at_3
value: 15.742083333333333
- type: precision_at_5
value: 11.221916666666667
- type: recall_at_1
value: 24.937916666666666
- type: recall_at_10
value: 50.650416666666665
- type: recall_at_100
value: 73.55383333333334
- type: recall_at_1000
value: 89.61691666666667
- type: recall_at_3
value: 37.27808333333334
- type: recall_at_5
value: 42.99475
- type: map_at_1
value: 23.947
- type: map_at_10
value: 30.575000000000003
- type: map_at_100
value: 31.465
- type: map_at_1000
value: 31.558000000000003
- type: map_at_3
value: 28.814
- type: map_at_5
value: 29.738999999999997
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 33.415
- type: mrr_at_100
value: 34.18
- type: mrr_at_1000
value: 34.245
- type: mrr_at_3
value: 31.621
- type: mrr_at_5
value: 32.549
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 34.482
- type: ndcg_at_100
value: 38.915
- type: ndcg_at_1000
value: 41.355
- type: ndcg_at_3
value: 31.139
- type: ndcg_at_5
value: 32.589
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 5.322
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.344000000000001
- type: precision_at_5
value: 8.988
- type: recall_at_1
value: 23.947
- type: recall_at_10
value: 43.647999999999996
- type: recall_at_100
value: 63.851
- type: recall_at_1000
value: 82.0
- type: recall_at_3
value: 34.288000000000004
- type: recall_at_5
value: 38.117000000000004
- type: map_at_1
value: 16.197
- type: map_at_10
value: 22.968
- type: map_at_100
value: 24.095
- type: map_at_1000
value: 24.217
- type: map_at_3
value: 20.771
- type: map_at_5
value: 21.995
- type: mrr_at_1
value: 19.511
- type: mrr_at_10
value: 26.55
- type: mrr_at_100
value: 27.500999999999998
- type: mrr_at_1000
value: 27.578999999999997
- type: mrr_at_3
value: 24.421
- type: mrr_at_5
value: 25.604
- type: ndcg_at_1
value: 19.511
- type: ndcg_at_10
value: 27.386
- type: ndcg_at_100
value: 32.828
- type: ndcg_at_1000
value: 35.739
- type: ndcg_at_3
value: 23.405
- type: ndcg_at_5
value: 25.255
- type: precision_at_1
value: 19.511
- type: precision_at_10
value: 5.017
- type: precision_at_100
value: 0.91
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 11.023
- type: precision_at_5
value: 8.025
- type: recall_at_1
value: 16.197
- type: recall_at_10
value: 37.09
- type: recall_at_100
value: 61.778
- type: recall_at_1000
value: 82.56599999999999
- type: recall_at_3
value: 26.034000000000002
- type: recall_at_5
value: 30.762
- type: map_at_1
value: 25.41
- type: map_at_10
value: 33.655
- type: map_at_100
value: 34.892
- type: map_at_1000
value: 34.995
- type: map_at_3
value: 30.94
- type: map_at_5
value: 32.303
- type: mrr_at_1
value: 29.477999999999998
- type: mrr_at_10
value: 37.443
- type: mrr_at_100
value: 38.383
- type: mrr_at_1000
value: 38.440000000000005
- type: mrr_at_3
value: 34.949999999999996
- type: mrr_at_5
value: 36.228
- type: ndcg_at_1
value: 29.477999999999998
- type: ndcg_at_10
value: 38.769
- type: ndcg_at_100
value: 44.245000000000005
- type: ndcg_at_1000
value: 46.593
- type: ndcg_at_3
value: 33.623
- type: ndcg_at_5
value: 35.766
- type: precision_at_1
value: 29.477999999999998
- type: precision_at_10
value: 6.455
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 14.893999999999998
- type: precision_at_5
value: 10.485
- type: recall_at_1
value: 25.41
- type: recall_at_10
value: 50.669
- type: recall_at_100
value: 74.084
- type: recall_at_1000
value: 90.435
- type: recall_at_3
value: 36.679
- type: recall_at_5
value: 41.94
- type: map_at_1
value: 23.339
- type: map_at_10
value: 31.852000000000004
- type: map_at_100
value: 33.411
- type: map_at_1000
value: 33.62
- type: map_at_3
value: 28.929
- type: map_at_5
value: 30.542
- type: mrr_at_1
value: 28.063
- type: mrr_at_10
value: 36.301
- type: mrr_at_100
value: 37.288
- type: mrr_at_1000
value: 37.349
- type: mrr_at_3
value: 33.663
- type: mrr_at_5
value: 35.165
- type: ndcg_at_1
value: 28.063
- type: ndcg_at_10
value: 37.462
- type: ndcg_at_100
value: 43.620999999999995
- type: ndcg_at_1000
value: 46.211
- type: ndcg_at_3
value: 32.68
- type: ndcg_at_5
value: 34.981
- type: precision_at_1
value: 28.063
- type: precision_at_10
value: 7.1739999999999995
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 15.217
- type: precision_at_5
value: 11.265
- type: recall_at_1
value: 23.339
- type: recall_at_10
value: 48.376999999999995
- type: recall_at_100
value: 76.053
- type: recall_at_1000
value: 92.455
- type: recall_at_3
value: 34.735
- type: recall_at_5
value: 40.71
- type: map_at_1
value: 18.925
- type: map_at_10
value: 26.017000000000003
- type: map_at_100
value: 27.034000000000002
- type: map_at_1000
value: 27.156000000000002
- type: map_at_3
value: 23.604
- type: map_at_5
value: 24.75
- type: mrr_at_1
value: 20.333000000000002
- type: mrr_at_10
value: 27.915
- type: mrr_at_100
value: 28.788000000000004
- type: mrr_at_1000
value: 28.877999999999997
- type: mrr_at_3
value: 25.446999999999996
- type: mrr_at_5
value: 26.648
- type: ndcg_at_1
value: 20.333000000000002
- type: ndcg_at_10
value: 30.673000000000002
- type: ndcg_at_100
value: 35.618
- type: ndcg_at_1000
value: 38.517
- type: ndcg_at_3
value: 25.71
- type: ndcg_at_5
value: 27.679
- type: precision_at_1
value: 20.333000000000002
- type: precision_at_10
value: 4.9910000000000005
- type: precision_at_100
value: 0.8130000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.8740000000000006
- type: recall_at_1
value: 18.925
- type: recall_at_10
value: 43.311
- type: recall_at_100
value: 66.308
- type: recall_at_1000
value: 87.49
- type: recall_at_3
value: 29.596
- type: recall_at_5
value: 34.245
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.714
- type: map_at_10
value: 23.194
- type: map_at_100
value: 24.976000000000003
- type: map_at_1000
value: 25.166
- type: map_at_3
value: 19.709
- type: map_at_5
value: 21.523999999999997
- type: mrr_at_1
value: 30.619000000000003
- type: mrr_at_10
value: 42.563
- type: mrr_at_100
value: 43.386
- type: mrr_at_1000
value: 43.423
- type: mrr_at_3
value: 39.555
- type: mrr_at_5
value: 41.268
- type: ndcg_at_1
value: 30.619000000000003
- type: ndcg_at_10
value: 31.836
- type: ndcg_at_100
value: 38.652
- type: ndcg_at_1000
value: 42.088
- type: ndcg_at_3
value: 26.733
- type: ndcg_at_5
value: 28.435
- type: precision_at_1
value: 30.619000000000003
- type: precision_at_10
value: 9.751999999999999
- type: precision_at_100
value: 1.71
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 19.935
- type: precision_at_5
value: 14.984
- type: recall_at_1
value: 13.714
- type: recall_at_10
value: 37.26
- type: recall_at_100
value: 60.546
- type: recall_at_1000
value: 79.899
- type: recall_at_3
value: 24.325
- type: recall_at_5
value: 29.725
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.462
- type: map_at_10
value: 18.637
- type: map_at_100
value: 26.131999999999998
- type: map_at_1000
value: 27.607
- type: map_at_3
value: 13.333
- type: map_at_5
value: 15.654000000000002
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.32600000000001
- type: mrr_at_100
value: 74.60900000000001
- type: mrr_at_1000
value: 74.62
- type: mrr_at_3
value: 72.667
- type: mrr_at_5
value: 73.817
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.028999999999996
- type: ndcg_at_100
value: 44.199
- type: ndcg_at_1000
value: 51.629999999999995
- type: ndcg_at_3
value: 44.113
- type: ndcg_at_5
value: 41.731
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 31.900000000000002
- type: precision_at_100
value: 10.043000000000001
- type: precision_at_1000
value: 1.926
- type: precision_at_3
value: 47.417
- type: precision_at_5
value: 40.65
- type: recall_at_1
value: 8.462
- type: recall_at_10
value: 24.293
- type: recall_at_100
value: 50.146
- type: recall_at_1000
value: 74.034
- type: recall_at_3
value: 14.967
- type: recall_at_5
value: 18.682000000000002
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.84499999999999
- type: f1
value: 42.48106691979349
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.034
- type: map_at_10
value: 82.76
- type: map_at_100
value: 82.968
- type: map_at_1000
value: 82.98299999999999
- type: map_at_3
value: 81.768
- type: map_at_5
value: 82.418
- type: mrr_at_1
value: 80.048
- type: mrr_at_10
value: 87.64999999999999
- type: mrr_at_100
value: 87.712
- type: mrr_at_1000
value: 87.713
- type: mrr_at_3
value: 87.01100000000001
- type: mrr_at_5
value: 87.466
- type: ndcg_at_1
value: 80.048
- type: ndcg_at_10
value: 86.643
- type: ndcg_at_100
value: 87.361
- type: ndcg_at_1000
value: 87.606
- type: ndcg_at_3
value: 85.137
- type: ndcg_at_5
value: 86.016
- type: precision_at_1
value: 80.048
- type: precision_at_10
value: 10.372
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 32.638
- type: precision_at_5
value: 20.177
- type: recall_at_1
value: 74.034
- type: recall_at_10
value: 93.769
- type: recall_at_100
value: 96.569
- type: recall_at_1000
value: 98.039
- type: recall_at_3
value: 89.581
- type: recall_at_5
value: 91.906
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.5
- type: map_at_10
value: 32.857
- type: map_at_100
value: 34.589
- type: map_at_1000
value: 34.778
- type: map_at_3
value: 29.160999999999998
- type: map_at_5
value: 31.033
- type: mrr_at_1
value: 40.123
- type: mrr_at_10
value: 48.776
- type: mrr_at_100
value: 49.495
- type: mrr_at_1000
value: 49.539
- type: mrr_at_3
value: 46.605000000000004
- type: mrr_at_5
value: 47.654
- type: ndcg_at_1
value: 40.123
- type: ndcg_at_10
value: 40.343
- type: ndcg_at_100
value: 46.56
- type: ndcg_at_1000
value: 49.777
- type: ndcg_at_3
value: 37.322
- type: ndcg_at_5
value: 37.791000000000004
- type: precision_at_1
value: 40.123
- type: precision_at_10
value: 11.08
- type: precision_at_100
value: 1.752
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 24.897
- type: precision_at_5
value: 17.809
- type: recall_at_1
value: 20.5
- type: recall_at_10
value: 46.388
- type: recall_at_100
value: 69.552
- type: recall_at_1000
value: 89.011
- type: recall_at_3
value: 33.617999999999995
- type: recall_at_5
value: 38.211
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.135999999999996
- type: map_at_10
value: 61.673
- type: map_at_100
value: 62.562
- type: map_at_1000
value: 62.62
- type: map_at_3
value: 58.467999999999996
- type: map_at_5
value: 60.463
- type: mrr_at_1
value: 78.271
- type: mrr_at_10
value: 84.119
- type: mrr_at_100
value: 84.29299999999999
- type: mrr_at_1000
value: 84.299
- type: mrr_at_3
value: 83.18900000000001
- type: mrr_at_5
value: 83.786
- type: ndcg_at_1
value: 78.271
- type: ndcg_at_10
value: 69.935
- type: ndcg_at_100
value: 73.01299999999999
- type: ndcg_at_1000
value: 74.126
- type: ndcg_at_3
value: 65.388
- type: ndcg_at_5
value: 67.906
- type: precision_at_1
value: 78.271
- type: precision_at_10
value: 14.562
- type: precision_at_100
value: 1.6969999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 41.841
- type: precision_at_5
value: 27.087
- type: recall_at_1
value: 39.135999999999996
- type: recall_at_10
value: 72.809
- type: recall_at_100
value: 84.86200000000001
- type: recall_at_1000
value: 92.208
- type: recall_at_3
value: 62.76199999999999
- type: recall_at_5
value: 67.718
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.60600000000001
- type: ap
value: 86.6579587804335
- type: f1
value: 90.5938853929307
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.852
- type: map_at_10
value: 33.982
- type: map_at_100
value: 35.116
- type: map_at_1000
value: 35.167
- type: map_at_3
value: 30.134
- type: map_at_5
value: 32.340999999999994
- type: mrr_at_1
value: 22.479
- type: mrr_at_10
value: 34.594
- type: mrr_at_100
value: 35.672
- type: mrr_at_1000
value: 35.716
- type: mrr_at_3
value: 30.84
- type: mrr_at_5
value: 32.998
- type: ndcg_at_1
value: 22.493
- type: ndcg_at_10
value: 40.833000000000006
- type: ndcg_at_100
value: 46.357
- type: ndcg_at_1000
value: 47.637
- type: ndcg_at_3
value: 32.995999999999995
- type: ndcg_at_5
value: 36.919000000000004
- type: precision_at_1
value: 22.493
- type: precision_at_10
value: 6.465999999999999
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.030999999999999
- type: precision_at_5
value: 10.413
- type: recall_at_1
value: 21.852
- type: recall_at_10
value: 61.934999999999995
- type: recall_at_100
value: 87.611
- type: recall_at_1000
value: 97.441
- type: recall_at_3
value: 40.583999999999996
- type: recall_at_5
value: 49.992999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.36069311445507
- type: f1
value: 93.16456330371453
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.74692202462381
- type: f1
value: 58.17903579421599
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.80833893745796
- type: f1
value: 72.70786592684664
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69872225958305
- type: f1
value: 78.61626934504731
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.058658628717694
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.85561739360599
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.290259910144385
- type: mrr
value: 32.44223046102856
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.288
- type: map_at_10
value: 12.267999999999999
- type: map_at_100
value: 15.557000000000002
- type: map_at_1000
value: 16.98
- type: map_at_3
value: 8.866
- type: map_at_5
value: 10.418
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 52.681
- type: mrr_at_100
value: 53.315999999999995
- type: mrr_at_1000
value: 53.357
- type: mrr_at_3
value: 51.393
- type: mrr_at_5
value: 51.903999999999996
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.305
- type: ndcg_at_100
value: 30.825999999999997
- type: ndcg_at_1000
value: 39.393
- type: ndcg_at_3
value: 39.931
- type: ndcg_at_5
value: 37.519999999999996
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.728
- type: precision_at_100
value: 7.932
- type: precision_at_1000
value: 2.07
- type: precision_at_3
value: 38.184000000000005
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.288
- type: recall_at_10
value: 16.195
- type: recall_at_100
value: 31.135
- type: recall_at_1000
value: 61.531000000000006
- type: recall_at_3
value: 10.313
- type: recall_at_5
value: 12.754999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.216
- type: map_at_10
value: 42.588
- type: map_at_100
value: 43.702999999999996
- type: map_at_1000
value: 43.739
- type: map_at_3
value: 38.177
- type: map_at_5
value: 40.754000000000005
- type: mrr_at_1
value: 31.866
- type: mrr_at_10
value: 45.189
- type: mrr_at_100
value: 46.056000000000004
- type: mrr_at_1000
value: 46.081
- type: mrr_at_3
value: 41.526999999999994
- type: mrr_at_5
value: 43.704
- type: ndcg_at_1
value: 31.837
- type: ndcg_at_10
value: 50.178
- type: ndcg_at_100
value: 54.98800000000001
- type: ndcg_at_1000
value: 55.812
- type: ndcg_at_3
value: 41.853
- type: ndcg_at_5
value: 46.153
- type: precision_at_1
value: 31.837
- type: precision_at_10
value: 8.43
- type: precision_at_100
value: 1.1119999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.911000000000001
- type: recall_at_1
value: 28.216
- type: recall_at_10
value: 70.8
- type: recall_at_100
value: 91.857
- type: recall_at_1000
value: 97.941
- type: recall_at_3
value: 49.196
- type: recall_at_5
value: 59.072
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.22800000000001
- type: map_at_10
value: 85.115
- type: map_at_100
value: 85.72
- type: map_at_1000
value: 85.737
- type: map_at_3
value: 82.149
- type: map_at_5
value: 84.029
- type: mrr_at_1
value: 81.96
- type: mrr_at_10
value: 88.00200000000001
- type: mrr_at_100
value: 88.088
- type: mrr_at_1000
value: 88.089
- type: mrr_at_3
value: 87.055
- type: mrr_at_5
value: 87.715
- type: ndcg_at_1
value: 82.01
- type: ndcg_at_10
value: 88.78
- type: ndcg_at_100
value: 89.91
- type: ndcg_at_1000
value: 90.013
- type: ndcg_at_3
value: 85.957
- type: ndcg_at_5
value: 87.56
- type: precision_at_1
value: 82.01
- type: precision_at_10
value: 13.462
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.732000000000003
- type: recall_at_1
value: 71.22800000000001
- type: recall_at_10
value: 95.69
- type: recall_at_100
value: 99.531
- type: recall_at_1000
value: 99.98
- type: recall_at_3
value: 87.632
- type: recall_at_5
value: 92.117
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 52.31768034366916
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.640266772723606
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.7780000000000005
- type: map_at_10
value: 12.299
- type: map_at_100
value: 14.363000000000001
- type: map_at_1000
value: 14.71
- type: map_at_3
value: 8.738999999999999
- type: map_at_5
value: 10.397
- type: mrr_at_1
value: 23.599999999999998
- type: mrr_at_10
value: 34.845
- type: mrr_at_100
value: 35.916
- type: mrr_at_1000
value: 35.973
- type: mrr_at_3
value: 31.7
- type: mrr_at_5
value: 33.535
- type: ndcg_at_1
value: 23.599999999999998
- type: ndcg_at_10
value: 20.522000000000002
- type: ndcg_at_100
value: 28.737000000000002
- type: ndcg_at_1000
value: 34.596
- type: ndcg_at_3
value: 19.542
- type: ndcg_at_5
value: 16.958000000000002
- type: precision_at_1
value: 23.599999999999998
- type: precision_at_10
value: 10.67
- type: precision_at_100
value: 2.259
- type: precision_at_1000
value: 0.367
- type: precision_at_3
value: 18.333
- type: precision_at_5
value: 14.879999999999999
- type: recall_at_1
value: 4.7780000000000005
- type: recall_at_10
value: 21.617
- type: recall_at_100
value: 45.905
- type: recall_at_1000
value: 74.42
- type: recall_at_3
value: 11.148
- type: recall_at_5
value: 15.082999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.22372750297885
- type: cos_sim_spearman
value: 79.40972617119405
- type: euclidean_pearson
value: 80.6101072020434
- type: euclidean_spearman
value: 79.53844217225202
- type: manhattan_pearson
value: 80.57265975286111
- type: manhattan_spearman
value: 79.46335611792958
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.43713315520749
- type: cos_sim_spearman
value: 77.44128693329532
- type: euclidean_pearson
value: 81.63869928101123
- type: euclidean_spearman
value: 77.29512977961515
- type: manhattan_pearson
value: 81.63704185566183
- type: manhattan_spearman
value: 77.29909412738657
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.59451537860527
- type: cos_sim_spearman
value: 82.97994638856723
- type: euclidean_pearson
value: 82.89478688288412
- type: euclidean_spearman
value: 83.58740751053104
- type: manhattan_pearson
value: 82.69140840941608
- type: manhattan_spearman
value: 83.33665956040555
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.00756527711764
- type: cos_sim_spearman
value: 81.83560996841379
- type: euclidean_pearson
value: 82.07684151976518
- type: euclidean_spearman
value: 82.00913052060511
- type: manhattan_pearson
value: 82.05690778488794
- type: manhattan_spearman
value: 82.02260252019525
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.13710262895447
- type: cos_sim_spearman
value: 87.26412811156248
- type: euclidean_pearson
value: 86.94151453230228
- type: euclidean_spearman
value: 87.5363796699571
- type: manhattan_pearson
value: 86.86989424083748
- type: manhattan_spearman
value: 87.47315940781353
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.0230597603627
- type: cos_sim_spearman
value: 84.93344499318864
- type: euclidean_pearson
value: 84.23754743431141
- type: euclidean_spearman
value: 85.09707376597099
- type: manhattan_pearson
value: 84.04325160987763
- type: manhattan_spearman
value: 84.89353071339909
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.75620824563921
- type: cos_sim_spearman
value: 87.15065513706398
- type: euclidean_pearson
value: 88.26281533633521
- type: euclidean_spearman
value: 87.51963738643983
- type: manhattan_pearson
value: 88.25599267618065
- type: manhattan_spearman
value: 87.58048736047483
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.74645319195137
- type: cos_sim_spearman
value: 65.29996325037214
- type: euclidean_pearson
value: 67.04297794086443
- type: euclidean_spearman
value: 65.43841726694343
- type: manhattan_pearson
value: 67.39459955690904
- type: manhattan_spearman
value: 65.92864704413651
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.31291020270801
- type: cos_sim_spearman
value: 85.86473738688068
- type: euclidean_pearson
value: 85.65537275064152
- type: euclidean_spearman
value: 86.13087454209642
- type: manhattan_pearson
value: 85.43946955047609
- type: manhattan_spearman
value: 85.91568175344916
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.93798118350695
- type: mrr
value: 95.93536274908824
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.594
- type: map_at_10
value: 66.81899999999999
- type: map_at_100
value: 67.368
- type: map_at_1000
value: 67.4
- type: map_at_3
value: 64.061
- type: map_at_5
value: 65.47
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.219
- type: mrr_at_100
value: 68.655
- type: mrr_at_1000
value: 68.684
- type: mrr_at_3
value: 66.22200000000001
- type: mrr_at_5
value: 67.289
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.275
- type: ndcg_at_100
value: 73.642
- type: ndcg_at_1000
value: 74.373
- type: ndcg_at_3
value: 66.521
- type: ndcg_at_5
value: 68.581
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.433
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.556
- type: precision_at_5
value: 16.8
- type: recall_at_1
value: 57.594
- type: recall_at_10
value: 83.622
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.64399999999999
- type: recall_at_5
value: 75.983
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85841584158416
- type: cos_sim_ap
value: 96.66996142314342
- type: cos_sim_f1
value: 92.83208020050125
- type: cos_sim_precision
value: 93.06532663316584
- type: cos_sim_recall
value: 92.60000000000001
- type: dot_accuracy
value: 99.85841584158416
- type: dot_ap
value: 96.6775307676576
- type: dot_f1
value: 92.69289729177312
- type: dot_precision
value: 94.77533960292581
- type: dot_recall
value: 90.7
- type: euclidean_accuracy
value: 99.86138613861387
- type: euclidean_ap
value: 96.6338454403108
- type: euclidean_f1
value: 92.92214357937311
- type: euclidean_precision
value: 93.96728016359918
- type: euclidean_recall
value: 91.9
- type: manhattan_accuracy
value: 99.86237623762376
- type: manhattan_ap
value: 96.60370449645053
- type: manhattan_f1
value: 92.91177970423253
- type: manhattan_precision
value: 94.7970863683663
- type: manhattan_recall
value: 91.10000000000001
- type: max_accuracy
value: 99.86237623762376
- type: max_ap
value: 96.6775307676576
- type: max_f1
value: 92.92214357937311
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 60.77977058695198
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.2725272535638
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.64052466362125
- type: mrr
value: 54.533067014684654
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.677624219206578
- type: cos_sim_spearman
value: 30.121368518123447
- type: dot_pearson
value: 30.69870088041608
- type: dot_spearman
value: 29.61284927093751
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.855
- type: map_at_100
value: 9.885
- type: map_at_1000
value: 23.416999999999998
- type: map_at_3
value: 0.637
- type: map_at_5
value: 1.024
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.067
- type: mrr_at_100
value: 93.067
- type: mrr_at_1000
value: 93.067
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 93.067
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 75.899
- type: ndcg_at_100
value: 55.115
- type: ndcg_at_1000
value: 48.368
- type: ndcg_at_3
value: 79.704
- type: ndcg_at_5
value: 78.39699999999999
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 79.60000000000001
- type: precision_at_100
value: 56.06
- type: precision_at_1000
value: 21.206
- type: precision_at_3
value: 84.667
- type: precision_at_5
value: 83.2
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.078
- type: recall_at_100
value: 13.297
- type: recall_at_1000
value: 44.979
- type: recall_at_3
value: 0.6689999999999999
- type: recall_at_5
value: 1.106
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.258
- type: map_at_10
value: 10.439
- type: map_at_100
value: 16.89
- type: map_at_1000
value: 18.407999999999998
- type: map_at_3
value: 5.668
- type: map_at_5
value: 7.718
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 51.159
- type: mrr_at_100
value: 51.714000000000006
- type: mrr_at_1000
value: 51.714000000000006
- type: mrr_at_3
value: 47.959
- type: mrr_at_5
value: 50.407999999999994
- type: ndcg_at_1
value: 29.592000000000002
- type: ndcg_at_10
value: 26.037
- type: ndcg_at_100
value: 37.924
- type: ndcg_at_1000
value: 49.126999999999995
- type: ndcg_at_3
value: 30.631999999999998
- type: ndcg_at_5
value: 28.571
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 22.857
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 1.529
- type: precision_at_3
value: 34.014
- type: precision_at_5
value: 29.796
- type: recall_at_1
value: 2.258
- type: recall_at_10
value: 16.554
- type: recall_at_100
value: 48.439
- type: recall_at_1000
value: 82.80499999999999
- type: recall_at_3
value: 7.283
- type: recall_at_5
value: 10.732
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.8858
- type: ap
value: 13.835684144362109
- type: f1
value: 53.803351693244586
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.50650820599886
- type: f1
value: 60.84357825979259
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.52131044852134
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59337187816654
- type: cos_sim_ap
value: 73.23925826533437
- type: cos_sim_f1
value: 67.34693877551021
- type: cos_sim_precision
value: 62.40432237730752
- type: cos_sim_recall
value: 73.13984168865434
- type: dot_accuracy
value: 85.31322644096085
- type: dot_ap
value: 72.30723963807422
- type: dot_f1
value: 66.47051612112296
- type: dot_precision
value: 62.0792305930845
- type: dot_recall
value: 71.53034300791556
- type: euclidean_accuracy
value: 85.61125350181797
- type: euclidean_ap
value: 73.32843720487845
- type: euclidean_f1
value: 67.36549633745895
- type: euclidean_precision
value: 64.60755813953489
- type: euclidean_recall
value: 70.36939313984169
- type: manhattan_accuracy
value: 85.63509566668654
- type: manhattan_ap
value: 73.16658488311325
- type: manhattan_f1
value: 67.20597386434349
- type: manhattan_precision
value: 63.60424028268551
- type: manhattan_recall
value: 71.2401055408971
- type: max_accuracy
value: 85.63509566668654
- type: max_ap
value: 73.32843720487845
- type: max_f1
value: 67.36549633745895
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.33779640625606
- type: cos_sim_ap
value: 84.83868375898157
- type: cos_sim_f1
value: 77.16506154017773
- type: cos_sim_precision
value: 74.62064005753327
- type: cos_sim_recall
value: 79.88912842623961
- type: dot_accuracy
value: 88.02732176815307
- type: dot_ap
value: 83.95089283763002
- type: dot_f1
value: 76.29635101196631
- type: dot_precision
value: 73.31771720613288
- type: dot_recall
value: 79.52725592854944
- type: euclidean_accuracy
value: 88.44452206310397
- type: euclidean_ap
value: 84.98384576824827
- type: euclidean_f1
value: 77.29311047696697
- type: euclidean_precision
value: 74.51232583065381
- type: euclidean_recall
value: 80.28949799815214
- type: manhattan_accuracy
value: 88.47362906042613
- type: manhattan_ap
value: 84.91421462218432
- type: manhattan_f1
value: 77.05107637204792
- type: manhattan_precision
value: 74.74484256243214
- type: manhattan_recall
value: 79.50415768401602
- type: max_accuracy
value: 88.47362906042613
- type: max_ap
value: 84.98384576824827
- type: max_f1
value: 77.29311047696697
---
# ggml-org/bge-small-en-v1.5-Q8_0-GGUF
This model was converted to GGUF format from [`BAAI/bge-small-en-v1.5`](https://huggingface.co/BAAI/bge-small-en-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BAAI/bge-small-en-v1.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ggml-org/bge-small-en-v1.5-Q8_0-GGUF --hf-file bge-small-en-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ggml-org/bge-small-en-v1.5-Q8_0-GGUF --hf-file bge-small-en-v1.5-q8_0.gguf -c 2048
```
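Since this is an embedding model rather than a chat model, you will usually want embeddings rather than completions. A minimal sketch is shown below; it assumes a recent llama.cpp build in which the server takes an `--embeddings` flag (older builds used `--embedding`) and exposes an `/embedding` endpoint on the default port 8080. The exact flag and endpoint names may differ by version.
```bash
# Start the server with embeddings enabled (flag name may vary by llama.cpp version)
llama-server --hf-repo ggml-org/bge-small-en-v1.5-Q8_0-GGUF --hf-file bge-small-en-v1.5-q8_0.gguf --embeddings -c 512

# Request an embedding for a query (BGE v1.5 queries are usually prefixed with this instruction)
curl http://localhost:8080/embedding \
  -H "Content-Type: application/json" \
  -d '{"content": "Represent this sentence for searching relevant passages: what is a GGUF file?"}'
```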
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ggml-org/bge-small-en-v1.5-Q8_0-GGUF --hf-file bge-small-en-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ggml-org/bge-small-en-v1.5-Q8_0-GGUF --hf-file bge-small-en-v1.5-q8_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
mav23/pythia-1b-GGUF | mav23 | null | [
"gguf",
"pytorch",
"causal-lm",
"pythia",
"en",
"dataset:the_pile",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-11-20T18:24:08 | 2024-11-20T18:33:58 | 77 | 0 | ---
datasets:
- the_pile
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The statistically most likely next token, however, need
not produce the most “accurate” text. Never rely on Pythia-1B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting them to other people. Please inform your audience that the
text was generated by Pythia-1B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the weights saved at training step 3000 by pointing `revision` at the
# corresponding checkpoint branch.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# The tokenizer is the same for every checkpoint; pinning the revision simply
# keeps the download alongside the model weights.
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
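The quoted figures are mutually consistent: 143,000 steps at 2,097,152 tokens per step yields the
total token count above, and checkpoints every 1,000 steps correspond to 2,097,152,000 tokens. A
quick check:
```python
# Sanity-check the training-token arithmetic quoted above.
tokens_per_step = 2_097_152              # "2M" batch size, in tokens
total_steps = 143_000
steps_between_checkpoints = 1_000

print(tokens_per_step * total_steps)                 # 299892736000 tokens seen per model
print(tokens_per_step * steps_between_checkpoints)   # 2097152000 tokens between checkpoints
```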
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
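As a minimal sketch of reproducing one of these evaluations, the snippet below uses the harness's
Python entry point; the exact interface has changed between harness versions, so treat the call
signature as an assumption rather than the setup used for the published numbers.
```python
# Illustrative zero-shot evaluation of a Pythia checkpoint with lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1b,revision=step143000",
    tasks=["lambada_openai", "piqa", "winogrande", "arc_easy", "sciq"],
    batch_size=8,
)
print(results["results"])
```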
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models used an LR schedule which decayed to a minimum LR of 0. In the
redone training runs, we rectified this inconsistency: all models are now
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit | Muennighoff | sentence-similarity | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2023-03-27T22:19:34 | 76 | 2 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-125M-weightedmean-msmarco-specb-bitfit
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 61.23880597014926
- type: ap
value: 25.854431650388644
- type: f1
value: 55.751862762818604
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 56.88436830835117
- type: ap
value: 72.67279104379772
- type: f1
value: 54.449840243786404
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 58.27586206896551
- type: ap
value: 14.067357642500387
- type: f1
value: 48.172318518691334
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 54.64668094218415
- type: ap
value: 11.776694555054965
- type: f1
value: 44.526622834078765
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 65.401225
- type: ap
value: 60.22809958678552
- type: f1
value: 65.0251824898292
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 31.165999999999993
- type: f1
value: 30.908870050167437
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 24.79
- type: f1
value: 24.5833598854121
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 26.643999999999995
- type: f1
value: 26.39012792213563
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 26.386000000000003
- type: f1
value: 26.276867791454873
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 22.078000000000003
- type: f1
value: 21.797960290226843
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 24.274
- type: f1
value: 23.887054434822627
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 22.404
- type: map_at_10
value: 36.845
- type: map_at_100
value: 37.945
- type: map_at_1000
value: 37.966
- type: map_at_3
value: 31.78
- type: map_at_5
value: 34.608
- type: mrr_at_1
value: 22.902
- type: mrr_at_10
value: 37.034
- type: mrr_at_100
value: 38.134
- type: mrr_at_1000
value: 38.155
- type: mrr_at_3
value: 31.935000000000002
- type: mrr_at_5
value: 34.812
- type: ndcg_at_1
value: 22.404
- type: ndcg_at_10
value: 45.425
- type: ndcg_at_100
value: 50.354
- type: ndcg_at_1000
value: 50.873999999999995
- type: ndcg_at_3
value: 34.97
- type: ndcg_at_5
value: 40.081
- type: precision_at_1
value: 22.404
- type: precision_at_10
value: 7.303999999999999
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.746
- type: precision_at_5
value: 11.337
- type: recall_at_1
value: 22.404
- type: recall_at_10
value: 73.044
- type: recall_at_100
value: 95.092
- type: recall_at_1000
value: 99.075
- type: recall_at_3
value: 44.239
- type: recall_at_5
value: 56.686
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 39.70858340673288
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 28.242847713721048
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 55.83700395192393
- type: mrr
value: 70.3891307215407
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 79.25366801756223
- type: cos_sim_spearman
value: 75.20954502580506
- type: euclidean_pearson
value: 78.79900722991617
- type: euclidean_spearman
value: 77.79996549607588
- type: manhattan_pearson
value: 78.18408109480399
- type: manhattan_spearman
value: 76.85958262303106
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 77.70454545454545
- type: f1
value: 77.6929000113803
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 33.63260395543984
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 27.038042665369925
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 22.139
- type: map_at_10
value: 28.839
- type: map_at_100
value: 30.023
- type: map_at_1000
value: 30.153000000000002
- type: map_at_3
value: 26.521
- type: map_at_5
value: 27.775
- type: mrr_at_1
value: 26.466
- type: mrr_at_10
value: 33.495000000000005
- type: mrr_at_100
value: 34.416999999999994
- type: mrr_at_1000
value: 34.485
- type: mrr_at_3
value: 31.402
- type: mrr_at_5
value: 32.496
- type: ndcg_at_1
value: 26.466
- type: ndcg_at_10
value: 33.372
- type: ndcg_at_100
value: 38.7
- type: ndcg_at_1000
value: 41.696
- type: ndcg_at_3
value: 29.443
- type: ndcg_at_5
value: 31.121
- type: precision_at_1
value: 26.466
- type: precision_at_10
value: 6.037
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 13.782
- type: precision_at_5
value: 9.757
- type: recall_at_1
value: 22.139
- type: recall_at_10
value: 42.39
- type: recall_at_100
value: 65.427
- type: recall_at_1000
value: 86.04899999999999
- type: recall_at_3
value: 31.127
- type: recall_at_5
value: 35.717999999999996
- type: map_at_1
value: 20.652
- type: map_at_10
value: 27.558
- type: map_at_100
value: 28.473
- type: map_at_1000
value: 28.577
- type: map_at_3
value: 25.402
- type: map_at_5
value: 26.68
- type: mrr_at_1
value: 25.223000000000003
- type: mrr_at_10
value: 31.966
- type: mrr_at_100
value: 32.664
- type: mrr_at_1000
value: 32.724
- type: mrr_at_3
value: 30.074
- type: mrr_at_5
value: 31.249
- type: ndcg_at_1
value: 25.223000000000003
- type: ndcg_at_10
value: 31.694
- type: ndcg_at_100
value: 35.662
- type: ndcg_at_1000
value: 38.092
- type: ndcg_at_3
value: 28.294000000000004
- type: ndcg_at_5
value: 30.049
- type: precision_at_1
value: 25.223000000000003
- type: precision_at_10
value: 5.777
- type: precision_at_100
value: 0.9730000000000001
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 13.397
- type: precision_at_5
value: 9.605
- type: recall_at_1
value: 20.652
- type: recall_at_10
value: 39.367999999999995
- type: recall_at_100
value: 56.485
- type: recall_at_1000
value: 73.292
- type: recall_at_3
value: 29.830000000000002
- type: recall_at_5
value: 34.43
- type: map_at_1
value: 25.180000000000003
- type: map_at_10
value: 34.579
- type: map_at_100
value: 35.589999999999996
- type: map_at_1000
value: 35.68
- type: map_at_3
value: 31.735999999999997
- type: map_at_5
value: 33.479
- type: mrr_at_1
value: 29.467
- type: mrr_at_10
value: 37.967
- type: mrr_at_100
value: 38.800000000000004
- type: mrr_at_1000
value: 38.858
- type: mrr_at_3
value: 35.465
- type: mrr_at_5
value: 37.057
- type: ndcg_at_1
value: 29.467
- type: ndcg_at_10
value: 39.796
- type: ndcg_at_100
value: 44.531
- type: ndcg_at_1000
value: 46.666000000000004
- type: ndcg_at_3
value: 34.676
- type: ndcg_at_5
value: 37.468
- type: precision_at_1
value: 29.467
- type: precision_at_10
value: 6.601999999999999
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.568999999999999
- type: precision_at_5
value: 11.172
- type: recall_at_1
value: 25.180000000000003
- type: recall_at_10
value: 52.269
- type: recall_at_100
value: 73.574
- type: recall_at_1000
value: 89.141
- type: recall_at_3
value: 38.522
- type: recall_at_5
value: 45.323
- type: map_at_1
value: 16.303
- type: map_at_10
value: 21.629
- type: map_at_100
value: 22.387999999999998
- type: map_at_1000
value: 22.489
- type: map_at_3
value: 19.608
- type: map_at_5
value: 20.774
- type: mrr_at_1
value: 17.740000000000002
- type: mrr_at_10
value: 23.214000000000002
- type: mrr_at_100
value: 23.97
- type: mrr_at_1000
value: 24.054000000000002
- type: mrr_at_3
value: 21.243000000000002
- type: mrr_at_5
value: 22.322
- type: ndcg_at_1
value: 17.740000000000002
- type: ndcg_at_10
value: 25.113000000000003
- type: ndcg_at_100
value: 29.287999999999997
- type: ndcg_at_1000
value: 32.204
- type: ndcg_at_3
value: 21.111
- type: ndcg_at_5
value: 23.061999999999998
- type: precision_at_1
value: 17.740000000000002
- type: precision_at_10
value: 3.955
- type: precision_at_100
value: 0.644
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 8.851
- type: precision_at_5
value: 6.418
- type: recall_at_1
value: 16.303
- type: recall_at_10
value: 34.487
- type: recall_at_100
value: 54.413999999999994
- type: recall_at_1000
value: 77.158
- type: recall_at_3
value: 23.733
- type: recall_at_5
value: 28.381
- type: map_at_1
value: 10.133000000000001
- type: map_at_10
value: 15.665999999999999
- type: map_at_100
value: 16.592000000000002
- type: map_at_1000
value: 16.733999999999998
- type: map_at_3
value: 13.625000000000002
- type: map_at_5
value: 14.721
- type: mrr_at_1
value: 12.562000000000001
- type: mrr_at_10
value: 18.487000000000002
- type: mrr_at_100
value: 19.391
- type: mrr_at_1000
value: 19.487
- type: mrr_at_3
value: 16.418
- type: mrr_at_5
value: 17.599999999999998
- type: ndcg_at_1
value: 12.562000000000001
- type: ndcg_at_10
value: 19.43
- type: ndcg_at_100
value: 24.546
- type: ndcg_at_1000
value: 28.193
- type: ndcg_at_3
value: 15.509999999999998
- type: ndcg_at_5
value: 17.322000000000003
- type: precision_at_1
value: 12.562000000000001
- type: precision_at_10
value: 3.794
- type: precision_at_100
value: 0.74
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 7.546
- type: precision_at_5
value: 5.721
- type: recall_at_1
value: 10.133000000000001
- type: recall_at_10
value: 28.261999999999997
- type: recall_at_100
value: 51.742999999999995
- type: recall_at_1000
value: 78.075
- type: recall_at_3
value: 17.634
- type: recall_at_5
value: 22.128999999999998
- type: map_at_1
value: 19.991999999999997
- type: map_at_10
value: 27.346999999999998
- type: map_at_100
value: 28.582
- type: map_at_1000
value: 28.716
- type: map_at_3
value: 24.907
- type: map_at_5
value: 26.1
- type: mrr_at_1
value: 23.773
- type: mrr_at_10
value: 31.647
- type: mrr_at_100
value: 32.639
- type: mrr_at_1000
value: 32.706
- type: mrr_at_3
value: 29.195
- type: mrr_at_5
value: 30.484
- type: ndcg_at_1
value: 23.773
- type: ndcg_at_10
value: 32.322
- type: ndcg_at_100
value: 37.996
- type: ndcg_at_1000
value: 40.819
- type: ndcg_at_3
value: 27.876
- type: ndcg_at_5
value: 29.664
- type: precision_at_1
value: 23.773
- type: precision_at_10
value: 5.976999999999999
- type: precision_at_100
value: 1.055
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 13.122
- type: precision_at_5
value: 9.451
- type: recall_at_1
value: 19.991999999999997
- type: recall_at_10
value: 43.106
- type: recall_at_100
value: 67.264
- type: recall_at_1000
value: 86.386
- type: recall_at_3
value: 30.392000000000003
- type: recall_at_5
value: 34.910999999999994
- type: map_at_1
value: 17.896
- type: map_at_10
value: 24.644
- type: map_at_100
value: 25.790000000000003
- type: map_at_1000
value: 25.913999999999998
- type: map_at_3
value: 22.694
- type: map_at_5
value: 23.69
- type: mrr_at_1
value: 21.346999999999998
- type: mrr_at_10
value: 28.594
- type: mrr_at_100
value: 29.543999999999997
- type: mrr_at_1000
value: 29.621
- type: mrr_at_3
value: 26.807
- type: mrr_at_5
value: 27.669
- type: ndcg_at_1
value: 21.346999999999998
- type: ndcg_at_10
value: 28.833
- type: ndcg_at_100
value: 34.272000000000006
- type: ndcg_at_1000
value: 37.355
- type: ndcg_at_3
value: 25.373
- type: ndcg_at_5
value: 26.756
- type: precision_at_1
value: 21.346999999999998
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.954
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 11.948
- type: precision_at_5
value: 8.425
- type: recall_at_1
value: 17.896
- type: recall_at_10
value: 37.291000000000004
- type: recall_at_100
value: 61.138000000000005
- type: recall_at_1000
value: 83.212
- type: recall_at_3
value: 27.705999999999996
- type: recall_at_5
value: 31.234
- type: map_at_1
value: 17.195166666666665
- type: map_at_10
value: 23.329083333333333
- type: map_at_100
value: 24.30308333333333
- type: map_at_1000
value: 24.422416666666667
- type: map_at_3
value: 21.327416666666664
- type: map_at_5
value: 22.419999999999998
- type: mrr_at_1
value: 19.999916666666667
- type: mrr_at_10
value: 26.390166666666666
- type: mrr_at_100
value: 27.230999999999998
- type: mrr_at_1000
value: 27.308333333333334
- type: mrr_at_3
value: 24.4675
- type: mrr_at_5
value: 25.541083333333336
- type: ndcg_at_1
value: 19.999916666666667
- type: ndcg_at_10
value: 27.248666666666665
- type: ndcg_at_100
value: 32.00258333333334
- type: ndcg_at_1000
value: 34.9465
- type: ndcg_at_3
value: 23.58566666666667
- type: ndcg_at_5
value: 25.26341666666666
- type: precision_at_1
value: 19.999916666666667
- type: precision_at_10
value: 4.772166666666666
- type: precision_at_100
value: 0.847
- type: precision_at_1000
value: 0.12741666666666668
- type: precision_at_3
value: 10.756166666666669
- type: precision_at_5
value: 7.725416666666667
- type: recall_at_1
value: 17.195166666666665
- type: recall_at_10
value: 35.99083333333334
- type: recall_at_100
value: 57.467999999999996
- type: recall_at_1000
value: 78.82366666666667
- type: recall_at_3
value: 25.898499999999995
- type: recall_at_5
value: 30.084333333333333
- type: map_at_1
value: 16.779
- type: map_at_10
value: 21.557000000000002
- type: map_at_100
value: 22.338
- type: map_at_1000
value: 22.421
- type: map_at_3
value: 19.939
- type: map_at_5
value: 20.903
- type: mrr_at_1
value: 18.404999999999998
- type: mrr_at_10
value: 23.435
- type: mrr_at_100
value: 24.179000000000002
- type: mrr_at_1000
value: 24.25
- type: mrr_at_3
value: 21.907
- type: mrr_at_5
value: 22.781000000000002
- type: ndcg_at_1
value: 18.404999999999998
- type: ndcg_at_10
value: 24.515
- type: ndcg_at_100
value: 28.721000000000004
- type: ndcg_at_1000
value: 31.259999999999998
- type: ndcg_at_3
value: 21.508
- type: ndcg_at_5
value: 23.01
- type: precision_at_1
value: 18.404999999999998
- type: precision_at_10
value: 3.834
- type: precision_at_100
value: 0.641
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 9.151
- type: precision_at_5
value: 6.503
- type: recall_at_1
value: 16.779
- type: recall_at_10
value: 31.730000000000004
- type: recall_at_100
value: 51.673
- type: recall_at_1000
value: 71.17599999999999
- type: recall_at_3
value: 23.518
- type: recall_at_5
value: 27.230999999999998
- type: map_at_1
value: 9.279
- type: map_at_10
value: 13.822000000000001
- type: map_at_100
value: 14.533
- type: map_at_1000
value: 14.649999999999999
- type: map_at_3
value: 12.396
- type: map_at_5
value: 13.214
- type: mrr_at_1
value: 11.149000000000001
- type: mrr_at_10
value: 16.139
- type: mrr_at_100
value: 16.872
- type: mrr_at_1000
value: 16.964000000000002
- type: mrr_at_3
value: 14.613000000000001
- type: mrr_at_5
value: 15.486
- type: ndcg_at_1
value: 11.149000000000001
- type: ndcg_at_10
value: 16.82
- type: ndcg_at_100
value: 20.73
- type: ndcg_at_1000
value: 23.894000000000002
- type: ndcg_at_3
value: 14.11
- type: ndcg_at_5
value: 15.404000000000002
- type: precision_at_1
value: 11.149000000000001
- type: precision_at_10
value: 3.063
- type: precision_at_100
value: 0.587
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 6.699
- type: precision_at_5
value: 4.928
- type: recall_at_1
value: 9.279
- type: recall_at_10
value: 23.745
- type: recall_at_100
value: 41.873
- type: recall_at_1000
value: 64.982
- type: recall_at_3
value: 16.152
- type: recall_at_5
value: 19.409000000000002
- type: map_at_1
value: 16.36
- type: map_at_10
value: 21.927
- type: map_at_100
value: 22.889
- type: map_at_1000
value: 22.994
- type: map_at_3
value: 20.433
- type: map_at_5
value: 21.337
- type: mrr_at_1
value: 18.75
- type: mrr_at_10
value: 24.859
- type: mrr_at_100
value: 25.746999999999996
- type: mrr_at_1000
value: 25.829
- type: mrr_at_3
value: 23.383000000000003
- type: mrr_at_5
value: 24.297
- type: ndcg_at_1
value: 18.75
- type: ndcg_at_10
value: 25.372
- type: ndcg_at_100
value: 30.342999999999996
- type: ndcg_at_1000
value: 33.286
- type: ndcg_at_3
value: 22.627
- type: ndcg_at_5
value: 24.04
- type: precision_at_1
value: 18.75
- type: precision_at_10
value: 4.1419999999999995
- type: precision_at_100
value: 0.738
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 10.261000000000001
- type: precision_at_5
value: 7.164
- type: recall_at_1
value: 16.36
- type: recall_at_10
value: 32.949
- type: recall_at_100
value: 55.552
- type: recall_at_1000
value: 77.09899999999999
- type: recall_at_3
value: 25.538
- type: recall_at_5
value: 29.008
- type: map_at_1
value: 17.39
- type: map_at_10
value: 23.058
- type: map_at_100
value: 24.445
- type: map_at_1000
value: 24.637999999999998
- type: map_at_3
value: 21.037
- type: map_at_5
value: 21.966
- type: mrr_at_1
value: 19.96
- type: mrr_at_10
value: 26.301000000000002
- type: mrr_at_100
value: 27.297
- type: mrr_at_1000
value: 27.375
- type: mrr_at_3
value: 24.340999999999998
- type: mrr_at_5
value: 25.339
- type: ndcg_at_1
value: 19.96
- type: ndcg_at_10
value: 27.249000000000002
- type: ndcg_at_100
value: 32.997
- type: ndcg_at_1000
value: 36.359
- type: ndcg_at_3
value: 23.519000000000002
- type: ndcg_at_5
value: 24.915000000000003
- type: precision_at_1
value: 19.96
- type: precision_at_10
value: 5.356000000000001
- type: precision_at_100
value: 1.198
- type: precision_at_1000
value: 0.20400000000000001
- type: precision_at_3
value: 10.738
- type: precision_at_5
value: 7.904999999999999
- type: recall_at_1
value: 17.39
- type: recall_at_10
value: 35.254999999999995
- type: recall_at_100
value: 61.351
- type: recall_at_1000
value: 84.395
- type: recall_at_3
value: 25.194
- type: recall_at_5
value: 28.546
- type: map_at_1
value: 14.238999999999999
- type: map_at_10
value: 19.323
- type: map_at_100
value: 19.994
- type: map_at_1000
value: 20.102999999999998
- type: map_at_3
value: 17.631
- type: map_at_5
value: 18.401
- type: mrr_at_1
value: 15.157000000000002
- type: mrr_at_10
value: 20.578
- type: mrr_at_100
value: 21.252
- type: mrr_at_1000
value: 21.346999999999998
- type: mrr_at_3
value: 18.762
- type: mrr_at_5
value: 19.713
- type: ndcg_at_1
value: 15.157000000000002
- type: ndcg_at_10
value: 22.468
- type: ndcg_at_100
value: 26.245
- type: ndcg_at_1000
value: 29.534
- type: ndcg_at_3
value: 18.981
- type: ndcg_at_5
value: 20.349999999999998
- type: precision_at_1
value: 15.157000000000002
- type: precision_at_10
value: 3.512
- type: precision_at_100
value: 0.577
- type: precision_at_1000
value: 0.091
- type: precision_at_3
value: 8.01
- type: precision_at_5
value: 5.656
- type: recall_at_1
value: 14.238999999999999
- type: recall_at_10
value: 31.038
- type: recall_at_100
value: 49.122
- type: recall_at_1000
value: 74.919
- type: recall_at_3
value: 21.436
- type: recall_at_5
value: 24.692
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 8.828
- type: map_at_10
value: 14.982000000000001
- type: map_at_100
value: 16.495
- type: map_at_1000
value: 16.658
- type: map_at_3
value: 12.366000000000001
- type: map_at_5
value: 13.655000000000001
- type: mrr_at_1
value: 19.088
- type: mrr_at_10
value: 29.29
- type: mrr_at_100
value: 30.291
- type: mrr_at_1000
value: 30.342000000000002
- type: mrr_at_3
value: 25.907000000000004
- type: mrr_at_5
value: 27.840999999999998
- type: ndcg_at_1
value: 19.088
- type: ndcg_at_10
value: 21.858
- type: ndcg_at_100
value: 28.323999999999998
- type: ndcg_at_1000
value: 31.561
- type: ndcg_at_3
value: 17.175
- type: ndcg_at_5
value: 18.869
- type: precision_at_1
value: 19.088
- type: precision_at_10
value: 6.9190000000000005
- type: precision_at_100
value: 1.376
- type: precision_at_1000
value: 0.197
- type: precision_at_3
value: 12.703999999999999
- type: precision_at_5
value: 9.993
- type: recall_at_1
value: 8.828
- type: recall_at_10
value: 27.381
- type: recall_at_100
value: 50.0
- type: recall_at_1000
value: 68.355
- type: recall_at_3
value: 16.118
- type: recall_at_5
value: 20.587
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 5.586
- type: map_at_10
value: 10.040000000000001
- type: map_at_100
value: 12.55
- type: map_at_1000
value: 13.123999999999999
- type: map_at_3
value: 7.75
- type: map_at_5
value: 8.835999999999999
- type: mrr_at_1
value: 42.25
- type: mrr_at_10
value: 51.205999999999996
- type: mrr_at_100
value: 51.818
- type: mrr_at_1000
value: 51.855
- type: mrr_at_3
value: 48.875
- type: mrr_at_5
value: 50.488
- type: ndcg_at_1
value: 32.25
- type: ndcg_at_10
value: 22.718
- type: ndcg_at_100
value: 24.359
- type: ndcg_at_1000
value: 29.232000000000003
- type: ndcg_at_3
value: 25.974000000000004
- type: ndcg_at_5
value: 24.291999999999998
- type: precision_at_1
value: 42.25
- type: precision_at_10
value: 17.75
- type: precision_at_100
value: 5.032
- type: precision_at_1000
value: 1.117
- type: precision_at_3
value: 28.833
- type: precision_at_5
value: 24.25
- type: recall_at_1
value: 5.586
- type: recall_at_10
value: 14.16
- type: recall_at_100
value: 28.051
- type: recall_at_1000
value: 45.157000000000004
- type: recall_at_3
value: 8.758000000000001
- type: recall_at_5
value: 10.975999999999999
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 39.075
- type: f1
value: 35.01420354708222
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 43.519999999999996
- type: map_at_10
value: 54.368
- type: map_at_100
value: 54.918
- type: map_at_1000
value: 54.942
- type: map_at_3
value: 51.712
- type: map_at_5
value: 53.33599999999999
- type: mrr_at_1
value: 46.955000000000005
- type: mrr_at_10
value: 58.219
- type: mrr_at_100
value: 58.73500000000001
- type: mrr_at_1000
value: 58.753
- type: mrr_at_3
value: 55.518
- type: mrr_at_5
value: 57.191
- type: ndcg_at_1
value: 46.955000000000005
- type: ndcg_at_10
value: 60.45
- type: ndcg_at_100
value: 63.047
- type: ndcg_at_1000
value: 63.712999999999994
- type: ndcg_at_3
value: 55.233
- type: ndcg_at_5
value: 58.072
- type: precision_at_1
value: 46.955000000000005
- type: precision_at_10
value: 8.267
- type: precision_at_100
value: 0.962
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 22.326999999999998
- type: precision_at_5
value: 14.940999999999999
- type: recall_at_1
value: 43.519999999999996
- type: recall_at_10
value: 75.632
- type: recall_at_100
value: 87.41600000000001
- type: recall_at_1000
value: 92.557
- type: recall_at_3
value: 61.597
- type: recall_at_5
value: 68.518
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 9.549000000000001
- type: map_at_10
value: 15.762
- type: map_at_100
value: 17.142
- type: map_at_1000
value: 17.329
- type: map_at_3
value: 13.575000000000001
- type: map_at_5
value: 14.754000000000001
- type: mrr_at_1
value: 19.753
- type: mrr_at_10
value: 26.568
- type: mrr_at_100
value: 27.606
- type: mrr_at_1000
value: 27.68
- type: mrr_at_3
value: 24.203
- type: mrr_at_5
value: 25.668999999999997
- type: ndcg_at_1
value: 19.753
- type: ndcg_at_10
value: 21.118000000000002
- type: ndcg_at_100
value: 27.308
- type: ndcg_at_1000
value: 31.304
- type: ndcg_at_3
value: 18.319
- type: ndcg_at_5
value: 19.414
- type: precision_at_1
value: 19.753
- type: precision_at_10
value: 6.08
- type: precision_at_100
value: 1.204
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 12.191
- type: precision_at_5
value: 9.383
- type: recall_at_1
value: 9.549000000000001
- type: recall_at_10
value: 26.131
- type: recall_at_100
value: 50.544999999999995
- type: recall_at_1000
value: 74.968
- type: recall_at_3
value: 16.951
- type: recall_at_5
value: 20.95
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 25.544
- type: map_at_10
value: 32.62
- type: map_at_100
value: 33.275
- type: map_at_1000
value: 33.344
- type: map_at_3
value: 30.851
- type: map_at_5
value: 31.868999999999996
- type: mrr_at_1
value: 51.087
- type: mrr_at_10
value: 57.704
- type: mrr_at_100
value: 58.175
- type: mrr_at_1000
value: 58.207
- type: mrr_at_3
value: 56.106
- type: mrr_at_5
value: 57.074000000000005
- type: ndcg_at_1
value: 51.087
- type: ndcg_at_10
value: 40.876000000000005
- type: ndcg_at_100
value: 43.762
- type: ndcg_at_1000
value: 45.423
- type: ndcg_at_3
value: 37.65
- type: ndcg_at_5
value: 39.305
- type: precision_at_1
value: 51.087
- type: precision_at_10
value: 8.304
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 22.875999999999998
- type: precision_at_5
value: 15.033
- type: recall_at_1
value: 25.544
- type: recall_at_10
value: 41.519
- type: recall_at_100
value: 52.957
- type: recall_at_1000
value: 64.132
- type: recall_at_3
value: 34.315
- type: recall_at_5
value: 37.583
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 58.6696
- type: ap
value: 55.3644880984279
- type: f1
value: 58.07942097405652
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 14.442
- type: map_at_10
value: 22.932
- type: map_at_100
value: 24.132
- type: map_at_1000
value: 24.213
- type: map_at_3
value: 20.002
- type: map_at_5
value: 21.636
- type: mrr_at_1
value: 14.841999999999999
- type: mrr_at_10
value: 23.416
- type: mrr_at_100
value: 24.593999999999998
- type: mrr_at_1000
value: 24.669
- type: mrr_at_3
value: 20.494
- type: mrr_at_5
value: 22.14
- type: ndcg_at_1
value: 14.841999999999999
- type: ndcg_at_10
value: 27.975
- type: ndcg_at_100
value: 34.143
- type: ndcg_at_1000
value: 36.370000000000005
- type: ndcg_at_3
value: 21.944
- type: ndcg_at_5
value: 24.881
- type: precision_at_1
value: 14.841999999999999
- type: precision_at_10
value: 4.537
- type: precision_at_100
value: 0.767
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 9.322
- type: precision_at_5
value: 7.074
- type: recall_at_1
value: 14.442
- type: recall_at_10
value: 43.557
- type: recall_at_100
value: 72.904
- type: recall_at_1000
value: 90.40700000000001
- type: recall_at_3
value: 27.088
- type: recall_at_5
value: 34.144000000000005
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 86.95622435020519
- type: f1
value: 86.58363130708494
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 62.73034657650043
- type: f1
value: 60.78623915840713
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 67.54503002001334
- type: f1
value: 65.34879794116112
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 65.35233322893829
- type: f1
value: 62.994001882446646
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 45.37110075295806
- type: f1
value: 44.26285860740745
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 55.276672694394215
- type: f1
value: 53.28388179869587
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 62.25262197902417
- type: f1
value: 43.44084037148853
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 49.56043956043956
- type: f1
value: 32.86333673498598
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 49.93995997331555
- type: f1
value: 34.726671876888126
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 46.32947071719386
- type: f1
value: 32.325273615982795
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 32.208676945141626
- type: f1
value: 21.32185122815139
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 43.627486437613015
- type: f1
value: 27.04872922347508
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.548083389374575
- type: f1
value: 39.490307545239716
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 24.18291862811029
- type: f1
value: 23.437620034727473
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 30.134498991257562
- type: f1
value: 28.787175191531283
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.88433086751849
- type: f1
value: 36.264500398782126
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 29.17283120376597
- type: f1
value: 27.8101616531901
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.788836583725626
- type: f1
value: 39.71413181054801
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 44.176193678547406
- type: f1
value: 42.192499826552286
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.07464694014795
- type: f1
value: 39.44188259183162
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 36.254203093476804
- type: f1
value: 34.46592715936761
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 61.40887693342301
- type: f1
value: 59.79854802683996
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.679892400807
- type: f1
value: 42.04801248338172
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.59179556153329
- type: f1
value: 34.045862930486166
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.036987222595826
- type: f1
value: 38.117703439362785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.43981170141224
- type: f1
value: 42.7084388987865
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 31.593813046402154
- type: f1
value: 29.98550522450782
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 27.044384667114997
- type: f1
value: 27.313059184832667
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.453261600538
- type: f1
value: 37.309189326110435
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 27.979152656355076
- type: f1
value: 27.430939684346445
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.97108271687963
- type: f1
value: 43.40585705688761
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.302622730329524
- type: f1
value: 39.108052180520744
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.474108944182916
- type: f1
value: 45.85950328241134
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.60860793544048
- type: f1
value: 43.94920708216737
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.668459986550104
- type: f1
value: 37.6990034018859
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.6523201075992
- type: f1
value: 25.279084273189582
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 28.295225285810353
- type: f1
value: 26.645825638771548
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 23.480161398789505
- type: f1
value: 22.275241866506732
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 36.55682582380632
- type: f1
value: 36.004753171063605
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.84936112979153
- type: f1
value: 41.38932672359119
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 24.90921318090114
- type: f1
value: 23.968687483768807
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 29.86213853396099
- type: f1
value: 29.977152075255407
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.42098184263618
- type: f1
value: 41.50877432664628
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.131136516476126
- type: f1
value: 23.938932214086776
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 39.81506388702084
- type: f1
value: 38.809586587791664
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.62138533960995
- type: f1
value: 42.01386842914633
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.19569603227976
- type: f1
value: 40.00556559825827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.20847343644923
- type: f1
value: 44.24115005029051
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.80901143241426
- type: f1
value: 40.474074848670085
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.96839273705447
- type: f1
value: 35.095456843621
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.60524546065905
- type: f1
value: 39.302383051500136
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.75722932078009
- type: f1
value: 41.53763931497389
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.347007397444514
- type: f1
value: 41.04366017948627
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.12306657700067
- type: f1
value: 39.712940473289024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 24.603227975790183
- type: f1
value: 23.969236788828606
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.03698722259583
- type: f1
value: 24.37196123281459
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.40013449899126
- type: f1
value: 35.063600413688036
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.19031607262945
- type: f1
value: 40.240432304273014
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 36.405514458641555
- type: f1
value: 36.03844992856558
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.934767989240076
- type: f1
value: 25.2074457023531
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.79959650302622
- type: f1
value: 37.160233794673125
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 46.244115669132476
- type: f1
value: 44.367480561291906
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.30665770006724
- type: f1
value: 41.9642223283514
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.2481506388702
- type: f1
value: 40.924230769590785
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 25.30262273032952
- type: f1
value: 24.937105830264066
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.07128446536651
- type: f1
value: 31.80245816594883
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.681237390719566
- type: f1
value: 36.37219042508338
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.56624075319435
- type: f1
value: 28.386042056362758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.1049092131809
- type: f1
value: 38.926150886991294
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.44384667114997
- type: f1
value: 42.578252395460005
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.211163416274374
- type: f1
value: 41.04465858304789
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.503026227303295
- type: f1
value: 34.49785095312759
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.73772696704773
- type: f1
value: 69.21759502909043
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.078681909885674
- type: f1
value: 43.05914426901129
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.61264290517821
- type: f1
value: 32.02463177462754
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.35642232683255
- type: f1
value: 38.13642481807678
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.06724949562878
- type: f1
value: 43.19827608343738
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.178883658372555
- type: f1
value: 29.979761884698775
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 26.903160726294555
- type: f1
value: 25.833010434083363
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.379959650302624
- type: f1
value: 37.93134355292882
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.375924680564896
- type: f1
value: 26.96255693013172
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.361129791526565
- type: f1
value: 43.54445012295126
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.290517821116346
- type: f1
value: 37.26982052174147
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.4694014794889
- type: f1
value: 44.060986162841566
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.25756556825824
- type: f1
value: 45.625139456758816
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.12642905178212
- type: f1
value: 39.54392378396527
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 24.72763954270343
- type: f1
value: 23.337743140804484
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.741089441829182
- type: f1
value: 27.570876190083748
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 23.850033624747816
- type: f1
value: 22.86733484540032
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.56691324815064
- type: f1
value: 35.504081677134565
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.928043039677206
- type: f1
value: 39.108589131211254
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 25.527908540685946
- type: f1
value: 25.333391622280477
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.105581708137183
- type: f1
value: 28.478235012692814
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.78614660390047
- type: f1
value: 41.9640143926267
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 27.269670477471415
- type: f1
value: 26.228386764141852
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.018157363819775
- type: f1
value: 37.641949339321854
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.35978480161399
- type: f1
value: 42.6851176096831
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.89307330195023
- type: f1
value: 40.888710642615024
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.901143241425686
- type: f1
value: 44.496942353920545
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.11566913248151
- type: f1
value: 41.953945105870616
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.76395427034297
- type: f1
value: 31.436372571600934
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.504371217215876
- type: f1
value: 39.322752749628165
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.51849361129792
- type: f1
value: 41.4139297118463
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.293207800941495
- type: f1
value: 40.50409536806683
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.9993275050437
- type: f1
value: 41.045416224973266
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.32548755884331
- type: f1
value: 27.276841995561867
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 26.593813046402154
- type: f1
value: 25.483878616197586
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.788836583725626
- type: f1
value: 34.603932909177686
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.5689307330195
- type: f1
value: 40.924469309079825
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 37.09482178883658
- type: f1
value: 37.949628822857164
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.836583725622063
- type: f1
value: 27.806558655512344
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 37.357094821788834
- type: f1
value: 37.507918961038165
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 49.37794216543375
- type: f1
value: 47.20421153697707
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.42165433759248
- type: f1
value: 44.34741861198931
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 31.374938993074252
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 26.871455379644093
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.402396942935333
- type: mrr
value: 31.42600938803256
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 3.7740000000000005
- type: map_at_10
value: 7.614999999999999
- type: map_at_100
value: 9.574
- type: map_at_1000
value: 10.711
- type: map_at_3
value: 5.7540000000000004
- type: map_at_5
value: 6.6659999999999995
- type: mrr_at_1
value: 33.127
- type: mrr_at_10
value: 40.351
- type: mrr_at_100
value: 41.144
- type: mrr_at_1000
value: 41.202
- type: mrr_at_3
value: 38.029
- type: mrr_at_5
value: 39.190000000000005
- type: ndcg_at_1
value: 31.579
- type: ndcg_at_10
value: 22.792
- type: ndcg_at_100
value: 21.698999999999998
- type: ndcg_at_1000
value: 30.892999999999997
- type: ndcg_at_3
value: 26.828999999999997
- type: ndcg_at_5
value: 25.119000000000003
- type: precision_at_1
value: 33.127
- type: precision_at_10
value: 16.718
- type: precision_at_100
value: 5.7090000000000005
- type: precision_at_1000
value: 1.836
- type: precision_at_3
value: 24.768
- type: precision_at_5
value: 21.3
- type: recall_at_1
value: 3.7740000000000005
- type: recall_at_10
value: 10.302999999999999
- type: recall_at_100
value: 23.013
- type: recall_at_1000
value: 54.864999999999995
- type: recall_at_3
value: 6.554
- type: recall_at_5
value: 8.087
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 15.620999999999999
- type: map_at_10
value: 24.519
- type: map_at_100
value: 25.586
- type: map_at_1000
value: 25.662000000000003
- type: map_at_3
value: 21.619
- type: map_at_5
value: 23.232
- type: mrr_at_1
value: 17.497
- type: mrr_at_10
value: 26.301000000000002
- type: mrr_at_100
value: 27.235
- type: mrr_at_1000
value: 27.297
- type: mrr_at_3
value: 23.561
- type: mrr_at_5
value: 25.111
- type: ndcg_at_1
value: 17.497
- type: ndcg_at_10
value: 29.725
- type: ndcg_at_100
value: 34.824
- type: ndcg_at_1000
value: 36.907000000000004
- type: ndcg_at_3
value: 23.946
- type: ndcg_at_5
value: 26.739
- type: precision_at_1
value: 17.497
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.8099999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 11.114
- type: precision_at_5
value: 8.285
- type: recall_at_1
value: 15.620999999999999
- type: recall_at_10
value: 43.999
- type: recall_at_100
value: 67.183
- type: recall_at_1000
value: 83.174
- type: recall_at_3
value: 28.720000000000002
- type: recall_at_5
value: 35.154
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 54.717000000000006
- type: map_at_10
value: 67.514
- type: map_at_100
value: 68.484
- type: map_at_1000
value: 68.523
- type: map_at_3
value: 64.169
- type: map_at_5
value: 66.054
- type: mrr_at_1
value: 62.46000000000001
- type: mrr_at_10
value: 71.503
- type: mrr_at_100
value: 71.91499999999999
- type: mrr_at_1000
value: 71.923
- type: mrr_at_3
value: 69.46799999999999
- type: mrr_at_5
value: 70.677
- type: ndcg_at_1
value: 62.480000000000004
- type: ndcg_at_10
value: 72.98
- type: ndcg_at_100
value: 76.023
- type: ndcg_at_1000
value: 76.512
- type: ndcg_at_3
value: 68.138
- type: ndcg_at_5
value: 70.458
- type: precision_at_1
value: 62.480000000000004
- type: precision_at_10
value: 11.373
- type: precision_at_100
value: 1.437
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 29.622999999999998
- type: precision_at_5
value: 19.918
- type: recall_at_1
value: 54.717000000000006
- type: recall_at_10
value: 84.745
- type: recall_at_100
value: 96.528
- type: recall_at_1000
value: 99.39
- type: recall_at_3
value: 71.60600000000001
- type: recall_at_5
value: 77.511
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 40.23390747226228
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 49.090518272935626
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 3.028
- type: map_at_10
value: 6.968000000000001
- type: map_at_100
value: 8.200000000000001
- type: map_at_1000
value: 8.432
- type: map_at_3
value: 5.3069999999999995
- type: map_at_5
value: 6.099
- type: mrr_at_1
value: 14.799999999999999
- type: mrr_at_10
value: 22.425
- type: mrr_at_100
value: 23.577
- type: mrr_at_1000
value: 23.669999999999998
- type: mrr_at_3
value: 20.233
- type: mrr_at_5
value: 21.318
- type: ndcg_at_1
value: 14.799999999999999
- type: ndcg_at_10
value: 12.206
- type: ndcg_at_100
value: 17.799
- type: ndcg_at_1000
value: 22.891000000000002
- type: ndcg_at_3
value: 12.128
- type: ndcg_at_5
value: 10.212
- type: precision_at_1
value: 14.799999999999999
- type: precision_at_10
value: 6.17
- type: precision_at_100
value: 1.428
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 11.333
- type: precision_at_5
value: 8.74
- type: recall_at_1
value: 3.028
- type: recall_at_10
value: 12.522
- type: recall_at_100
value: 28.975
- type: recall_at_1000
value: 54.038
- type: recall_at_3
value: 6.912999999999999
- type: recall_at_5
value: 8.883000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 76.62983928119752
- type: cos_sim_spearman
value: 65.92910683118656
- type: euclidean_pearson
value: 71.10290039690963
- type: euclidean_spearman
value: 64.80076622426652
- type: manhattan_pearson
value: 70.8944726230188
- type: manhattan_spearman
value: 64.75082576033986
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 74.42679147085553
- type: cos_sim_spearman
value: 66.52980061546658
- type: euclidean_pearson
value: 74.87039477408763
- type: euclidean_spearman
value: 70.63397666902786
- type: manhattan_pearson
value: 74.97015137513088
- type: manhattan_spearman
value: 70.75951355434326
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 75.62472426599543
- type: cos_sim_spearman
value: 76.1662886374236
- type: euclidean_pearson
value: 76.3297128081315
- type: euclidean_spearman
value: 77.19385151966563
- type: manhattan_pearson
value: 76.50363291423257
- type: manhattan_spearman
value: 77.37081896355399
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 74.48227705407035
- type: cos_sim_spearman
value: 69.04572664009687
- type: euclidean_pearson
value: 71.76138185714849
- type: euclidean_spearman
value: 68.93415452043307
- type: manhattan_pearson
value: 71.68010915543306
- type: manhattan_spearman
value: 68.99176321262806
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 78.1566527175902
- type: cos_sim_spearman
value: 79.23677712825851
- type: euclidean_pearson
value: 76.29138438696417
- type: euclidean_spearman
value: 77.20108266215374
- type: manhattan_pearson
value: 76.27464935799118
- type: manhattan_spearman
value: 77.15286174478099
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 75.068454465977
- type: cos_sim_spearman
value: 76.06792422441929
- type: euclidean_pearson
value: 70.64605440627699
- type: euclidean_spearman
value: 70.21776051117844
- type: manhattan_pearson
value: 70.32479295054918
- type: manhattan_spearman
value: 69.89782458638528
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 39.43327289939437
- type: cos_sim_spearman
value: 52.386010275505654
- type: euclidean_pearson
value: 46.40999904885745
- type: euclidean_spearman
value: 51.00333465175934
- type: manhattan_pearson
value: 46.55753533133655
- type: manhattan_spearman
value: 51.07550440519388
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 55.54431928210687
- type: cos_sim_spearman
value: 55.61674586076298
- type: euclidean_pearson
value: 58.07442713714088
- type: euclidean_spearman
value: 55.74066216931719
- type: manhattan_pearson
value: 57.84021675638542
- type: manhattan_spearman
value: 55.20365812536853
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 11.378463868809098
- type: cos_sim_spearman
value: 8.209569244801065
- type: euclidean_pearson
value: 1.07041700730406
- type: euclidean_spearman
value: 2.2052197108931892
- type: manhattan_pearson
value: 0.7671300251104268
- type: manhattan_spearman
value: 3.430645020535567
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 32.71403560929013
- type: cos_sim_spearman
value: 30.18181775929109
- type: euclidean_pearson
value: 25.57368595910298
- type: euclidean_spearman
value: 23.316649115731376
- type: manhattan_pearson
value: 24.144200325329614
- type: manhattan_spearman
value: 21.64621546338457
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 83.36340470799158
- type: cos_sim_spearman
value: 84.95398260629699
- type: euclidean_pearson
value: 80.69876969911644
- type: euclidean_spearman
value: 80.97451731130427
- type: manhattan_pearson
value: 80.65869354146945
- type: manhattan_spearman
value: 80.8540858718528
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 1.9200044163754912
- type: cos_sim_spearman
value: 1.0393399782021342
- type: euclidean_pearson
value: 1.1376003191297994
- type: euclidean_spearman
value: 1.8947106671763914
- type: manhattan_pearson
value: 3.8362564474484335
- type: manhattan_spearman
value: 4.242750882792888
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 26.561262451099577
- type: cos_sim_spearman
value: 28.776666666659906
- type: euclidean_pearson
value: 14.640410196999088
- type: euclidean_spearman
value: 16.10557011701786
- type: manhattan_pearson
value: 15.019405495911272
- type: manhattan_spearman
value: 15.37192083104197
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 69.7544202001433
- type: cos_sim_spearman
value: 71.88444295144646
- type: euclidean_pearson
value: 73.84934185952773
- type: euclidean_spearman
value: 73.26911108021089
- type: manhattan_pearson
value: 74.04354196954574
- type: manhattan_spearman
value: 73.37650787943872
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 27.70511842301491
- type: cos_sim_spearman
value: 26.339466714066447
- type: euclidean_pearson
value: 9.323158236506385
- type: euclidean_spearman
value: 7.32083231520273
- type: manhattan_pearson
value: 7.807399527573071
- type: manhattan_spearman
value: 5.525546663067113
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 24.226521799447692
- type: cos_sim_spearman
value: 20.72992940458968
- type: euclidean_pearson
value: 6.753378617205011
- type: euclidean_spearman
value: 6.281654679029505
- type: manhattan_pearson
value: 7.087180250449323
- type: manhattan_spearman
value: 6.41611659259516
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 29.131412364061234
- type: cos_sim_spearman
value: 25.053429612793547
- type: euclidean_pearson
value: 10.657141303962
- type: euclidean_spearman
value: 9.712124819778452
- type: manhattan_pearson
value: 12.481782693315688
- type: manhattan_spearman
value: 11.287958480905973
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 64.04750650962879
- type: cos_sim_spearman
value: 65.66183708171826
- type: euclidean_pearson
value: 66.90887604405887
- type: euclidean_spearman
value: 66.89814072484552
- type: manhattan_pearson
value: 67.31627110509089
- type: manhattan_spearman
value: 67.01048176165322
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 19.26519187000913
- type: cos_sim_spearman
value: 21.987647321429005
- type: euclidean_pearson
value: 17.850618752342946
- type: euclidean_spearman
value: 22.86669392885474
- type: manhattan_pearson
value: 18.16183594260708
- type: manhattan_spearman
value: 23.637510352837907
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 34.221261828226936
- type: cos_sim_spearman
value: 49.811823238907664
- type: euclidean_pearson
value: 44.50394399762147
- type: euclidean_spearman
value: 50.959184495072876
- type: manhattan_pearson
value: 45.83191034038624
- type: manhattan_spearman
value: 50.190409866117946
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 3.620381732096531
- type: cos_sim_spearman
value: 23.30843951799194
- type: euclidean_pearson
value: 0.965453312113125
- type: euclidean_spearman
value: 24.235967620790316
- type: manhattan_pearson
value: 1.4408922275701606
- type: manhattan_spearman
value: 25.161920137046096
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 16.69489628726267
- type: cos_sim_spearman
value: 34.66348380997687
- type: euclidean_pearson
value: 29.415825529188606
- type: euclidean_spearman
value: 38.33011033170646
- type: manhattan_pearson
value: 31.23273195263394
- type: manhattan_spearman
value: 39.10055785755795
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 9.134927430889528
- type: cos_sim_spearman
value: 28.18922448944151
- type: euclidean_pearson
value: 19.86814169549051
- type: euclidean_spearman
value: 27.519588644948627
- type: manhattan_pearson
value: 21.80949221238945
- type: manhattan_spearman
value: 28.25217200494078
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 3.6386482942352085
- type: cos_sim_spearman
value: 9.068119621940966
- type: euclidean_pearson
value: 0.8123129118737714
- type: euclidean_spearman
value: 9.173672890166147
- type: manhattan_pearson
value: 0.754518899822658
- type: manhattan_spearman
value: 8.431719541986524
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 2.972091574908432
- type: cos_sim_spearman
value: 25.48511383289232
- type: euclidean_pearson
value: 12.751569670148918
- type: euclidean_spearman
value: 24.940721642439286
- type: manhattan_pearson
value: 14.310238482989826
- type: manhattan_spearman
value: 24.69821216148647
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 54.4745185734135
- type: cos_sim_spearman
value: 67.66493409568727
- type: euclidean_pearson
value: 60.13580336797049
- type: euclidean_spearman
value: 66.12319300814538
- type: manhattan_pearson
value: 60.816210368708155
- type: manhattan_spearman
value: 65.70010026716766
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 49.37865412588201
- type: cos_sim_spearman
value: 53.07135629778897
- type: euclidean_pearson
value: 49.29201416711091
- type: euclidean_spearman
value: 50.54523702399645
- type: manhattan_pearson
value: 51.265764141268534
- type: manhattan_spearman
value: 51.979086403193605
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 44.925652392562135
- type: cos_sim_spearman
value: 49.51253904767726
- type: euclidean_pearson
value: 48.79346518897415
- type: euclidean_spearman
value: 51.47957870101565
- type: manhattan_pearson
value: 49.51314553898044
- type: manhattan_spearman
value: 51.895207893189166
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 45.241690321111875
- type: cos_sim_spearman
value: 48.24795739512037
- type: euclidean_pearson
value: 49.22719494399897
- type: euclidean_spearman
value: 49.64102442042809
- type: manhattan_pearson
value: 49.497887732970256
- type: manhattan_spearman
value: 49.940515338096304
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 36.42138324083909
- type: cos_sim_spearman
value: 36.79867489417801
- type: euclidean_pearson
value: 27.760612942610084
- type: euclidean_spearman
value: 29.140966500287625
- type: manhattan_pearson
value: 28.456674031350115
- type: manhattan_spearman
value: 27.46356370924497
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 26.55350664089358
- type: cos_sim_spearman
value: 28.681707196975008
- type: euclidean_pearson
value: 12.613577889195138
- type: euclidean_spearman
value: 13.589493311702933
- type: manhattan_pearson
value: 11.640157427420958
- type: manhattan_spearman
value: 10.345223941212415
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 38.54682179114309
- type: cos_sim_spearman
value: 45.782560880405704
- type: euclidean_pearson
value: 46.496857002368486
- type: euclidean_spearman
value: 48.21270426410012
- type: manhattan_pearson
value: 46.871839119374044
- type: manhattan_spearman
value: 47.556987773851525
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 35.12956772546032
- type: cos_sim_spearman
value: 32.96920218281008
- type: euclidean_pearson
value: 34.23140384382136
- type: euclidean_spearman
value: 32.19303153191447
- type: manhattan_pearson
value: 34.189468276600635
- type: manhattan_spearman
value: 34.887065709732376
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 30.507667380509634
- type: cos_sim_spearman
value: 20.447284723752716
- type: euclidean_pearson
value: 29.662041381794474
- type: euclidean_spearman
value: 20.939990379746757
- type: manhattan_pearson
value: 32.5112080506328
- type: manhattan_spearman
value: 23.773047901712495
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 71.10820459712156
- type: cos_sim_spearman
value: 61.97797868009122
- type: euclidean_pearson
value: 60.30910689156633
- type: euclidean_spearman
value: 61.97797868009122
- type: manhattan_pearson
value: 66.3405176964038
- type: manhattan_spearman
value: 61.97797868009122
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 76.53032504460737
- type: cos_sim_spearman
value: 75.33716094627373
- type: euclidean_pearson
value: 69.64662673290599
- type: euclidean_spearman
value: 67.30188896368857
- type: manhattan_pearson
value: 69.45096082050807
- type: manhattan_spearman
value: 67.0718727259371
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 71.33941904192648
- type: mrr
value: 89.73766429648782
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 43.333
- type: map_at_10
value: 52.364
- type: map_at_100
value: 53.184
- type: map_at_1000
value: 53.234
- type: map_at_3
value: 49.832
- type: map_at_5
value: 51.244
- type: mrr_at_1
value: 45.333
- type: mrr_at_10
value: 53.455
- type: mrr_at_100
value: 54.191
- type: mrr_at_1000
value: 54.235
- type: mrr_at_3
value: 51.556000000000004
- type: mrr_at_5
value: 52.622
- type: ndcg_at_1
value: 45.333
- type: ndcg_at_10
value: 56.899
- type: ndcg_at_100
value: 60.702
- type: ndcg_at_1000
value: 62.046
- type: ndcg_at_3
value: 52.451
- type: ndcg_at_5
value: 54.534000000000006
- type: precision_at_1
value: 45.333
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 20.778
- type: precision_at_5
value: 13.866999999999999
- type: recall_at_1
value: 43.333
- type: recall_at_10
value: 69.69999999999999
- type: recall_at_100
value: 86.9
- type: recall_at_1000
value: 97.6
- type: recall_at_3
value: 57.81699999999999
- type: recall_at_5
value: 62.827999999999996
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.7
- type: cos_sim_ap
value: 89.88577913120001
- type: cos_sim_f1
value: 84.62694041061593
- type: cos_sim_precision
value: 84.7542627883651
- type: cos_sim_recall
value: 84.5
- type: dot_accuracy
value: 99.24752475247524
- type: dot_ap
value: 56.81855467290009
- type: dot_f1
value: 56.084126189283936
- type: dot_precision
value: 56.16850551654965
- type: dot_recall
value: 56.00000000000001
- type: euclidean_accuracy
value: 99.7059405940594
- type: euclidean_ap
value: 90.12451226491524
- type: euclidean_f1
value: 84.44211629125196
- type: euclidean_precision
value: 88.66886688668868
- type: euclidean_recall
value: 80.60000000000001
- type: manhattan_accuracy
value: 99.7128712871287
- type: manhattan_ap
value: 90.67590584183216
- type: manhattan_f1
value: 84.85436893203884
- type: manhattan_precision
value: 82.45283018867924
- type: manhattan_recall
value: 87.4
- type: max_accuracy
value: 99.7128712871287
- type: max_ap
value: 90.67590584183216
- type: max_f1
value: 84.85436893203884
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 52.74481093815175
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 32.65999453562101
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 44.74498464555465
- type: mrr
value: 45.333879764026825
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
      value: 29.603788751645216
- type: cos_sim_spearman
value: 29.705103354786033
- type: dot_pearson
value: 28.07425338095399
- type: dot_spearman
value: 26.841406359135366
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.241
- type: map_at_10
value: 1.672
- type: map_at_100
value: 7.858999999999999
- type: map_at_1000
value: 17.616
- type: map_at_3
value: 0.631
- type: map_at_5
value: 0.968
- type: mrr_at_1
value: 90.0
- type: mrr_at_10
value: 92.952
- type: mrr_at_100
value: 93.036
- type: mrr_at_1000
value: 93.036
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 92.667
- type: ndcg_at_1
value: 83.0
- type: ndcg_at_10
value: 70.30199999999999
- type: ndcg_at_100
value: 48.149
- type: ndcg_at_1000
value: 40.709
- type: ndcg_at_3
value: 79.173
- type: ndcg_at_5
value: 75.347
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 72.6
- type: precision_at_100
value: 48.46
- type: precision_at_1000
value: 18.093999999999998
- type: precision_at_3
value: 84.0
- type: precision_at_5
value: 78.8
- type: recall_at_1
value: 0.241
- type: recall_at_10
value: 1.814
- type: recall_at_100
value: 11.141
- type: recall_at_1000
value: 37.708999999999996
- type: recall_at_3
value: 0.647
- type: recall_at_5
value: 1.015
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 2.782
- type: map_at_10
value: 9.06
- type: map_at_100
value: 14.571000000000002
- type: map_at_1000
value: 16.006999999999998
- type: map_at_3
value: 5.037
- type: map_at_5
value: 6.63
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 48.243
- type: mrr_at_100
value: 49.065
- type: mrr_at_1000
value: 49.065
- type: mrr_at_3
value: 44.897999999999996
- type: mrr_at_5
value: 46.428999999999995
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 22.972
- type: ndcg_at_100
value: 34.777
- type: ndcg_at_1000
value: 45.639
- type: ndcg_at_3
value: 26.398
- type: ndcg_at_5
value: 24.418
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.224
- type: precision_at_1000
value: 1.4449999999999998
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 23.265
- type: recall_at_1
value: 2.782
- type: recall_at_10
value: 14.841
- type: recall_at_100
value: 44.86
- type: recall_at_1000
value: 78.227
- type: recall_at_3
value: 5.959
- type: recall_at_5
value: 8.969000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 62.657999999999994
- type: ap
value: 10.96353161716344
- type: f1
value: 48.294226423442645
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 52.40803621958121
- type: f1
value: 52.61009636022186
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 32.12697126747911
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 80.69976753889253
- type: cos_sim_ap
value: 54.74680676121268
- type: cos_sim_f1
value: 53.18923998590391
- type: cos_sim_precision
value: 47.93563413084904
- type: cos_sim_recall
value: 59.73614775725594
- type: dot_accuracy
value: 79.3348036001669
- type: dot_ap
value: 48.46902128933627
- type: dot_f1
value: 50.480109739369006
- type: dot_precision
value: 42.06084051345173
- type: dot_recall
value: 63.113456464379944
- type: euclidean_accuracy
value: 79.78780473266973
- type: euclidean_ap
value: 50.258327255164815
- type: euclidean_f1
value: 49.655838666827684
- type: euclidean_precision
value: 45.78044978846582
- type: euclidean_recall
value: 54.24802110817942
- type: manhattan_accuracy
value: 79.76992310901831
- type: manhattan_ap
value: 49.89892485714363
- type: manhattan_f1
value: 49.330433787341185
- type: manhattan_precision
value: 43.56175459874672
- type: manhattan_recall
value: 56.86015831134564
- type: max_accuracy
value: 80.69976753889253
- type: max_ap
value: 54.74680676121268
- type: max_f1
value: 53.18923998590391
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 86.90573213800597
- type: cos_sim_ap
value: 81.05760818661524
- type: cos_sim_f1
value: 73.64688856729379
- type: cos_sim_precision
value: 69.46491946491946
- type: cos_sim_recall
value: 78.3646442870342
- type: dot_accuracy
value: 83.80680715644041
- type: dot_ap
value: 72.49774005947461
- type: dot_f1
value: 68.68460650173216
- type: dot_precision
value: 62.954647507858105
- type: dot_recall
value: 75.56205728364644
- type: euclidean_accuracy
value: 85.97430822369697
- type: euclidean_ap
value: 78.86101740829326
- type: euclidean_f1
value: 71.07960824663695
- type: euclidean_precision
value: 70.36897306270279
- type: euclidean_recall
value: 71.8047428395442
- type: manhattan_accuracy
value: 85.94132029339853
- type: manhattan_ap
value: 78.77876711171923
- type: manhattan_f1
value: 71.07869075515912
- type: manhattan_precision
value: 69.80697847067557
- type: manhattan_recall
value: 72.39759778256852
- type: max_accuracy
value: 86.90573213800597
- type: max_ap
value: 81.05760818661524
- type: max_f1
value: 73.64688856729379
---
# SGPT-125M-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
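For completeness, the snippet below is a minimal encoding sketch using the `sentence-transformers` library, which matches the architecture listed under *Full Model Architecture* below. The hub id is assumed from the card title; note that SGPT `specb` checkpoints additionally wrap queries and documents in special bracket tokens, which the reference codebase applies for you, so refer to the repository above for canonical usage.
```
from sentence_transformers import SentenceTransformer

# Assumption: the checkpoint is published under this id (taken from the card title).
model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit")

# Encode a query and a candidate passage into 768-dimensional embeddings;
# weighted-mean pooling is applied automatically by the pooling module.
embeddings = model.encode([
    "What is the boiling point of water?",
    "Water boils at 100 degrees Celsius at sea level.",
])
print(embeddings.shape)  # (2, 768)
```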
## Evaluation Results
For evaluation results, refer to the eval folder of the codebase or to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
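Put together, the configuration above corresponds roughly to the `fit()` call sketched below. This is an illustrative reconstruction only: the actual MS MARCO training data pipeline is omitted, `train_examples` is a placeholder, and `torch.optim.AdamW` stands in for the (since deprecated) `transformers.optimization.AdamW` recorded above.
```
import torch
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Stand-in starting checkpoint; the actual run started from the corresponding
# pre-trained SGPT/GPT-Neo weights with BitFit (bias-only) fine-tuning enabled.
model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit")

# Placeholder data; the real run used MS MARCO (query, relevant passage) pairs.
train_examples = [InputExample(texts=["example query", "example relevant passage"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim by default

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=1000,
    scheduler="WarmupLinear",
    optimizer_class=torch.optim.AdamW,
    optimizer_params={"lr": 2e-4},
    weight_decay=0.01,
    max_grad_norm=1,
)
```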
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
pucpr/biobertpt-bio | pucpr | fill-mask | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05 | 2022-11-27T16:54:50 | 76 | 6 | ---
language: pt
widget:
- text: O principal [MASK] da COVID-19 é tosse seca.
- text: O vírus da gripe apresenta um [MASK] constituído por segmentos de ácido ribonucleico.
thumbnail: https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# BioBERTpt - Portuguese Clinical and Biomedical BERT
The [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) paper introduces clinical and biomedical BERT-based models for the Portuguese language, initialized from BERT-Multilingual-Cased and trained on clinical notes and biomedical literature.
This model card describes BioBERTpt(bio), the biomedical version of BioBERTpt, trained on Portuguese biomedical literature from scientific papers in PubMed and SciELO.
## How to use the model
Load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("pucpr/biobertpt-bio")
model = AutoModel.from_pretrained("pucpr/biobertpt-bio")
```
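Since this is a masked language model, it can also be queried through the `fill-mask` pipeline; the sentence below is the example used in this card's widget.
```
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pucpr/biobertpt-bio")

# Print the top predicted tokens for the masked position with their scores.
for prediction in fill_mask("O principal [MASK] da COVID-19 é tosse seca."):
    print(prediction["token_str"], round(prediction["score"], 3))
```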
## More Information
Refer to the original paper, [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) for additional details and performance on Portuguese NER tasks.
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt). | [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] |
medspaner/roberta-es-clinical-trials-attributes-ner | medspaner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-15T08:20:34 | 2024-10-01T06:42:40 | 76 | 1 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: 'Criterios de exclusión: antecedentes de infarto, mujer sin métodos anticonceptivos
adecuados; cirugía programada; padre o madre con cardiopatía.'
model-index:
- name: roberta-es-clinical-trials-attributes-ner
results: []
---
# roberta-es-clinical-trials-attributes-ner
This medical named entity recognition model detects event temporality and experiencer attributes:
- Future: e.g. *cirugía pendiente*, 'pending surgery'.
- History\_of: e.g. *antecedentes de ataque al corazón*, 'history of heart attack'.
- Family\_member: e.g. *hermano*, 'brother'.
- Patient: e.g. *paciente pediátrico*, 'pediatric patient'.
- Other: e.g. *enfermero*, 'nurse'.
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.877 (±0.009)
- Recall: 0.835 (±0.008)
- F1: 0.856 (±0.006)
- Accuracy: 0.989 (±0.001)
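A quick way to try the model is through the `token-classification` pipeline; the sentence below is the example from this card's widget, and `aggregation_strategy="simple"` merges sub-word tokens into entity spans.
```
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="medspaner/roberta-es-clinical-trials-attributes-ner",
    aggregation_strategy="simple",
)

text = (
    "Criterios de exclusión: antecedentes de infarto, mujer sin métodos "
    "anticonceptivos adecuados; cirugía programada; padre o madre con cardiopatía."
)
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```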
## Model description
This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/).
It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials.
The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
If you use this model, please cite it as follows:
```
@article{campillosetal2024,
title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
year={2024},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning come from the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1200 texts about clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please cite it as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: average 10.8 epochs (±4.09); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5)
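For reference, these settings map onto a `transformers` training setup along the following lines. This is an illustrative sketch only, using the parameter names of the Transformers 4.17 API listed below (in recent versions `evaluation_strategy` is called `eval_strategy`); `output_dir`, the epoch cap, and the seed are placeholders, and dataset preparation and the `Trainer` itself are omitted.
```
from transformers import TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="roberta-es-clinical-trials-attributes-ner",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=20,              # upper bound; early stopping decides in practice
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,      # required for early stopping
    metric_for_best_model="f1",
    seed=42,                          # one of several seeds across the 5 rounds
)

# Stop training if the F1 score does not improve for 5 consecutive epochs.
early_stopping = EarlyStoppingCallback(early_stopping_patience=5)
```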
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.877 (±0.003) | 0.835 (±0.008) | 0.856 (±0.006) | 0.989 (±0.001) |
**Results per class (test set; average and standard deviation of 5 rounds with different seeds)**
| Class | Precision | Recall | F1 | Support |
|:--------------:|:--------------:|:--------------:|:--------------:|:---------:|
| Future | 0.640 (±0.040) | 0.620 (±0.059) | 0.629 (±0.045) | 70 |
| History\_of | 0.742 (±0.021) | 0.667 (±0.016) | 0.703 (±0.010) | 647 |
| Patient | 0.949 (±0.003) | 0.921 (±0.005) | 0.935 (±0.003) | 1462 |
| Family\_member | 0.721 (±0.048) | 0.920 (±0.027) | 0.808 (±0.034) | 20 |
| Other | 0.852 (±0.019) | 0.805 (±0.015) | 0.828 (±0.011) | 120 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] |
disi-unibo-nlp/MedGENIE-fid-flan-t5-base-medqa | disi-unibo-nlp | question-answering | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"medical",
"question-answering",
"fusion-in-decoder",
"en",
"dataset:disi-unibo-nlp/medqa-5-opt-MedGENIE",
"arxiv:2403.01924",
"arxiv:2207.08143",
"arxiv:2309.02233",
"arxiv:2311.16079",
"arxiv:2402.10373",
"arxiv:2308.09442",
"arxiv:2212.13138",
"arxiv:2210.09338",
"arxiv:2210.06345",
"arxiv:2201.08860",
"arxiv:2104.06378",
"arxiv:1901.08746",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-02-16T19:50:05 | 2024-05-17T07:30:20 | 75 | 0 | ---
datasets:
- disi-unibo-nlp/medqa-5-opt-MedGENIE
language:
- en
license: mit
metrics:
- accuracy
pipeline_tag: question-answering
tags:
- medical
- question-answering
- fusion-in-decoder
widget:
- text: A junior orthopaedic surgery resident is completing a carpal tunnel repair
with the department chairman as the attending physician. During the case, the
resident inadvertently cuts a flexor tendon. The tendon is repaired without complication.
The attending tells the resident that the patient will do fine, and there is no
need to report this minor complication that will not harm the patient, as he does
not want to make the patient worry unnecessarily. He tells the resident to leave
this complication out of the operative report. Which of the following is the correct
next action for the resident to take? A. Disclose the error to the patient and
put it in the operative report B. Tell the attending that he cannot fail to disclose
this mistake C. Report the physician to the ethics committee D. Refuse to dictate
the operative reporty.
context: Inadvertent Cutting of Tendon is a complication, it should be in the Operative
Reports The resident must put this complication in the operative report and disscuss
it with the patient. If there was no harm to the patent and correction was done
then theres nothing major for worry. But disclosing this as per ethical guidelines,
is mandatory
example_title: Example 1
---
# Model Card for MedGENIE-fid-flan-t5-base-medqa
MedGENIE comprises a collection of language models designed to utilize generated contexts, rather than retrieved ones, for addressing multiple-choice open-domain questions in the medical field. Specifically, **MedGENIE-fid-flan-t5-base-medqa** is a *fusion-in-decoder* (FID) model based on [flan-t5-base](https://huggingface.co/google/flan-t5-base), trained on the [MedQA-USMLE](https://huggingface.co/datasets/disi-unibo-nlp/medqa-5-opt-MedGENIE) dataset and grounded on artificial contexts generated by [PMC-LLaMA-13B](https://huggingface.co/axiong/PMC_LLaMA_13B). This model achieves new *state-of-the-art* (SOTA) performance on the corresponding test set.
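As an illustration, the sketch below (not part of the original card) shows how a question and several generated contexts are typically packed for a fusion-in-decoder reader; the exact passage template used by MedGENIE is an assumption here, and the actual reader implementation lives in the linked repository. The `n_context` and `text_maxlength` values in the training hyperparameters further down control how many such passages are fused and how long each may be.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

question = "Which vitamin deficiency causes scurvy? A. Vitamin A B. Vitamin B12 C. Vitamin C D. Vitamin D"
contexts = [  # artificial contexts, e.g. produced by PMC-LLaMA-13B
    "Scurvy is caused by a prolonged deficiency of vitamin C (ascorbic acid).",
    "Vitamin C is required for collagen synthesis; its lack leads to scurvy.",
]

# Each (question, context) pair is encoded independently; the FID reader then
# concatenates the encoder outputs and decodes a single answer over all passages.
passages = [f"question: {question} context: {c}" for c in contexts]
enc = tokenizer(passages, padding=True, truncation=True, max_length=1024, return_tensors="pt")
print(enc["input_ids"].shape)  # (n_contexts, sequence_length)
```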
## Model description
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** [google/flan-t5-base](https://huggingface.co/google/flan-t5-base)
- **Repository:** https://github.com/disi-unibo-nlp/medgenie
- **Paper:** [To Generate or to Retrieve? On the Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering](https://arxiv.org/abs/2403.01924)
## Performance
At the time of release (February 2024), **MedGENIE-fid-flan-t5-base-medqa** is a new lightweight SOTA model on the MedQA-USMLE benchmark:
| Model | Ground (Source) | Learning | Params | Accuracy (↓) |
|----------------------------------|--------------------|---------------------------|-----------------|-------------------------------|
| **MedGENIE-FID-Flan-T5** | **G (PMC-LLaMA)** | **Fine-tuned** | **250M** | **53.1** |
| Codex <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 175B | 52.5 |
| Codex <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | R (Wikipedia) | 0-shot | 175B | 52.5 |
| GPT-3.5-Turbo <small>([Yang et al.](https://arxiv.org/abs/2309.02233))</small> | R (Wikipedia) | k-shot | -- | 52.3 |
| MEDITRON <small>([Chen et al.](https://arxiv.org/abs/2311.16079))</small> | ∅ | Fine-tuned | 7B | 52.0 |
| BioMistral DARE <small> ([Labrak et al.](https://arxiv.org/abs/2402.10373)) </small> | ∅ | Fine-tuned | 7B | 51.1 |
| BioMistral <small> ([Labrak et al.](https://arxiv.org/abs/2402.10373)) </small> | ∅ | Fine-tuned | 7B | 50.6 |
| Zephyr-β | R (MedWiki) | 2-shot | 7B | 50.4 |
| BioMedGPT <small>([Luo et al.](https://arxiv.org/abs/2308.09442v2))</small> | ∅ | k-shot | 10B | 50.4 |
| BioMedLM <small>([Singhal et al.](https://arxiv.org/abs/2212.13138))</small> | ∅ | Fine-tuned | 2.7B | 50.3 |
| PMC-LLaMA <small>(awq 4 bit)</small> | ∅ | Fine-tuned | 13B | 50.2 |
| LLaMA-2 <small>([Chen et al.](https://arxiv.org/abs/2311.16079))</small> | ∅ | Fine-tuned | 7B | 49.6 |
| Zephyr-β | ∅ | 2-shot | 7B | 49.6 |
| Zephyr-β <small>([Chen et al.](https://arxiv.org/abs/2311.16079))</small> | ∅ | 3-shot | 7B | 49.2 |
| PMC-LLaMA <small>([Chen et al.](https://arxiv.org/abs/2311.16079))</small> | ∅ | Fine-tuned | 7B | 49.2 |
| DRAGON <small>([Yasunaga et al.](https://arxiv.org/abs/2210.09338))</small> | R (UMLS) | Fine-tuned | 360M | 47.5 |
| InstructGPT <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | R (Wikipedia) | 0-shot | 175B | 47.3 |
| BioMistral DARE <small> ([Labrak et al.](https://arxiv.org/abs/2402.10373)) </small> | ∅ | 3-shot | 7B | 47.0 |
| Flan-PaLM <small>([Singhal et al.](https://arxiv.org/abs/2212.13138))</small> | ∅ | 5-shot | 62B | 46.1 |
| InstructGPT <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 175B | 46.0 |
| VOD <small>([Liévin et al. 2023](https://arxiv.org/abs/2210.06345))</small> | R (MedWiki) | Fine-tuned | 220M | 45.8 |
| Vicuna 1.3 <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 33B | 45.2 |
| BioLinkBERT <small>([Singhal et al.](https://arxiv.org/abs/2212.13138))</small> | ∅ | Fine-tuned | 340M | 45.1 |
| Mistral-Instruct | R (MedWiki) | 2-shot | 7B | 45.1 |
| BioMistral <small> ([Labrak et al.](https://arxiv.org/abs/2402.10373)) </small> | ∅ | 3-shot | 7B | 44.4 |
| Galactica | ∅ | 0-shot | 120B | 44.4 |
| LLaMA-2 <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 70B | 43.4 |
| BioReader <small>([Frisoni et al.](https://aclanthology.org/2022.emnlp-main.390/))</small> | R (PubMed-RCT) | Fine-tuned | 230M | 43.0 |
| Guanaco <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 33B | 42.9 |
| LLaMA-2-chat <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 70B | 42.3 |
| Vicuna 1.5 <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 65B | 41.6 |
| Mistral-Instruct <small>([Chen et al.](https://arxiv.org/abs/2311.16079))</small> | ∅ | 3-shot | 7B | 41.1 |
| PaLM <small>([Singhal et al.](https://arxiv.org/abs/2212.13138))</small> | ∅ | 5-shot | 62B | 40.9 |
| Guanaco <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 65B | 40.8 |
| Falcon-Instruct <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 40B | 39.0 |
| Vicuna 1.3 <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 13B | 38.7 |
| GreaseLM <small>([Zhang et al.](https://arxiv.org/abs/2201.08860))</small> | R (UMLS) | Fine-tuned | 359M | 38.5 |
| PubMedBERT <small>([Singhal et al.](https://arxiv.org/abs/2212.13138))</small> | ∅ | Fine-tuned | 110M | 38.1 |
| QA-GNN <small>([Yasunaga et al.](https://arxiv.org/abs/2104.06378))</small> | R (UMLS) | Fine-tuned | 360M | 38.0 |
| LLaMA-2 <small>([Yang et al.](https://arxiv.org/abs/2309.02233))</small> | R (Wikipedia) | k-shot | 13B | 37.6 |
| LLaMA-2-chat | R (MedWiki) | 2-shot | 7B | 37.2 |
| LLaMA-2-chat | ∅ | 2-shot | 7B | 37.2 |
| BioBERT <small>([Lee et al.](https://arxiv.org/abs/1901.08746))</small> | ∅ | Fine-tuned | 110M | 36.7 |
| MTP-Instruct <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 30B | 35.1 |
| GPT-Neo <small>([Singhal et al.](https://arxiv.org/abs/2212.13138))</small> | ∅ | Fine-tuned | 2.5B | 33.3 |
| LLaMa-2-chat <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 13B | 32.2 |
| LLaMa-2 <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 13B | 31.1 |
| GPT-NeoX <small>([Liévin et al.](https://arxiv.org/abs/2207.08143))</small> | ∅ | 0-shot | 20B | 26.9 |
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- n_context: 5
- per_gpu_batch_size: 1
- accumulation_steps: 4
- total_steps: 40,712
- eval_freq: 10,178
- optimizer: AdamW
- scheduler: linear
- weight_decay: 0.01
- warmup_ratio: 0.1
- text_maxlength: 1024
### Bias, Risks and Limitations
Our model is trained on artificially generated contextual documents, which might inadvertently magnify inherent biases and depart from clinical and societal norms. This could lead to the spread of convincing medical misinformation. To mitigate this risk, we recommend a cautious approach: domain experts should manually review any output before real-world use. This ethical safeguard is crucial to prevent the dissemination of potentially erroneous or misleading information, particularly within clinical and scientific circles.
## Citation
If you find MedGENIE-fid-flan-t5-base-medqa useful in your work, please cite it with:
```
@misc{frisoni2024generate,
title={To Generate or to Retrieve? On the Effectiveness of Artificial Contexts for Medical Open-Domain Question Answering},
author={Giacomo Frisoni and Alessio Cocchieri and Alex Presepi and Gianluca Moro and Zaiqiao Meng},
year={2024},
eprint={2403.01924},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"QUESTION_ANSWERING"
] | [
"MEDQA"
] |
afrideva/tau-0.5B-GGUF | afrideva | text-generation | [
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"en",
"zh",
"dataset:Locutusque/UltraTextbooks-2.0",
"base_model:M4-ai/tau-0.5B",
"base_model:quantized:M4-ai/tau-0.5B",
"license:other",
"region:us",
"conversational"
] | 2024-03-25T18:48:47 | 2024-03-25T18:52:05 | 75 | 0 | ---
base_model: M4-ai/tau-0.5B
datasets:
- Locutusque/UltraTextbooks-2.0
language:
- en
- zh
license: other
model_name: tau-0.5B
pipeline_tag: text-generation
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
inference: false
model_creator: M4-ai
quantized_by: afrideva
---
# M4-ai/tau-0.5B-GGUF
Quantized GGUF model files for [tau-0.5B](https://huggingface.co/M4-ai/tau-0.5B) from [M4-ai](https://huggingface.co/M4-ai); a short usage sketch follows the file table.
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tau-0.5b.fp16.gguf](https://huggingface.co/afrideva/tau-0.5B-GGUF/resolve/main/tau-0.5b.fp16.gguf) | fp16 | 1.25 GB |
| [tau-0.5b.q2_k.gguf](https://huggingface.co/afrideva/tau-0.5B-GGUF/resolve/main/tau-0.5b.q2_k.gguf) | q2_k | 298.41 MB |
| [tau-0.5b.q3_k_m.gguf](https://huggingface.co/afrideva/tau-0.5B-GGUF/resolve/main/tau-0.5b.q3_k_m.gguf) | q3_k_m | 349.88 MB |
| [tau-0.5b.q4_k_m.gguf](https://huggingface.co/afrideva/tau-0.5B-GGUF/resolve/main/tau-0.5b.q4_k_m.gguf) | q4_k_m | 407.16 MB |
| [tau-0.5b.q5_k_m.gguf](https://huggingface.co/afrideva/tau-0.5B-GGUF/resolve/main/tau-0.5b.q5_k_m.gguf) | q5_k_m | 459.24 MB |
| [tau-0.5b.q6_k.gguf](https://huggingface.co/afrideva/tau-0.5B-GGUF/resolve/main/tau-0.5b.q6_k.gguf) | q6_k | 514.58 MB |
| [tau-0.5b.q8_0.gguf](https://huggingface.co/afrideva/tau-0.5B-GGUF/resolve/main/tau-0.5b.q8_0.gguf) | q8_0 | 664.60 MB |
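As a quick start, the sketch below (not part of the original card) shows one way to download and run one of these files with `llama-cpp-python`; the chosen quant level, context size, and prompt are illustrative.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quantized files listed above and load it with llama.cpp bindings
gguf_path = hf_hub_download("afrideva/tau-0.5B-GGUF", "tau-0.5b.q4_k_m.gguf")
llm = Llama(model_path=gguf_path, n_ctx=2048)

out = llm("Explain what a derivative is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```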
## Original Model Card:
# tau-0.5B
## Model Details
- **Model Name:** tau-0.5B
- **Base Model:** Qwen1.5-0.5B
- **Dataset:** UltraTextbooks-2.0
- **Model Size:** 0.5B parameters
- **Model Type:** Language Model
- **Training Procedure:** Further pre-training of Qwen1.5-0.5B on UltraTextbooks-2.0.
## Model Use
tau-0.5B is designed to be a general-purpose language model with enhanced capabilities in the domains of machine learning, mathematics, and coding. It can be used for a wide range of natural language processing tasks, such as:
- Educational question answering
- Text summarization
- Content generation for educational purposes
- Code understanding and generation
- Mathematical problem solving
The model's exposure to the diverse content in the UltraTextbooks-2.0 dataset makes it particularly well-suited for applications in educational technology and research.
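As an illustration, the minimal sketch below (an assumption, not part of the original card) loads the full-precision checkpoint with the standard `transformers` causal-LM API; the prompt and generation settings are placeholders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/tau-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain gradient descent in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```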
## Training Data
tau-0.5B was further pre-trained on the UltraTextbooks-2.0 dataset, which is an expanded version of the original UltraTextbooks dataset. UltraTextbooks-2.0 incorporates additional high-quality synthetic and human-written textbooks from various sources on the Hugging Face platform, with a focus on increasing the diversity of content in the domains of machine learning, mathematics, and coding.
For more details on the dataset, please refer to the [UltraTextbooks-2.0 Dataset Card](https://huggingface.co/datasets/Locutusque/UltraTextbooks-2.0).
## Performance and Limitations
Refer to the [Evaluation](#evaluation) section below for benchmark results. It is essential to note that the model may still exhibit biases or inaccuracies present in the training data. Users are encouraged to critically evaluate the model's outputs and report any issues to facilitate continuous improvement.
## Environmental Impact
The training of tau-0.5B required computational resources that contribute to the model's overall environmental impact. However, efforts were made to optimize the training process and minimize the carbon footprint.
## Ethical Considerations
tau-0.5B was trained on a diverse dataset that may contain biases and inaccuracies. Users should be aware of these potential limitations and use the model responsibly. The model should not be used for tasks that could cause harm or discriminate against individuals or groups.
## Evaluation
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|agieval_nous |N/A |none | 0|acc |0.2235|± |0.0434|
| | |none | 0|acc_norm|0.2141|± |0.0498|
| - agieval_aqua_rat | 1|none | 0|acc |0.1417|± |0.0219|
| | |none | 0|acc_norm|0.1535|± |0.0227|
| - agieval_logiqa_en | 1|none | 0|acc |0.2796|± |0.0176|
| | |none | 0|acc_norm|0.3118|± |0.0182|
| - agieval_lsat_ar | 1|none | 0|acc |0.2000|± |0.0264|
| | |none | 0|acc_norm|0.1696|± |0.0248|
| - agieval_lsat_lr | 1|none | 0|acc |0.2275|± |0.0186|
| | |none | 0|acc_norm|0.2020|± |0.0178|
| - agieval_lsat_rc | 1|none | 0|acc |0.1487|± |0.0217|
| | |none | 0|acc_norm|0.1561|± |0.0222|
| - agieval_sat_en | 1|none | 0|acc |0.2330|± |0.0295|
| | |none | 0|acc_norm|0.2039|± |0.0281|
| - agieval_sat_en_without_passage| 1|none | 0|acc |0.2524|± |0.0303|
| | |none | 0|acc_norm|0.1942|± |0.0276|
| - agieval_sat_math | 1|none | 0|acc |0.2227|± |0.0281|
| | |none | 0|acc_norm|0.1682|± |0.0253|
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|---------------------------------------|-------|----------------|-----:|-----------|-----:|---|-----:|
|truthfulqa | 2|none | 0|acc |0.3931|± |0.0143|
|mmlu |N/A |none | 0|acc |0.3642|± |0.0040|
| - humanities |N/A |none | 5|acc |0.3320|± |0.0068|
| - formal_logic | 0|none | 5|acc |0.2619|± |0.0393|
| - high_school_european_history | 0|none | 5|acc |0.4909|± |0.0390|
| - high_school_us_history | 0|none | 5|acc |0.4167|± |0.0346|
| - high_school_world_history | 0|none | 5|acc |0.4641|± |0.0325|
| - international_law | 0|none | 5|acc |0.5537|± |0.0454|
| - jurisprudence | 0|none | 5|acc |0.4167|± |0.0477|
| - logical_fallacies | 0|none | 5|acc |0.2638|± |0.0346|
| - moral_disputes | 0|none | 5|acc |0.3757|± |0.0261|
| - moral_scenarios | 0|none | 5|acc |0.2402|± |0.0143|
| - philosophy | 0|none | 5|acc |0.3794|± |0.0276|
| - prehistory | 0|none | 5|acc |0.3426|± |0.0264|
| - professional_law | 0|none | 5|acc |0.3103|± |0.0118|
| - world_religions | 0|none | 5|acc |0.2807|± |0.0345|
| - other |N/A |none | 5|acc |0.4071|± |0.0088|
| - business_ethics | 0|none | 5|acc |0.4200|± |0.0496|
| - clinical_knowledge | 0|none | 5|acc |0.4491|± |0.0306|
| - college_medicine | 0|none | 5|acc |0.3873|± |0.0371|
| - global_facts | 0|none | 5|acc |0.3600|± |0.0482|
| - human_aging | 0|none | 5|acc |0.3498|± |0.0320|
| - management | 0|none | 5|acc |0.4854|± |0.0495|
| - marketing | 0|none | 5|acc |0.5470|± |0.0326|
| - medical_genetics | 0|none | 5|acc |0.4000|± |0.0492|
| - miscellaneous | 0|none | 5|acc |0.4291|± |0.0177|
| - nutrition | 0|none | 5|acc |0.4183|± |0.0282|
| - professional_accounting | 0|none | 5|acc |0.3582|± |0.0286|
| - professional_medicine | 0|none | 5|acc |0.3015|± |0.0279|
| - virology | 0|none | 5|acc |0.3494|± |0.0371|
| - social_sciences |N/A |none | 5|acc |0.4075|± |0.0088|
| - econometrics | 0|none | 5|acc |0.2719|± |0.0419|
| - high_school_geography | 0|none | 5|acc |0.5000|± |0.0356|
| - high_school_government_and_politics| 0|none | 5|acc |0.4611|± |0.0360|
| - high_school_macroeconomics | 0|none | 5|acc |0.4051|± |0.0249|
| - high_school_microeconomics | 0|none | 5|acc |0.3908|± |0.0317|
| - high_school_psychology | 0|none | 5|acc |0.4239|± |0.0212|
| - human_sexuality | 0|none | 5|acc |0.3893|± |0.0428|
| - professional_psychology | 0|none | 5|acc |0.3399|± |0.0192|
| - public_relations | 0|none | 5|acc |0.4455|± |0.0476|
| - security_studies | 0|none | 5|acc |0.3510|± |0.0306|
| - sociology | 0|none | 5|acc |0.5174|± |0.0353|
| - us_foreign_policy | 0|none | 5|acc |0.5500|± |0.0500|
| - stem |N/A |none | 5|acc |0.3276|± |0.0083|
| - abstract_algebra | 0|none | 5|acc |0.3000|± |0.0461|
| - anatomy | 0|none | 5|acc |0.2889|± |0.0392|
| - astronomy | 0|none | 5|acc |0.3487|± |0.0388|
| - college_biology | 0|none | 5|acc |0.3403|± |0.0396|
| - college_chemistry | 0|none | 5|acc |0.2600|± |0.0441|
| - college_computer_science | 0|none | 5|acc |0.3800|± |0.0488|
| - college_mathematics | 0|none | 5|acc |0.3300|± |0.0473|
| - college_physics | 0|none | 5|acc |0.2745|± |0.0444|
| - computer_security | 0|none | 5|acc |0.4300|± |0.0498|
| - conceptual_physics | 0|none | 5|acc |0.3447|± |0.0311|
| - electrical_engineering | 0|none | 5|acc |0.3931|± |0.0407|
| - elementary_mathematics | 0|none | 5|acc |0.3095|± |0.0238|
| - high_school_biology | 0|none | 5|acc |0.4161|± |0.0280|
| - high_school_chemistry | 0|none | 5|acc |0.2759|± |0.0314|
| - high_school_computer_science | 0|none | 5|acc |0.3100|± |0.0465|
| - high_school_mathematics | 0|none | 5|acc |0.3185|± |0.0284|
| - high_school_physics | 0|none | 5|acc |0.2517|± |0.0354|
| - high_school_statistics | 0|none | 5|acc |0.3009|± |0.0313|
| - machine_learning | 0|none | 5|acc |0.3036|± |0.0436|
|medqa_4options |Yaml |none | 5|acc |0.2687|± |0.0124|
| | |none | 5|acc_norm |0.2687|± |0.0124|
|logieval | 0|get-answer | 5|exact_match|0.3505|± |0.0120|
|gsm8k_cot | 3|strict-match | 8|exact_match|0.0690|± |0.0070|
| | |flexible-extract| 8|exact_match|0.1365|± |0.0095|
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|arc_easy | 1|none | 25|acc |0.5981|± |0.0101|
| | |none | 25|acc_norm|0.5939|± |0.0101|
|arc_challenge| 1|none | 25|acc |0.2688|± |0.0130|
| | |none | 25|acc_norm|0.2969|± |0.0134|
## Usage Rights
Make sure to read Qwen's license before using this model. | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"MEDQA"
] |
ikim-uk-essen/geberta-base | ikim-uk-essen | fill-mask | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"fill-mask",
"arxiv:2310.07321",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-20T12:32:09 | 2025-01-29T16:25:47 | 74 | 5 | ---
license: mit
---
# GeBERTa
<!-- Provide a quick summary of what the model is/does. -->
GeBERTa is a set of German DeBERTa models developed in a joint effort between the University of Florida, NVIDIA, and IKIM.
The models range in size from 122M to 750M parameters.
## Model details
The models follow the DeBERTa-v2 architecture and use SentencePiece tokenizers. The base and large models use a 50k token vocabulary,
while the largest model uses a 128k token vocabulary. All models were trained with a batch size of 2k for a maximum of 1 million steps
and have a maximum sequence length of 512 tokens.
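For illustration, a minimal fill-mask sketch (not part of the original card); the example sentence is arbitrary.
```python
from transformers import pipeline

# GeBERTa is a standard DeBERTa-v2 masked-language model, so the fill-mask pipeline applies
fill_mask = pipeline("fill-mask", model="ikim-uk-essen/geberta-base")
for pred in fill_mask("Der Patient wurde mit Verdacht auf eine [MASK] ins Krankenhaus eingeliefert."):
    print(pred["token_str"], round(pred["score"], 3))
```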
## Dataset
The pre-training dataset consists of documents from different domains:
| Domain | Dataset | Data Size | #Docs | #Tokens |
| -------- | ----------- | --------- | ------ | ------- |
| Formal | Wikipedia | 9GB | 2,665,357 | 1.9B |
| Formal | News | 28GB | 12,305,326 | 6.1B |
| Formal | GC4 | 90GB | 31,669,772 | 19.4B |
| Informal | Reddit 2019-2023 (GER) | 5.8GB | 15,036,592 | 1.3B |
| Informal | Holiday Reviews | 2GB | 4,876,405 | 428M |
| Legal | OpenLegalData: German cases and laws | 5.4GB | 308,228 | 1B |
| Medical | Smaller public datasets | 253MB | 179,776 | 50M |
| Medical | CC medical texts | 3.6GB | 2,000,000 | 682M |
| Medical | Medicine Dissertations | 1.4GB | 14,496 | 295M |
| Medical | Pubmed abstracts (translated) | 8.5GB | 21,044,382 | 1.7B |
| Medical | MIMIC III (translated) | 2.6GB | 24,221,834 | 695M |
| Medical | PMC-Patients-ReCDS (translated) | 2.1GB | 1,743,344 | 414M |
| Literature | German Fiction | 1.1GB | 3,219 | 243M |
| Literature | English books (translated) | 7.1GB | 11,038 | 1.6B |
| - | Total | 167GB | 116,079,769 | 35.8B |
## Benchmark
In a comprehensive benchmark, we evaluated existing German models and our own. The benchmark included a variety of task types, such as question answering,
classification, and named entity recognition (NER). In addition, we introduced a new task focused on hate speech detection using two existing datasets.
When the datasets provided training, development, and test sets, we used them accordingly.
We randomly split the data into 80% for training, 10% for validation, and 10% for test in cases where such sets were not available.
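Such an 80/10/10 split can be reproduced, for example, with the `datasets` library (a sketch with dummy data, not the authors' exact procedure):
```python
from datasets import Dataset

# Dummy data standing in for a benchmark corpus without predefined splits
ds = Dataset.from_dict({"text": [f"Beispiel {i}" for i in range(100)], "label": [0, 1] * 50})
tmp = ds.train_test_split(test_size=0.2, seed=42)                    # 80% train / 20% held out
dev_test = tmp["test"].train_test_split(test_size=0.5, seed=42)      # 10% validation / 10% test
splits = {"train": tmp["train"], "validation": dev_test["train"], "test": dev_test["test"]}
print({name: len(split) for name, split in splits.items()})
```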
The following table presents the F1 scores:
| Model | [GE14](https://huggingface.co/datasets/germeval_14) | [GQuAD](https://huggingface.co/datasets/deepset/germanquad) | [GE18](https://huggingface.co/datasets/philschmid/germeval18) | TS | [GGP](https://github.com/JULIELab/GGPOnc) | GRAS<sup>1</sup> | [JS](https://github.com/JULIELab/jsyncc) | [DROC](https://gitlab2.informatik.uni-wuerzburg.de/kallimachos/DROC-Release) | Avg |
|:---------------------:|:--------:|:----------:|:--------:|:--------:|:-------:|:------:|:--------:|:------:|:------:|
| [GBERT](https://huggingface.co/deepset/gbert-base)<sub>base</sub> | 87.10±0.12 | 72.19±0.82 | 51.27±1.4 | 72.34±0.48 | 78.17±0.25 | 62.90±0.01 | 77.18±3.34 | 88.03±0.20 | 73.65±0.50 |
| [GELECTRA](https://huggingface.co/deepset/gelectra-base)<sub>base</sub> | 86.19±0.5 | 74.09±0.70 | 48.02±1.80 | 70.62±0.44 | 77.53±0.11 | 65.97±0.01 | 71.17±2.94 | 88.06±0.37 | 72.71±0.66 |
| [GottBERT](https://huggingface.co/uklfr/gottbert-base) | 87.15±0.19 | 72.76±0.378 | 51.12±1.20 | 74.25±0.80 | **78.18**±0.11 | 65.71±0.01 | 74.60±4.75 | 88.61±0.23 | 74.05±0.51 |
| GeBERTa<sub>base</sub> | **88.06**±0.22 | **78.54**±0.32 | **53.16**±1.39 | **74.83**±0.36 | 78.13±0.15 | **68.37**±1.11 | **81.85**±5.23 | **89.14**±0.32 | **76.51**±0.32 |
## Publication
```bibtex
@inproceedings{dada2023impact,
title={On the Impact of Cross-Domain Data on German Language Models},
author={Dada, Amin and Chen, Aokun and Peng, Cheng and Smith, Kaleb E and Idrissi-Yaghir, Ahmad and Seibold, Constantin Marc and Li, Jianning and Heiliger, Lars and Friedrich, Christoph M and Truhn, Daniel and others},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
year={2023}
}
```
The paper is available on arXiv: https://arxiv.org/abs/2310.07321
## Contact
<[email protected]> | [
"NAMED_ENTITY_RECOGNITION",
"QUESTION_ANSWERING"
] | [
"PMC-PATIENTS"
] |
predibase/bc5cdr | predibase | text-generation | [
"peft",
"safetensors",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | 2024-02-20T02:59:29 | 2024-02-21T19:13:58 | 74 | 1 | ---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
pipeline_tag: text-generation
---
Description: 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions.\
Original dataset: https://huggingface.co/datasets/tner/bc5cdr \
---\
Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \
The adapter_category is Named Entity Recognition and the name is Chemical and Disease Recognition (bc5cdr)\
---\
Sample input: Your task is a Named Entity Recognition (NER) task. Predict the category of each entity, then place the entity into the list associated with the category in an output JSON payload. Below is an example:
Input: "Naloxone reverses the antihypertensive effect of clonidine ."
Output: {'B-Chemical': ['Naloxone', 'clonidine'], 'B-Disease': [], 'I-Disease': [], 'I-Chemical': []}
Now, complete the task.
Input: "A standardized loading dose of VPA was administered , and venous blood was sampled at 0 , 1 , 2 , 3 , and 4 hours ."
Output: \
---\
Sample output: {'B-Chemical': ['VPA'], 'B-Disease': [], 'I-Disease': [], 'I-Chemical': []}\
---\
Try using this adapter yourself!
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/bc5cdr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)  # attach the LoRA adapter to the base model
inputs = tokenizer("...", return_tensors="pt")  # "..." = the sample NER prompt shown above
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` | [
"NAMED_ENTITY_RECOGNITION"
] | [
"BC5CDR"
] |
leeloolee/intention | leeloolee | sentence-similarity | [
"sentence-transformers",
"safetensors",
"new",
"text-classification",
"mteb",
"transformers",
"multilingual",
"sentence-similarity",
"custom_code",
"af",
"ar",
"az",
"be",
"bg",
"bn",
"ca",
"ceb",
"cs",
"cy",
"da",
"de",
"el",
"en",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"gl",
"gu",
"he",
"hi",
"hr",
"ht",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ky",
"lo",
"lt",
"lv",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"pa",
"pl",
"pt",
"qu",
"ro",
"ru",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"uk",
"ur",
"vi",
"yo",
"zh",
"arxiv:2407.19669",
"arxiv:2210.09984",
"arxiv:2402.03216",
"arxiv:2007.15207",
"arxiv:2104.08663",
"arxiv:2402.07440",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-09-07T05:24:10 | 2024-09-07T05:39:09 | 74 | 3 | ---
language:
- af
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- ht
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ky
- lo
- lt
- lv
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- pa
- pl
- pt
- qu
- ro
- ru
- si
- sk
- sl
- so
- sq
- sr
- sv
- sw
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- yo
- zh
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- multilingual
- sentence-similarity
model-index:
- name: gte-multilingual-base (dense)
results:
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 33.66681726329994
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_spearman
value: 43.54760696384009
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_spearman
value: 48.91186363417501
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 41.689860834990064
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 54.20241337977897
- type: v_measure
value: 44.34083695608643
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 64.91495250072002
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: ndcg_at_10
value: 53.638
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.95522388059702
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 80.717625
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 43.64199999999999
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.108
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.169999999999995
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 39.56799999999999
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 35.75000000000001
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 33.342000000000006
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: ndcg_at_10
value: 58.231
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: ndcg_at_10
value: 53.166000000000004
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.01900557959478
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.06626465345723
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.87514497610431
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 81.21450112991194
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_spearman
value: 51.71589543397271
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: ndcg_at_10
value: 26.115
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: f1
value: 98.6169102296451
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: f1
value: 97.89603052314916
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: f1
value: 97.12388869645537
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: f1
value: 98.15692469720906
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.36038961038962
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.5903826674123
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.21474277151329
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 62.519999999999996
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_ap
value: 74.90132799162956
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_spearman
value: 90.30727955142524
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 37.94850105022274
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 38.11958675421534
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 86.10950950485399
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 87.28038294231966
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: ndcg_at_10
value: 47.099000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: ndcg_at_10
value: 45.973000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: ndcg_at_10
value: 55.606
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: ndcg_at_10
value: 36.638
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: ndcg_at_10
value: 30.711
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: ndcg_at_10
value: 44.523
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: ndcg_at_10
value: 37.940000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: ndcg_at_10
value: 38.12183333333333
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: ndcg_at_10
value: 32.684000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: ndcg_at_10
value: 26.735
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: ndcg_at_10
value: 36.933
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: ndcg_at_10
value: 33.747
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: ndcg_at_10
value: 28.872999999999998
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: ndcg_at_10
value: 34.833
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: ndcg_at_10
value: 43.78
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_ap
value: 84.00640599186677
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: ndcg_at_10
value: 80.60000000000001
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: ndcg_at_10
value: 40.116
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: ndcg_at_10
value: 32.498
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: ndcg_at_10
value: 87.547
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: ndcg_at_10
value: 64.85
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.949999999999996
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: ndcg_at_10
value: 92.111
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: ndcg_at_10
value: 28.962
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: ndcg_at_10
value: 45.005
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 25.133776435657595
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: ndcg_at_10
value: 63.036
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: ndcg_at_10
value: 56.904999999999994
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 44.59407464409388
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 74.912
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 79.26829268292683
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_spearman
value: 74.8601229809791
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 42.331902754246556
- type: v_measure
value: 40.92029335502153
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6
metrics:
- type: map
value: 32.19266316591337
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: ndcg_at_10
value: 79.346
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: ndcg_at_10
value: 39.922999999999995
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: ndcg_at_10
value: 55.620999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.53989968080255
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.26993519301212
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.87725150100067
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.48512370811149
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.45141627823591
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 83.45750452079565
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 72.57637938896488
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 63.50803043110736
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.6577718478986
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 64.05887879736925
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 65.27070634636071
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 63.04520795660037
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 80.66350710900474
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 44.016506455899425
- type: v_measure
value: 40.67730129573544
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.94552790854068
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 49.273705447209146
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.490921318090116
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.97511768661733
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.5689307330195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.34902488231337
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.6684599865501
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.54539340954942
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.08675184936112
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.12508406186953
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.41425689307331
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.59515803631474
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.90517821116342
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.91526563550774
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.198386012104905
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.04371217215869
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.31203765971756
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.521183591123055
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.06254203093476
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.01546738399461
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.27975790181574
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.79556153328849
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.18493611297915
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.888365837256224
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.79690652320108
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.225958305312716
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.58641560188299
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.08204438466711
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.54606590450572
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.443174176193665
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.65097511768661
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.45662407531944
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.739071956960316
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.36180228648286
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.3920645595158
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.06993947545395
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.123739071956955
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.46133154001346
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.54472091459314
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.204438466711494
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.69603227975792
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.684599865501
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.523873570948226
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.53396099529253
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.88298587760591
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.65097511768662
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.8453261600538
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.6247478143914
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.16274377942166
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.61667787491594
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.17283120376598
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.89912575655683
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.27975790181573
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.269670477471415
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.10423671822461
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.40753194351043
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 55.369872225958304
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.60726294552792
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.30262273032952
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.52925353059851
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.28446536650976
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.45460659045058
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.26563550773368
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.20578345662408
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.64963012777405
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.698049764626774
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.14458641560188
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.51445864156018
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.13786146603901
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.61533288500337
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.526563550773375
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.99731002017484
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.59381304640216
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.010759919300604
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 53.26160053799597
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.800941492938804
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.387357094821795
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.5359784801614
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.36919973100203
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.81506388702084
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.35104236718225
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.67787491593813
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.4250168123739
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.49630127774043
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.95696032279758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.11768661735036
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.86953597848016
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.51042367182247
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.65097511768661
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.81573638197713
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.26227303295225
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.51513113651646
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.29858776059179
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.72696704774714
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.57700067249496
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.22797579018157
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.97041022192333
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.72629455279085
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.16072629455278
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.92199058507062
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.40484196368527
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.61398789509079
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: ndcg_at_10
value: 61.934999999999995
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.052031054565205
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.969909524076794
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.7530992892652
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: ndcg_at_10
value: 34.705999999999996
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (ar)
type: Shitao/MLDR
config: ar
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 55.166000000000004
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (de)
type: Shitao/MLDR
config: de
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 55.155
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (en)
type: Shitao/MLDR
config: en
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 50.993
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (es)
type: Shitao/MLDR
config: es
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 81.228
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (fr)
type: Shitao/MLDR
config: fr
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 76.19
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (hi)
type: Shitao/MLDR
config: hi
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 45.206
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (it)
type: Shitao/MLDR
config: it
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 66.741
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (ja)
type: Shitao/MLDR
config: ja
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 52.111
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (ko)
type: Shitao/MLDR
config: ko
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 46.733000000000004
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (pt)
type: Shitao/MLDR
config: pt
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 79.105
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (ru)
type: Shitao/MLDR
config: ru
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 64.21
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (th)
type: Shitao/MLDR
config: th
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 35.467
- task:
type: Retrieval
dataset:
name: MTEB MultiLongDocRetrieval (zh)
type: Shitao/MLDR
config: zh
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 27.419
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 61.02000000000001
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: ndcg_at_10
value: 36.65
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: ndcg_at_10
value: 26.831
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: ndcg_at_10
value: 58.111000000000004
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: ndcg_at_10
value: 43.126999999999995
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_ap
value: 72.67630697316041
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 84.85000000000001
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_ap
value: 100
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 65.99189110918043
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_spearman
value: 16.124364530596228
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_ap
value: 92.43431057460192
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_ap
value: 99.06090138049724
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_ap
value: 58.9314954874314
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.59833795013851
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 44.73684210526315
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_spearman
value: 39.36450754137984
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: ndcg_at_10
value: 80.76299999999999
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 88.022
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.719165988934385
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.25390069273025
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 18.243000000000002
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: ndcg_at_10
value: 14.219000000000001
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_ap
value: 75.4022630307816
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 79.34269390198548
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_spearman
value: 74.0651660446132
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_spearman
value: 78.62693119733123
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 77.50660544631359
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 85.55415077723738
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 81.67550814479077
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 88.94601412322764
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 84.33844259337481
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 81.58650681159105
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 78.82472265884256
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 76.43637938260397
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 84.71008299464059
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 88.88074713413747
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 76.36405640457285
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 83.84737910084762
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 87.03931621433031
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 84.43335591752246
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 83.85268648747021
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 82.45786516224341
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 67.20227303970304
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 60.892838305537126
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 72.01876318464508
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 42.3879320510127
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 65.54048784845729
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 58.55244068334867
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 66.48710288440624
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 66.585754901838
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 81.03001290557805
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 62.28001859884359
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 79.64106342105019
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 78.27915339361124
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 78.28574268257462
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 72.92658860751482
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 74.83418886368217
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 56.01064022625769
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 53.64332829635126
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_spearman
value: 73.24670207647144
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_spearman
value: 80.7157790971544
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 86.45763616928973
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_spearman
value: 84.4335500335282
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 84.15276484499303
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: ndcg_at_10
value: 73.433
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: ndcg_at_10
value: 58.919999999999995
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_ap
value: 95.40564890916419
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 63.41856697730145
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 31.709285904909112
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.09341030060322
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_spearman
value: 30.58262517835034
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_spearman
value: 29.744542072951358
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 88.03333333333333
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: ndcg_at_10
value: 83.043
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.08577894804324
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: ndcg_at_10
value: 84.718
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 48.726
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 57.56
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: ndcg_at_10
value: 59.355999999999995
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 82.765
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 73.69942196531792
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 32.86585365853657
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 95.81666666666666
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 97.75
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 93.78333333333335
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 90.72333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 42.45202558635395
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 77.59238095238095
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 35.69686411149825
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 82.59333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 84.1456922987907
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 52.47462133594857
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 67.62965440356746
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 79.48412698412699
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 75.85
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 27.32600866497127
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 84.38
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 42.98888712165028
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 85.55690476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 46.68466031323174
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 32.73071428571428
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 88.26333333333334
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 96.61666666666666
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 91.30666666666666
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 70.03714285714285
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 89.09
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 59.570476190476185
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 92.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 97.68333333333334
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 80.40880503144653
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 89.7008547008547
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 81.84833333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 71.69696969696969
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 55.76985790822269
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 91.66666666666666
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 68.36668519547896
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 36.73992673992674
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 63.420952380952365
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 91.28999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 40.95392490046146
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 77.58936507936508
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 91.28999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 63.563650793650794
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 94.35
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 91.43
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 95.73333333333332
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 79.38666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 89.64
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 21.257184628237262
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 13.592316017316017
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 73.22666666666666
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 51.711309523809526
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 24.98790634904795
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 17.19218192918193
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 93.26666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 94.57333333333334
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 42.35127206127206
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 51.12318903318903
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 23.856320290390055
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 79.52833333333334
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 95.93333333333334
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 90.75333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 30.802919708029197
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 15.984076294076294
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 91.82666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 91.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 76.36054421768706
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 9.232711399711398
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 45.640803181175855
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 86.29
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 88.90833333333332
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 11.11880248978075
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 48.45839345839346
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 65.68157033805888
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 94.63852498786997
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 81.67904761904761
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 89.35969868173258
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 5.957229437229437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 91.50333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 63.75498778998778
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 82.99190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 92.95
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 9.054042624042623
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 72.77064981488574
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 93.14
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 29.976786498525627
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 67.6525821596244
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 33.12964812964813
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 92.30666666666666
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 34.36077879427633
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 52.571845212690285
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 58.13107263107262
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 93.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 42.87370133925458
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 20.394327616827614
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 84.29967426710098
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 88.80666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 67.23062271062273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 78.08398950131233
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 77.85166666666666
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 67.63004001231148
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 89.77000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 40.2654503616042
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 83.90333333333334
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 77.80666666666666
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 84.08
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 60.43098607367475
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 88.19333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 90.55352798053529
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: f1
value: 88.44999999999999
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 57.25416429643288
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 56.616646560243524
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: ndcg_at_10
value: 22.819
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.02579999999999
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.60045274476514
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 50.346666699466205
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_ap
value: 71.88199004440489
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_ap
value: 85.41587779677383
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: ndcg_at_10
value: 72.792
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 82.58000000000001
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_10
value: 67.327
---
## gte-multilingual-base
The **gte-multilingual-base** model is the latest in the [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) (General Text Embedding) family of models, featuring several key attributes:
- **High Performance**: Achieves state-of-the-art (SOTA) results in multilingual retrieval tasks and multi-task representation model evaluations when compared to models of similar size.
- **Training Architecture**: Trained using an encoder-only transformers architecture, resulting in a smaller model size. Unlike previous models based on decoder-only LLM architectures (e.g., gte-qwen2-1.5b-instruct), this model has lower hardware requirements for inference, offering a 10x increase in inference speed.
- **Long Context**: Supports text lengths up to **8192** tokens.
- **Multilingual Capability**: Supports over **70** languages.
- **Elastic Dense Embedding**: Supports elastic output dense representations while maintaining the effectiveness of downstream tasks, which significantly reduces storage costs and improves execution efficiency.
- **Sparse Vectors**: In addition to dense representations, it can also generate sparse vectors.
**Paper**: [mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval](https://arxiv.org/pdf/2407.19669)
## Model Information
- Model Size: 305M
- Embedding Dimension: 768
- Max Input Tokens: 8192
## Usage
- **It is recommended to install xformers and enable unpadding for acceleration;
refer to [enable-unpadding-and-xformers](https://huggingface.co/Alibaba-NLP/new-impl#recommendation-enable-unpadding-and-acceleration-with-xformers).**
- **How to use it offline: [new-impl/discussions/2](https://huggingface.co/Alibaba-NLP/new-impl/discussions/2#662b08d04d8c3d0a09c88fa3)**
- **How to use with [TEI](https://github.com/huggingface/text-embeddings-inference): [refs/pr/7](https://huggingface.co/Alibaba-NLP/gte-multilingual-base/discussions/7#66bfb82ea03b764ca92a2221)**
### Get Dense Embeddings with Transformers
```python
# Requires transformers>=4.36.0
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"北京",
"快排算法介绍"
]
model_name_or_path = 'Alibaba-NLP/gte-multilingual-base'
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModel.from_pretrained(model_name_or_path, trust_remote_code=True)
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=8192, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
dimension = 768  # Output dimension of the embedding, any value in [128, 768]
embeddings = outputs.last_hidden_state[:, 0][:, :dimension]
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
# [[0.3016996383666992, 0.7503870129585266, 0.3203084468841553]]
```
### Use with sentence-transformers
```python
# Requires sentence-transformers>=3.0.0
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
import numpy as np
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"北京",
"快排算法介绍"
]
model_name_or_path="Alibaba-NLP/gte-multilingual-base"
model = SentenceTransformer(model_name_or_path, trust_remote_code=True)
embeddings = model.encode(input_texts) # embeddings.shape (4, 768)
# normalized embeddings
norms = np.linalg.norm(embeddings, ord=2, axis=1, keepdims=True)
norms[norms == 0] = 1
embeddings = embeddings / norms
# sim scores
scores = (embeddings[:1] @ embeddings[1:].T)
print(scores.tolist())
# [[0.301699697971344, 0.7503870129585266, 0.32030850648880005]]
```
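The elastic dense embedding feature can also be used from sentence-transformers directly. The following is a minimal sketch that relies on the library's generic `truncate_dim` option (an assumption about your installed sentence-transformers version, not a model-specific API); any dimension in [128, 768] should work for this model.
```python
from sentence_transformers import SentenceTransformer

# Elastic ("Matryoshka"-style) dense embeddings: truncate the output dimension.
# truncate_dim is a generic sentence-transformers option (>= 2.7), assumed available here.
model = SentenceTransformer(
    "Alibaba-NLP/gte-multilingual-base",
    trust_remote_code=True,
    truncate_dim=256,
)
embeddings = model.encode(["what is the capital of China?", "北京"])
print(embeddings.shape)  # (2, 256)
```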
### Use with custom code to get dense embeddings and sparse token weights
```python
# You can find the script gte_embedding.py in https://huggingface.co/Alibaba-NLP/gte-multilingual-base/blob/main/scripts/gte_embedding.py
from gte_embedding import GTEEmbeddidng
model_name_or_path = 'Alibaba-NLP/gte-multilingual-base'
model = GTEEmbeddidng(model_name_or_path)
query = "中国的首都在哪儿"
docs = [
"what is the capital of China?",
"how to implement quick sort in python?",
"北京",
"快排算法介绍"
]
embs = model.encode(docs, return_dense=True, return_sparse=True)
print('dense_embeddings vecs', embs['dense_embeddings'])
print('token_weights', embs['token_weights'])
pairs = [(query, doc) for doc in docs]
dense_scores = model.compute_scores(pairs, dense_weight=1.0, sparse_weight=0.0)
sparse_scores = model.compute_scores(pairs, dense_weight=0.0, sparse_weight=1.0)
hybrid_scores = model.compute_scores(pairs, dense_weight=1.0, sparse_weight=0.3)
print('dense_scores', dense_scores)
print('sparse_scores', sparse_scores)
print('hybrid_scores', hybrid_scores)
# dense_scores [0.85302734375, 0.257568359375, 0.76953125, 0.325439453125]
# sparse_scores [0.0, 0.0, 4.600879669189453, 1.570279598236084]
# hybrid_scores [0.85302734375, 0.257568359375, 2.1497951507568356, 0.7965233325958252]
```
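For reference, the hybrid scores printed above are consistent with a simple weighted sum of the dense and sparse scores. The sketch below recomputes them from the printed values; it illustrates that relationship and is not a claim about the internal implementation of `compute_scores`.
```python
# Recombine the printed dense and sparse scores with the same weights as above.
dense_scores = [0.85302734375, 0.257568359375, 0.76953125, 0.325439453125]
sparse_scores = [0.0, 0.0, 4.600879669189453, 1.570279598236084]
dense_weight, sparse_weight = 1.0, 0.3

hybrid = [dense_weight * d + sparse_weight * s for d, s in zip(dense_scores, sparse_scores)]
print(hybrid)  # matches the hybrid_scores printed above (up to floating-point rounding)
```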
## Evaluation
We validated the performance of the **gte-multilingual-base** model on multiple downstream tasks, including multilingual retrieval, cross-lingual retrieval, long text retrieval, and general text representation evaluation on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard), among others.
### Retrieval Task
Retrieval results on [MIRACL](https://arxiv.org/abs/2210.09984) and [MLDR](https://arxiv.org/abs/2402.03216) (multilingual), [MKQA](https://arxiv.org/abs/2007.15207) (crosslingual), [BEIR](https://arxiv.org/abs/2104.08663) and [LoCo](https://arxiv.org/abs/2402.07440) (English).

- Detailed results on [MLDR](https://arxiv.org/abs/2402.03216)

- Detailed results on [LoCo](https://arxiv.org/abs/2402.07440)
### MTEB
Results on MTEB English, Chinese, French, Polish

**More detailed experimental results can be found in the [paper](https://arxiv.org/pdf/2407.19669)**.
## Cloud API Services
In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series, GTE models are also available as commercial API services on Alibaba Cloud.
- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
## Citation
If you find our paper or models helpful, please consider citing:
```bibtex
@misc{zhang2024mgte,
title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval},
author={Xin Zhang and Yanzhao Zhang and Dingkun Long and Wen Xie and Ziqi Dai and Jialong Tang and Huan Lin and Baosong Yang and Pengjun Xie and Fei Huang and Meishan Zhang and Wenjie Li and Min Zhang},
year={2024},
eprint={2407.19669},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.19669},
}
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
mogaio/pr_ebsa_fr_tran_merged25_e1_beginning_offsets | mogaio | text-classification | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"region:us"
] | 2023-12-15T19:02:22 | 2023-12-15T19:03:37 | 73 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
library_name: setfit
metrics:
- accuracy_score
- classification_report
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Adil Hussain
Adil Hussain est reconnaissant d''avoir reçu l''enseignement de l''acteur Naseeruddin
Shah à l''époque où il fréquentait l''École nationale d''art dramatique'
- text: 'Bien que leurs opinions sur la question de savoir si les migrants sont un
avantage ou un fardeau soient plus mitigées, de nettes majorités d''électeurs
de toute la ville de New York, de la banlieue et du nord de l''État ont déclaré
que l''État devrait essayer de ralentir l''afflux de migrants, plutôt que d''en
accepter davantage et de s''efforcer d''assimiler les nouveaux arrivants Les démocrates
aspirent à renverser six circonscriptions détenues par les républicains que M.
Biden a remportées en 2020, notamment celle de M Les républicains se sont emparés
de la crise des migrants, donnant un avant-goût des campagnes de l''année prochaine
Les républicains ont surenchéri : Elise Stefanik, la New-Yorkaise qui dirige la
conférence du parti démocrate à la Chambre des représentants,
Suite à la page suivante
a déclaré à Politico la semaine dernière que le parti allait consacrer 100 millions
de dollars aux campagnes dans les circonscriptions de New York Des problèmes à
venir pour les démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge Des problèmes à venir pour les
démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge.
Mais une autre préoccupation se profile alors que la crise se poursuit sans qu''aucune
issue ne soit en vue : les retombées potentielles pour leur parti lors des élections
de l''année prochaine Les républicains ont tendance à se sentir en sécurité lorsqu''ils
parlent d''immigration - comme les démocrates le font pour l''avortement - et
sont clairement à l''attaque sur la question des migrants à New York, tandis que
les démocrates sont sur la défensive, a déclaré Kyle Kondik, directeur de la communication
pour le Centre de politique de l''Université de Virginie, au réseau USA Today
Plus de 100 000 migrants ont été transportés à New York depuis la frontière sud
depuis le printemps 2022. Environ 60 000 d''entre eux sont hébergés dans la ville,
et plus de 2 100 ont été transportés dans des hôtels situés dans sept comtés au
nord de la ville, de Yonkers à la périphérie de Buffalo, où ils sont logés aux
frais de la ville Les démocrates doivent y remporter des victoires pour gagner
cinq sièges à la Chambre et faire du député Hakeem Jeffries, de Brooklyn, le prochain
président de la Chambre des représentants Les publicités d''attaque des républicains
s''écrivent pratiquement d''elles-mêmes à partir d''un flot de titres et d''images
télévisées, alors que le gouverneur Kathy Hochul, le maire de New York Eric Adams
et le président Joe Biden - tous démocrates - se rejettent mutuellement la faute
et s''échangent des coups de feu pour savoir qui devrait en faire le plus Isaac
Goldberg, un stratège démocrate qui a travaillé sur plusieurs campagnes électorales
à New York, a affirmé qu''il était beaucoup trop tôt pour prédire l''impact politique
de la crise des migrants, soulignant que les élections de 2024 n''auront lieu
que dans 14 mois et que de nombreuses questions tout aussi urgentes pourraient
se poser'
- text: 'LE CANDIDAT A LA PRESIDENCE RAMASWAMY VEUT METTRE FIN AU SYSTEME DE VISA
H-1B AUX ETATS-UNIS
Décrivant le programme de visas H-1B comme une forme de "servitude", Vivek Ramaswamy,
candidat républicain indien-américain à l''élection présidentielle, a promis de
"vider" le système basé sur la loterie et de le remplacer par un système d''admission
méritocratique s''il remporte les élections présidentielles de 2024'
- text: 'Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover
("Queer as Folk") a 54 ans Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris
Owen Donald Glover
("Queer as Folk") a 54 ans. a 54 ans. Acteur
("Je sais ce que vous avez fait l''été dernier") a 50 ans'
- text: 'Trump profiter de sa célébrité jusqu''à la Maison-Blanche.
"Cela a tué Howard parce qu''il était le roi de tous les médias Il a poursuivi
en disant que Trump ne laisserait pas ses partisans s''approcher de l''une de
ses propriétés. "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient
même pas entrer dans un putain d''hôtel [ "Si être réveillé signifie que je ne
peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens
les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi
réveillé comme vous le voulez" "Les gens qui votent pour Trump, pour la plupart,
ne les laisseraient même pas entrer dans un putain d''hôtel [...]. Allez à Mar-a-lago,
voyez s''il y a des gens qui vous ressemblent" Stern a également abordé les affirmations
de Trump et de ses partisans selon lesquelles Joe Biden a remporté l''élection
américaine de 2020 grâce à des votes frauduleux "Et soudain, Trump a transformé
Howard, qui était le roi de tous les médias, en prince Harry de tous les médias.
Tout le monde s''en fout "Trump avait l''habitude de participer à l''émission
de Stern chaque semaine. Ils étaient amis. Alors cette idée que Trump est le pire
type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous
avec lui ?"
M Mais Stern, qui par le passé a été accusé de racisme et de sexisme dans nombre
de ses sketches à l''antenne, a été un critique virulent de Trump tout au long
de sa présidence et, plus récemment, alors qu''il se prépare à se présenter à
nouveau en 2024.
En 2021, M "Combien de temps allons-nous continuer à élire des gens qui ont perdu
l''élection ?"
Il a poursuivi en qualifiant les partisans de Trump de "nigauds".
"Mon Dieu, j''ai l''impression d''être dans une nation de nigauds. J''espère qu''il
y a encore des gens brillants et dynamiques qui aiment ce pays", a-t-il déclaré
Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface
de la terre, pourquoi traîniez-vous avec lui ?"
M. Failla a déclaré que cela avait "tué" M Si "woke" signifie que je ne peux pas
soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes
qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke"
comme vous voulez Celui qui se décrit comme le "roi de tous les médias" a critiqué
ouvertement l''ancien président américain Donald Trump, les anti-vaxx et, plus
récemment, Lauren Boebert, qu''il a critiquée pour son comportement obscène dans
un théâtre de Denver au début du mois "L''omnipotence médiatique de Donald Trump
a brisé Howard Stern. C''est très important", a déclaré Failla dans la vidéo (selon
OK ! Magazine). "Trump avait l''habitude de participer à l''émission de Stern
chaque semaine L''aversion d''Howard Stern pour Donald Trump, c''est "tout l''ego".
Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela
signifie, ou que je soutiens les personnes qui veulent être transgenres ou que
je suis pour le vaccin, appelez-moi "woke" comme vous voulez Trump l''année prochaine.
"Je sais que je lui botterai le cul", a-t-il déclaré aux auditeurs.
L''année suivante, Stern a déclaré qu''il envisageait de se lancer dans la course
à la présidence "pour que le pays soit à nouveau juste" En réponse, Trump a partagé
sur sa plateforme Truth Social un clip de Fox News dans lequel l''animateur Jimmy
Failla critique Stern.
"L''omnipotence médiatique de Donald Trump a brisé Howard Stern "Je vais faire
la chose très simple qui remettra le pays sur le droit chemin : un vote, une personne",
a expliqué Stern, affirmant que Trump a en fait perdu l''élection de 2016 contre
Hillary Clinton qui a remporté le vote populaire - mais pas le collège électoral'
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy_score
value: 0.9434954007884363
name: Accuracy_Score
- type: classification_report
value:
'0':
precision: 0.9361702127659575
recall: 0.9322033898305084
f1-score: 0.9341825902335456
support: 236
'1':
precision: 0.9333333333333333
recall: 0.9302325581395349
f1-score: 0.9317803660565723
support: 301
'2':
precision: 0.9646017699115044
recall: 0.9732142857142857
f1-score: 0.9688888888888889
support: 224
accuracy: 0.9434954007884363
macro avg:
precision: 0.9447017720035985
recall: 0.945216744561443
f1-score: 0.9449506150596689
support: 761
weighted avg:
precision: 0.9434169513880108
recall: 0.9434954007884363
f1-score: 0.9434482162802315
support: 761
name: Classification_Report
---
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
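As a rough guide to what these two steps look like in code, the sketch below uses the SetFit 1.0 training API with placeholder data and illustrative hyperparameters; it is not the exact script used to train this model (the actual settings are listed under Training Hyperparameters below).
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder dataset: the real training data for this model is not distributed with this card.
train_dataset = Dataset.from_dict({
    "text": ["a clearly positive example", "a clearly negative example", "a neutral, factual example"],
    "label": [0, 1, 2],  # hypothetical label ids
})

# Same multilingual Sentence Transformer body as this model.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# Step 1 (contrastive fine-tuning of the body) and step 2 (training the classification head)
# are both handled by Trainer.train().
args = TrainingArguments(batch_size=8, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

model.save_pretrained("my-setfit-classifier")
```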
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| pos | <ul><li>"Les PHL lèvent 1,26 milliard de dollars grâce aux obligations en dollars de détail\nLE GOUVERNEMENT PHILIPPIN a levé 1,26 milliard de dollars lors de la première émission d'obligations de détail en dollars (RDB) sous l'administration Marcos, a déclaré le ministère des Finances (DoF)"</li><li>"Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23 Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23. Avec ses éléments de film et de vidéo, son interprétation psychologique et sombre de l'opéra de Richard Strauss avait un solide palmarès de reprises - depuis sa création en 1996, elle avait été présentée deux fois de plus à la COC et avait été reprise par plusieurs autres compagnies"</li><li>'Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\nTORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public "Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto.\nSimon, âgé de 81 ans, n\'avait pas regardé le film avant la première, et il ne l\'a pas regardé non plus dimanche TORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\n"Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto'</li></ul> |
| neg | <ul><li>'Le groupe Al-Mostaqilla de l\'université du Koweït a appelé les étudiants à organiser un sit-in à l\'université du Koweït lundi pour protester contre la décision de mettre fin aux classes mixtes La décision a été prise la semaine dernière par le nouveau ministre de l\'éducation, Adel Al-Mane, et le directeur par intérim de l\'université du Koweït, Fayez Al-Dhafiri, et mise en œuvre mercredi, trois jours seulement avant le début de la nouvelle année universitaire à la faculté de droit L\'association a également demandé au gouvernement de "cesser ses interventions politiques et médiatiques injustifiées" dans les affaires de l\'université du Koweït.\nL\'association a appelé le directeur par intérim de l\'université du Koweït à ne pas céder aux pressions politiques et médiatiques et à s\'efforcer de protéger l\'indépendance de l\'université Dhafiri a déclaré que la décision avait été prise en application de la loi de 1996 qui interdisait l\'enseignement mixte à l\'université du Koweït, malgré une décision de la Cour constitutionnelle de 2015 autorisant l\'enseignement mixte lorsqu\'il était nécessaire et dans des cas exceptionnels Parallèlement, l\'association des professeurs de l\'université du Koweït a publié samedi une déclaration demandant aux députés et au gouvernement de "cesser d\'interférer dans les affaires de l\'université du Koweït" et de maintenir l\'indépendance de l\'université "L\'université du Koweït était, est et sera toujours le porte-drapeau de la connaissance et des valeurs, à l\'abri de toute influence extérieure Le député Abdulwahab Al-Essa a reproché à l\'administration de l\'université du Koweït d\'avoir succombé à la pression politique au détriment de l\'intérêt public, ajoutant que l\'université du Koweït avait appliqué correctement une décision de la cour constitutionnelle autorisant les classes mixtes chaque fois que cela était nécessaire'</li><li>"L'immigration étant l'un des défis les plus difficiles à relever pour le président Joe Biden et apparaissant comme un enjeu majeur des élections de l'année prochaine, l'administration délocalise essentiellement la question en s'appuyant sur les pays d'Amérique centrale et d'Amérique du Sud pour empêcher les migrants de se diriger vers le nord"</li><li>'Lors d\'une réunion d\'information mardi, le porte-parole de l\'armée, le lieutenant-colonel Richard Hecht, a suggéré que les Palestiniens tentent de quitter la bande de Gaza par le poste-frontière de Rafah, en Égypte.\nLa perspective d\'un exode des habitants de Gaza vers le territoire égyptien a alarmé les autorités égyptiennes La question qui se pose est de savoir si Israël lancera une offensive terrestre dans la bande de Gaza, une bande de terre de 25 miles de long coincée entre Israël, l\'Égypte et la mer Méditerranée, où vivent 2,3 millions de personnes et qui est gouvernée par le Hamas depuis 2007 Israël pilonne la bande de Gaza ; les habitants se précipitent pour se mettre à l\'abri\nJERUSALEM - Les avions de combat israéliens ont bombardé la bande de Gaza quartier par quartier mardi, réduisant les bâtiments en ruines et poussant les habitants à se précipiter pour se mettre à l\'abri dans ce minuscule territoire isolé, alors qu\'Israël promet des représailles pour l\'attaque surprise du Hamas du week-end qui "se répercuteront Les autorités égyptiennes discutent avec Israël et les États-Unis afin de mettre en place des corridors humanitaires dans la bande de Gaza pour acheminer l\'aide, a déclaré un responsable égyptien. 
Des négociations sont en cours avec les Israéliens pour que la zone autour du point de passage de Rafah entre l\'Égypte et Gaza soit déclarée "zone d\'interdiction de feu", a déclaré le responsable, sous couvert d\'anonymat car il n\'était pas autorisé à parler aux médias'</li></ul> |
| obj | <ul><li>"L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie Trump, le candidat républicain à la primaire de 2024, pour améliorer l'économie, avec une marge de 47 % à 36 %. L'écart est de 46 %-26 % en faveur de M. Trump parmi les électeurs indépendants Presque tous les républicains interrogés ont exprimé leur pessimisme à l'égard de l'économie, selon le sondage : 96 % d'entre eux estiment que la situation se dégrade au lieu de s'améliorer Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie. Elle a puisé dans son épargne d'urgence cette année. Et elle ne croit pas que le président Joe Biden ressente sa douleur L'épicerie. Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore"</li><li>'Le Pentagone va interroger d\'autres militaires sur l\'attentat suicide de l\'aéroport de Kaboul en 2021\nLe commandement central du Pentagone a ordonné l\'audition d\'une vingtaine de militaires supplémentaires qui se trouvaient à l\'aéroport de Kaboul lorsque des kamikazes ont attaqué pendant le retrait chaotique des forces américaines d\'Afghanistan, alors que les critiques persistent sur le fait que l\'attaque meurtrière aurait pu être stoppée Certaines familles des personnes tuées ou blessées se sont plaintes que le Pentagone n\'avait pas fait preuve de suffisamment de transparence au sujet de l\'attentat à la bombe qui a tué 170 Afghans\net 13 militaires américains.\nL\'enquête du commandement central américain a conclu en novembre 2021 qu\'étant donné la détérioration de la sécurité à la porte de l\'Abbaye de l\'aéroport alors que les Afghans cherchaient de plus en plus à fuir, "l\'attaque n\'aurait pas pu être évitée au niveau tactique sans dégrader la mission visant à maximiser le nombre d\'évacués" Le Pentagone a déclaré que l\'examen de l\'attentat suicide n\'avait révélé aucune identification préalable d\'un attaquant possible ni aucune demande d\'"escalade des règles d\'engagement existantes" régissant l\'utilisation de la force par les troupes américaines'</li><li>'Les retombées de la guerre se répercutent sur les lieux de travail aux États-Unis.\nNEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus "À quoi me sert mon travail si je compromets ma propre morale et mon éthique ?\nL\'un des conflits les plus importants s\'est produit chez Starbucks après que Starbucks Workers United, un syndicat représentant 9 000 travailleurs dans plus de 360 magasins aux États-Unis, a tweeté "Solidarité avec la Palestine" deux jours après l\'attaque du Hamas. 
Le tweet a été supprimé au bout de 40 minutes, mais l\'entreprise a déclaré qu\'il avait donné lieu à plus de 1 000 plaintes, à des actes de vandalisme et à des affrontements dans ses magasins NEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy_Score | Classification_Report |
|:--------|:---------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **all** | 0.9435 | {'0': {'precision': 0.9361702127659575, 'recall': 0.9322033898305084, 'f1-score': 0.9341825902335456, 'support': 236}, '1': {'precision': 0.9333333333333333, 'recall': 0.9302325581395349, 'f1-score': 0.9317803660565723, 'support': 301}, '2': {'precision': 0.9646017699115044, 'recall': 0.9732142857142857, 'f1-score': 0.9688888888888889, 'support': 224}, 'accuracy': 0.9434954007884363, 'macro avg': {'precision': 0.9447017720035985, 'recall': 0.945216744561443, 'f1-score': 0.9449506150596689, 'support': 761}, 'weighted avg': {'precision': 0.9434169513880108, 'recall': 0.9434954007884363, 'f1-score': 0.9434482162802315, 'support': 761}} |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e1_beginning_offsets")
# Run inference
preds = model("Adil Hussain
Adil Hussain est reconnaissant d'avoir reçu l'enseignement de l'acteur Naseeruddin Shah à l'époque où il fréquentait l'École nationale d'art dramatique")
```
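Inference also works on batches of texts, and the LogisticRegression head can return class probabilities. The snippet below is a sketch; note that the index-to-label mapping (pos / neg / obj) is not documented in this card, so verify it on known examples before interpreting the columns.
```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e1_beginning_offsets")

texts = [
    "Le candidat a promis de réformer le système de visas.",
    "Les habitants se précipitent pour se mettre à l'abri.",
]

preds = model.predict(texts)        # hard label predictions for a batch
probs = model.predict_proba(texts)  # per-class probabilities from the LogisticRegression head
print(preds)
print(probs)
```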
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 9 | 247.2638 | 2089 |
| Label | Training Sample Count |
|:------|:----------------------|
| neg | 913 |
| obj | 1216 |
| pos | 911 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 1
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1 | 0.3703 | - |
| 0.0658 | 50 | 0.3145 | - |
| 0.1316 | 100 | 0.1839 | - |
| 0.1974 | 150 | 0.2558 | - |
| 0.2632 | 200 | 0.2683 | - |
| 0.3289 | 250 | 0.1572 | - |
| 0.3947 | 300 | 0.1953 | - |
| 0.4605 | 350 | 0.171 | - |
| 0.5263 | 400 | 0.2326 | - |
| 0.5921 | 450 | 0.1762 | - |
| 0.6579 | 500 | 0.2818 | - |
| 0.7237 | 550 | 0.2733 | - |
| 0.7895 | 600 | 0.195 | - |
| 0.8553 | 650 | 0.2104 | - |
| 0.9211 | 700 | 0.2124 | - |
| 0.9868 | 750 | 0.0818 | - |
| 1.0526 | 800 | 0.1046 | - |
| 1.1184 | 850 | 0.1633 | - |
| 1.1842 | 900 | 0.3207 | - |
| 1.25 | 950 | 0.2703 | - |
| 1.3158 | 1000 | 0.1934 | - |
| 1.3816 | 1050 | 0.2547 | - |
| 1.4474 | 1100 | 0.0933 | - |
| 1.5132 | 1150 | 0.2102 | - |
| 1.5789 | 1200 | 0.0699 | - |
| 1.6447 | 1250 | 0.1778 | - |
| 1.7105 | 1300 | 0.1796 | - |
| 1.7763 | 1350 | 0.0221 | - |
| 1.8421 | 1400 | 0.2154 | - |
| 1.9079 | 1450 | 0.1683 | - |
| 1.9737 | 1500 | 0.3096 | - |
| 2.0395 | 1550 | 0.201 | - |
| 2.1053 | 1600 | 0.1954 | - |
| 2.1711 | 1650 | 0.2301 | - |
| 2.2368 | 1700 | 0.1141 | - |
| 2.3026 | 1750 | 0.1949 | - |
| 2.3684 | 1800 | 0.164 | - |
| 2.4342 | 1850 | 0.2307 | - |
| 2.5 | 1900 | 0.1912 | - |
| 2.5658 | 1950 | 0.2349 | - |
| 2.6316 | 2000 | 0.0922 | - |
| 2.6974 | 2050 | 0.0702 | - |
| 2.7632 | 2100 | 0.1089 | - |
| 2.8289 | 2150 | 0.1711 | - |
| 2.8947 | 2200 | 0.1432 | - |
| 2.9605 | 2250 | 0.2739 | - |
| 3.0263 | 2300 | 0.1889 | - |
| 3.0921 | 2350 | 0.1036 | - |
| 3.1579 | 2400 | 0.1372 | - |
| 3.2237 | 2450 | 0.028 | - |
| 3.2895 | 2500 | 0.1739 | - |
| 3.3553 | 2550 | 0.142 | - |
| 3.4211 | 2600 | 0.0838 | - |
| 3.4868 | 2650 | 0.0657 | - |
| 3.5526 | 2700 | 0.0054 | - |
| 3.6184 | 2750 | 0.0426 | - |
| 3.6842 | 2800 | 0.1974 | - |
| 3.75 | 2850 | 0.0279 | - |
| 3.8158 | 2900 | 0.1326 | - |
| 3.8816 | 2950 | 0.1614 | - |
| 3.9474 | 3000 | 0.1251 | - |
| 4.0132 | 3050 | 0.1174 | - |
| 4.0789 | 3100 | 0.1948 | - |
| 4.1447 | 3150 | 0.0555 | - |
| 4.2105 | 3200 | 0.0064 | - |
| 4.2763 | 3250 | 0.064 | - |
| 4.3421 | 3300 | 0.0013 | - |
| 4.4079 | 3350 | 0.135 | - |
| 4.4737 | 3400 | 0.0574 | - |
| 4.5395 | 3450 | 0.174 | - |
| 4.6053 | 3500 | 0.2199 | - |
| 4.6711 | 3550 | 0.387 | - |
| 4.7368 | 3600 | 0.114 | - |
| 4.8026 | 3650 | 0.0853 | - |
| 4.8684 | 3700 | 0.0325 | - |
| 4.9342 | 3750 | 0.019 | - |
| 5.0 | 3800 | 0.0572 | - |
| 0.0013 | 1 | 0.1435 | - |
| 0.0658 | 50 | 0.0969 | - |
| 0.1316 | 100 | 0.1085 | - |
| 0.1974 | 150 | 0.0271 | - |
| 0.2632 | 200 | 0.0138 | - |
| 0.3289 | 250 | 0.058 | - |
| 0.3947 | 300 | 0.1205 | - |
| 0.4605 | 350 | 0.0788 | - |
| 0.5263 | 400 | 0.1449 | - |
| 0.5921 | 450 | 0.0383 | - |
| 0.6579 | 500 | 0.0338 | - |
| 0.7237 | 550 | 0.1253 | - |
| 0.7895 | 600 | 0.069 | - |
| 0.8553 | 650 | 0.104 | - |
| 0.9211 | 700 | 0.0462 | - |
| 0.9868 | 750 | 0.1975 | - |
| 1.0526 | 800 | 0.0241 | - |
| 1.1184 | 850 | 0.0426 | - |
| 1.1842 | 900 | 0.0519 | - |
| 1.25 | 950 | 0.0815 | - |
| 1.3158 | 1000 | 0.1839 | - |
| 1.3816 | 1050 | 0.0198 | - |
| 1.4474 | 1100 | 0.0128 | - |
| 1.5132 | 1150 | 0.1645 | - |
| 1.5789 | 1200 | 0.0019 | - |
| 1.6447 | 1250 | 0.0557 | - |
| 1.7105 | 1300 | 0.0098 | - |
| 1.7763 | 1350 | 0.001 | - |
| 1.8421 | 1400 | 0.1557 | - |
| 1.9079 | 1450 | 0.1286 | - |
| 1.9737 | 1500 | 0.094 | - |
| 2.0395 | 1550 | 0.0059 | - |
| 2.1053 | 1600 | 0.0227 | - |
| 2.1711 | 1650 | 0.0899 | - |
| 2.2368 | 1700 | 0.0053 | - |
| 2.3026 | 1750 | 0.0021 | - |
| 2.3684 | 1800 | 0.0114 | - |
| 2.4342 | 1850 | 0.1163 | - |
| 2.5 | 1900 | 0.0959 | - |
| 2.5658 | 1950 | 0.0252 | - |
| 2.6316 | 2000 | 0.0921 | - |
| 2.6974 | 2050 | 0.1159 | - |
| 2.7632 | 2100 | 0.0026 | - |
| 2.8289 | 2150 | 0.1211 | - |
| 2.8947 | 2200 | 0.1843 | - |
| 2.9605 | 2250 | 0.0014 | - |
| 3.0263 | 2300 | 0.0085 | - |
| 3.0921 | 2350 | 0.0839 | - |
| 3.1579 | 2400 | 0.2372 | - |
| 3.2237 | 2450 | 0.0213 | - |
| 3.2895 | 2500 | 0.0155 | - |
| 3.3553 | 2550 | 0.1128 | - |
| 3.4211 | 2600 | 0.0945 | - |
| 3.4868 | 2650 | 0.0917 | - |
| 3.5526 | 2700 | 0.0011 | - |
| 3.6184 | 2750 | 0.0024 | - |
| 3.6842 | 2800 | 0.0044 | - |
| 3.75 | 2850 | 0.121 | - |
| 3.8158 | 2900 | 0.0056 | - |
| 3.8816 | 2950 | 0.003 | - |
| 3.9474 | 3000 | 0.0899 | - |
| 4.0132 | 3050 | 0.0157 | - |
| 4.0789 | 3100 | 0.1188 | - |
| 4.1447 | 3150 | 0.001 | - |
| 4.2105 | 3200 | 0.0222 | - |
| 4.2763 | 3250 | 0.1209 | - |
| 4.3421 | 3300 | 0.1085 | - |
| 4.4079 | 3350 | 0.0054 | - |
| 4.4737 | 3400 | 0.0009 | - |
| 4.5395 | 3450 | 0.0015 | - |
| 4.6053 | 3500 | 0.003 | - |
| 4.6711 | 3550 | 0.0009 | - |
| 4.7368 | 3600 | 0.0003 | - |
| 4.8026 | 3650 | 0.0009 | - |
| 4.8684 | 3700 | 0.03 | - |
| 4.9342 | 3750 | 0.1206 | - |
| 5.0 | 3800 | 0.0003 | - |
| 0.0013 | 1 | 0.2045 | - |
| 0.0658 | 50 | 0.0078 | - |
| 0.1316 | 100 | 0.0087 | - |
| 0.1974 | 150 | 0.0386 | - |
| 0.2632 | 200 | 0.1015 | - |
| 0.3289 | 250 | 0.0022 | - |
| 0.3947 | 300 | 0.0291 | - |
| 0.4605 | 350 | 0.0013 | - |
| 0.5263 | 400 | 0.0022 | - |
| 0.5921 | 450 | 0.1324 | - |
| 0.6579 | 500 | 0.113 | - |
| 0.7237 | 550 | 0.0011 | - |
| 0.7895 | 600 | 0.1723 | - |
| 0.8553 | 650 | 0.0049 | - |
| 0.9211 | 700 | 0.206 | - |
| 0.9868 | 750 | 0.1683 | - |
| 1.0526 | 800 | 0.0954 | - |
| 1.1184 | 850 | 0.018 | - |
| 1.1842 | 900 | 0.1854 | - |
| 1.25 | 950 | 0.0342 | - |
| 1.3158 | 1000 | 0.0015 | - |
| 1.3816 | 1050 | 0.0062 | - |
| 1.4474 | 1100 | 0.1187 | - |
| 1.5132 | 1150 | 0.0048 | - |
| 1.5789 | 1200 | 0.0011 | - |
| 1.6447 | 1250 | 0.002 | - |
| 1.7105 | 1300 | 0.092 | - |
| 1.7763 | 1350 | 0.1245 | - |
| 1.8421 | 1400 | 0.0009 | - |
| 1.9079 | 1450 | 0.1185 | - |
| 1.9737 | 1500 | 0.0017 | - |
| 2.0395 | 1550 | 0.008 | - |
| 2.1053 | 1600 | 0.0049 | - |
| 2.1711 | 1650 | 0.0083 | - |
| 2.2368 | 1700 | 0.0026 | - |
| 2.3026 | 1750 | 0.0081 | - |
| 2.3684 | 1800 | 0.0036 | - |
| 2.4342 | 1850 | 0.0016 | - |
| 2.5 | 1900 | 0.0017 | - |
| 2.5658 | 1950 | 0.0014 | - |
| 2.6316 | 2000 | 0.0017 | - |
| 2.6974 | 2050 | 0.002 | - |
| 2.7632 | 2100 | 0.1022 | - |
| 2.8289 | 2150 | 0.0004 | - |
| 2.8947 | 2200 | 0.0007 | - |
| 2.9605 | 2250 | 0.0794 | - |
| 3.0263 | 2300 | 0.0183 | - |
| 3.0921 | 2350 | 0.0377 | - |
| 3.1579 | 2400 | 0.029 | - |
| 3.2237 | 2450 | 0.0003 | - |
| 3.2895 | 2500 | 0.0961 | - |
| 3.3553 | 2550 | 0.0008 | - |
| 3.4211 | 2600 | 0.0873 | - |
| 3.4868 | 2650 | 0.0501 | - |
| 3.5526 | 2700 | 0.0029 | - |
| 3.6184 | 2750 | 0.0008 | - |
| 3.6842 | 2800 | 0.0004 | - |
| 3.75 | 2850 | 0.0011 | - |
| 3.8158 | 2900 | 0.0518 | - |
| 3.8816 | 2950 | 0.0002 | - |
| 3.9474 | 3000 | 0.1115 | - |
| 4.0132 | 3050 | 0.0129 | - |
| 4.0789 | 3100 | 0.0005 | - |
| 4.1447 | 3150 | 0.0012 | - |
| 4.2105 | 3200 | 0.1086 | - |
| 4.2763 | 3250 | 0.0199 | - |
| 4.3421 | 3300 | 0.0004 | - |
| 4.4079 | 3350 | 0.0001 | - |
| 4.4737 | 3400 | 0.0832 | - |
| 4.5395 | 3450 | 0.0003 | - |
| 4.6053 | 3500 | 0.0041 | - |
| 4.6711 | 3550 | 0.1146 | - |
| 4.7368 | 3600 | 0.0027 | - |
| 4.8026 | 3650 | 0.0002 | - |
| 4.8684 | 3700 | 0.0544 | - |
| 4.9342 | 3750 | 0.0002 | - |
| 5.0 | 3800 | 0.0046 | - |
| 0.0013 | 1 | 0.0015 | - |
| 0.0658 | 50 | 0.1973 | - |
| 0.1316 | 100 | 0.0106 | - |
| 0.1974 | 150 | 0.0744 | - |
| 0.2632 | 200 | 0.1033 | - |
| 0.3289 | 250 | 0.0425 | - |
| 0.3947 | 300 | 0.1125 | - |
| 0.4605 | 350 | 0.0018 | - |
| 0.5263 | 400 | 0.0019 | - |
| 0.5921 | 450 | 0.0002 | - |
| 0.6579 | 500 | 0.0007 | - |
| 0.7237 | 550 | 0.1393 | - |
| 0.7895 | 600 | 0.0002 | - |
| 0.8553 | 650 | 0.0043 | - |
| 0.9211 | 700 | 0.0339 | - |
| 0.9868 | 750 | 0.0002 | - |
| 0.0013 | 1 | 0.0007 | - |
| 0.0658 | 50 | 0.0419 | - |
| 0.1316 | 100 | 0.0068 | - |
| 0.1974 | 150 | 0.1401 | - |
| 0.2632 | 200 | 0.0423 | - |
| 0.3289 | 250 | 0.1122 | - |
| 0.3947 | 300 | 0.0037 | - |
| 0.4605 | 350 | 0.005 | - |
| 0.5263 | 400 | 0.0006 | - |
| 0.5921 | 450 | 0.0006 | - |
| 0.6579 | 500 | 0.0016 | - |
| 0.7237 | 550 | 0.1244 | - |
| 0.7895 | 600 | 0.0016 | - |
| 0.8553 | 650 | 0.0028 | - |
| 0.9211 | 700 | 0.002 | - |
| 0.9868 | 750 | 0.057 | - |
| 0.0013 | 1 | 0.1396 | - |
| 0.0658 | 50 | 0.0366 | - |
| 0.1316 | 100 | 0.0021 | - |
| 0.1974 | 150 | 0.1088 | - |
| 0.2632 | 200 | 0.0449 | - |
| 0.3289 | 250 | 0.0187 | - |
| 0.3947 | 300 | 0.0017 | - |
| 0.4605 | 350 | 0.1262 | - |
| 0.5263 | 400 | 0.0052 | - |
| 0.5921 | 450 | 0.1188 | - |
| 0.6579 | 500 | 0.0002 | - |
| 0.7237 | 550 | 0.0006 | - |
| 0.7895 | 600 | 0.0758 | - |
| 0.8553 | 650 | 0.025 | - |
| 0.9211 | 700 | 0.0052 | - |
| 0.9868 | 750 | 0.1985 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CAS"
] |
RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-01T15:31:15 | 2024-11-01T16:11:57 | 73 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-v0 - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-v0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-2.8b-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q2_K.gguf) | Q2_K | 1.01GB |
| [pythia-2.8b-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [pythia-2.8b-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q3_K.gguf) | Q3_K | 1.38GB |
| [pythia-2.8b-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q3_K_M.gguf) | Q3_K_M | 1.38GB |
| [pythia-2.8b-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q3_K_L.gguf) | Q3_K_L | 1.49GB |
| [pythia-2.8b-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [pythia-2.8b-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q4_0.gguf) | Q4_0 | 1.49GB |
| [pythia-2.8b-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [pythia-2.8b-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q4_K_S.gguf) | Q4_K_S | 1.5GB |
| [pythia-2.8b-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q4_K.gguf) | Q4_K | 1.66GB |
| [pythia-2.8b-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q4_K_M.gguf) | Q4_K_M | 1.66GB |
| [pythia-2.8b-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q4_1.gguf) | Q4_1 | 1.64GB |
| [pythia-2.8b-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q5_0.gguf) | Q5_0 | 1.8GB |
| [pythia-2.8b-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [pythia-2.8b-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q5_K.gguf) | Q5_K | 1.93GB |
| [pythia-2.8b-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q5_K_M.gguf) | Q5_K_M | 1.93GB |
| [pythia-2.8b-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q5_1.gguf) | Q5_1 | 1.95GB |
| [pythia-2.8b-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q6_K.gguf) | Q6_K | 2.13GB |
| [pythia-2.8b-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf/blob/main/pythia-2.8b-v0.Q8_0.gguf) | Q8_0 | 2.75GB |
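As a quick way to try one of these quants locally, the sketch below downloads a single GGUF file with `huggingface_hub` and runs it with `llama-cpp-python`. This is an illustrative example rather than an officially supported path: it assumes your llama.cpp build supports the GPT-NeoX architecture used by Pythia, and the filename must match the table above.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quantized file from this repository (filename taken from the table above).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/EleutherAI_-_pythia-2.8b-v0-gguf",
    filename="pythia-2.8b-v0.Q4_K_M.gguf",
)

# Pythia is a plain base language model, so use raw completion rather than a chat template.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Hello, I am", max_tokens=32)
print(out["choices"][0]["text"])
```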
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
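To enumerate the available checkpoint branches programmatically, the following sketch queries the original `EleutherAI/pythia-2.8b-v0` repository with `huggingface_hub`; it assumes the checkpoints are exposed as `step*` branches, as described above.
```python
from huggingface_hub import list_repo_refs

# List the intermediate-checkpoint branches (step0 ... step143000) of the original repository.
refs = list_repo_refs("EleutherAI/pythia-2.8b-v0")
steps = sorted(
    (b.name for b in refs.branches if b.name.startswith("step") and b.name[4:].isdigit()),
    key=lambda name: int(name[4:]),
)
print(f"{len(steps)} checkpoint branches, from {steps[0]} to {steps[-1]}")
```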
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose
or commercial chatbots. This means Pythia-2.8B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting them to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# `revision` selects the training checkpoint (a branch of the model repository);
# `cache_dir` is optional and only controls where the weights are stored locally.
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints saved every 500 steps. The checkpoints on Hugging Face are renamed
for consistency with the 2M-batch models, so `step1000` is the first saved
checkpoint for `pythia-1.4b` (corresponding to step 500 in training), and
`step1000` is likewise the first saved checkpoint for `pythia-6.9b`
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
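Since the tokenizer is shared, the token ids produced for Pythia-2.8B and
GPT-NeoX-20B should be identical; a quick, illustrative sanity check:
```python
from transformers import AutoTokenizer

pythia_tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")
neox_tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

text = "Pythia uses the GPT-NeoX-20B tokenizer."
# Both tokenizers are expected to yield the same input ids for the same text.
assert pythia_tok(text)["input_ids"] == neox_tok(text)["input_ids"]
```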
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
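To re-run a single task locally, the harness also exposes a Python API. The
sketch below is a hedged example: `simple_evaluate`, the `hf` model type, and
the task name assume a recent harness release and are not taken from the
Pythia repository.
```python
import lm_eval  # pip install lm-eval

# Evaluate Pythia-2.8B on one task; argument and task names depend on the
# installed harness version, so treat this as a starting point.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-2.8b",
    tasks=["lambada_openai"],
    batch_size=8,
)
print(results["results"]["lambada_openai"])
```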
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
CCwz/gme-Qwen2-VL-7B-Instruct-Q5_K_S-GGUF | CCwz | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2-VL",
"sentence-similarity",
"vidore",
"llama-cpp",
"gguf-my-repo",
"en",
"zh",
"base_model:Alibaba-NLP/gme-Qwen2-VL-7B-Instruct",
"base_model:quantized:Alibaba-NLP/gme-Qwen2-VL-7B-Instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-12-27T18:12:37 | 2024-12-27T18:13:01 | 73 | 0 | ---
base_model: Alibaba-NLP/gme-Qwen2-VL-7B-Instruct
language:
- en
- zh
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2-VL
- sentence-similarity
- vidore
- llama-cpp
- gguf-my-repo
model-index:
- name: gme-Qwen2-VL-7B-Instruct
results:
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 55.46303883144227
- type: cos_sim_spearman
value: 59.66708815497073
- type: euclidean_pearson
value: 57.81360946949099
- type: euclidean_spearman
value: 59.66710825926347
- type: manhattan_pearson
value: 57.723697562189344
- type: manhattan_spearman
value: 59.55004095814257
- type: cos_sim_pearson
value: 55.46303883144227
- type: cos_sim_spearman
value: 59.66708815497073
- type: euclidean_pearson
value: 57.81360946949099
- type: euclidean_spearman
value: 59.66710825926347
- type: manhattan_pearson
value: 57.723697562189344
- type: manhattan_spearman
value: 59.55004095814257
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 52.381881068686894
- type: cos_sim_spearman
value: 55.468235529709766
- type: euclidean_pearson
value: 56.974786979175086
- type: euclidean_spearman
value: 55.468231026153745
- type: manhattan_pearson
value: 56.944671325662576
- type: manhattan_spearman
value: 55.39037386224014
- type: cos_sim_pearson
value: 52.381881068686894
- type: cos_sim_spearman
value: 55.468235529709766
- type: euclidean_pearson
value: 56.974786979175086
- type: euclidean_spearman
value: 55.468231026153745
- type: manhattan_pearson
value: 56.944671325662576
- type: manhattan_spearman
value: 55.39037386224014
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.61194029850746
- type: ap
value: 41.29789064067677
- type: f1
value: 71.69633278678522
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.3258
- type: ap
value: 95.91845683387056
- type: f1
value: 97.32526074864263
- type: accuracy
value: 97.3258
- type: ap
value: 95.91845683387056
- type: f1
value: 97.32526074864263
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 64.794
- type: f1
value: 63.7329780206882
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 40.541
- type: map_at_10
value: 56.315000000000005
- type: map_at_100
value: 56.824
- type: map_at_1000
value: 56.825
- type: map_at_3
value: 51.778
- type: map_at_5
value: 54.623
- type: mrr_at_1
value: 41.038000000000004
- type: mrr_at_10
value: 56.532000000000004
- type: mrr_at_100
value: 57.034
- type: mrr_at_1000
value: 57.034
- type: mrr_at_3
value: 52.015
- type: mrr_at_5
value: 54.835
- type: ndcg_at_1
value: 40.541
- type: ndcg_at_10
value: 64.596
- type: ndcg_at_100
value: 66.656
- type: ndcg_at_1000
value: 66.666
- type: ndcg_at_3
value: 55.415000000000006
- type: ndcg_at_5
value: 60.527
- type: precision_at_1
value: 40.541
- type: precision_at_10
value: 9.083
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 21.977
- type: precision_at_5
value: 15.661
- type: recall_at_1
value: 40.541
- type: recall_at_10
value: 90.825
- type: recall_at_100
value: 99.57300000000001
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 65.932
- type: recall_at_5
value: 78.307
- type: map_at_1
value: 40.541
- type: map_at_10
value: 56.315000000000005
- type: map_at_100
value: 56.824
- type: map_at_1000
value: 56.825
- type: map_at_3
value: 51.778
- type: map_at_5
value: 54.623
- type: mrr_at_1
value: 41.038000000000004
- type: mrr_at_10
value: 56.532000000000004
- type: mrr_at_100
value: 57.034
- type: mrr_at_1000
value: 57.034
- type: mrr_at_3
value: 52.015
- type: mrr_at_5
value: 54.835
- type: ndcg_at_1
value: 40.541
- type: ndcg_at_10
value: 64.596
- type: ndcg_at_100
value: 66.656
- type: ndcg_at_1000
value: 66.666
- type: ndcg_at_3
value: 55.415000000000006
- type: ndcg_at_5
value: 60.527
- type: precision_at_1
value: 40.541
- type: precision_at_10
value: 9.083
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 21.977
- type: precision_at_5
value: 15.661
- type: recall_at_1
value: 40.541
- type: recall_at_10
value: 90.825
- type: recall_at_100
value: 99.57300000000001
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 65.932
- type: recall_at_5
value: 78.307
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 54.96111428218386
- type: v_measure
value: 54.96111428218386
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 50.637711388838945
- type: v_measure
value: 50.637711388838945
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.0741897266483
- type: mrr
value: 76.11440882909028
- type: map
value: 64.0741897266483
- type: mrr
value: 76.11440882909028
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.2557839280406
- type: cos_sim_spearman
value: 82.58200216886888
- type: euclidean_pearson
value: 84.80588838508498
- type: euclidean_spearman
value: 82.58200216886888
- type: manhattan_pearson
value: 84.53082035185592
- type: manhattan_spearman
value: 82.4964580510134
- type: cos_sim_pearson
value: 86.2557839280406
- type: cos_sim_spearman
value: 82.58200216886888
- type: euclidean_pearson
value: 84.80588838508498
- type: euclidean_spearman
value: 82.58200216886888
- type: manhattan_pearson
value: 84.53082035185592
- type: manhattan_spearman
value: 82.4964580510134
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 65.53432474956654
- type: cos_sim_spearman
value: 66.8014310403835
- type: euclidean_pearson
value: 65.59442518434007
- type: euclidean_spearman
value: 66.80144143248799
- type: manhattan_pearson
value: 65.55990611112435
- type: manhattan_spearman
value: 66.77720657746703
- type: cos_sim_pearson
value: 65.53432474956654
- type: cos_sim_spearman
value: 66.8014310403835
- type: euclidean_pearson
value: 65.59442518434007
- type: euclidean_spearman
value: 66.80144143248799
- type: manhattan_pearson
value: 65.55990611112435
- type: manhattan_spearman
value: 66.77720657746703
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.76298701298703
- type: f1
value: 84.24881789367576
- type: accuracy
value: 84.76298701298703
- type: f1
value: 84.24881789367576
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 46.86757924102047
- type: v_measure
value: 46.86757924102047
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 43.86043680479362
- type: v_measure
value: 43.86043680479362
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 45.684222588040605
- type: v_measure
value: 45.684222588040605
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.45639765303432
- type: v_measure
value: 45.45639765303432
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.7058672660788
- type: mrr
value: 90.5795634920635
- type: map
value: 88.7058672660788
- type: mrr
value: 90.5795634920635
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 90.50750030424048
- type: mrr
value: 92.3970634920635
- type: map
value: 90.50750030424048
- type: mrr
value: 92.3970634920635
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 28.848000000000003
- type: map_at_10
value: 40.453
- type: map_at_100
value: 42.065000000000005
- type: map_at_1000
value: 42.176
- type: map_at_3
value: 36.697
- type: map_at_5
value: 38.855000000000004
- type: mrr_at_1
value: 34.764
- type: mrr_at_10
value: 45.662000000000006
- type: mrr_at_100
value: 46.56
- type: mrr_at_1000
value: 46.597
- type: mrr_at_3
value: 42.632
- type: mrr_at_5
value: 44.249
- type: ndcg_at_1
value: 34.764
- type: ndcg_at_10
value: 47.033
- type: ndcg_at_100
value: 53.089
- type: ndcg_at_1000
value: 54.818
- type: ndcg_at_3
value: 41.142
- type: ndcg_at_5
value: 43.928
- type: precision_at_1
value: 34.764
- type: precision_at_10
value: 9.027000000000001
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 19.695
- type: precision_at_5
value: 14.535
- type: recall_at_1
value: 28.848000000000003
- type: recall_at_10
value: 60.849
- type: recall_at_100
value: 85.764
- type: recall_at_1000
value: 96.098
- type: recall_at_3
value: 44.579
- type: recall_at_5
value: 51.678999999999995
- type: map_at_1
value: 28.848000000000003
- type: map_at_10
value: 40.453
- type: map_at_100
value: 42.065000000000005
- type: map_at_1000
value: 42.176
- type: map_at_3
value: 36.697
- type: map_at_5
value: 38.855000000000004
- type: mrr_at_1
value: 34.764
- type: mrr_at_10
value: 45.662000000000006
- type: mrr_at_100
value: 46.56
- type: mrr_at_1000
value: 46.597
- type: mrr_at_3
value: 42.632
- type: mrr_at_5
value: 44.249
- type: ndcg_at_1
value: 34.764
- type: ndcg_at_10
value: 47.033
- type: ndcg_at_100
value: 53.089
- type: ndcg_at_1000
value: 54.818
- type: ndcg_at_3
value: 41.142
- type: ndcg_at_5
value: 43.928
- type: precision_at_1
value: 34.764
- type: precision_at_10
value: 9.027000000000001
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 19.695
- type: precision_at_5
value: 14.535
- type: recall_at_1
value: 28.848000000000003
- type: recall_at_10
value: 60.849
- type: recall_at_100
value: 85.764
- type: recall_at_1000
value: 96.098
- type: recall_at_3
value: 44.579
- type: recall_at_5
value: 51.678999999999995
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 30.731
- type: map_at_10
value: 41.859
- type: map_at_100
value: 43.13
- type: map_at_1000
value: 43.257
- type: map_at_3
value: 38.384
- type: map_at_5
value: 40.284
- type: mrr_at_1
value: 38.471
- type: mrr_at_10
value: 47.531
- type: mrr_at_100
value: 48.199
- type: mrr_at_1000
value: 48.24
- type: mrr_at_3
value: 44.989000000000004
- type: mrr_at_5
value: 46.403
- type: ndcg_at_1
value: 38.471
- type: ndcg_at_10
value: 48.022999999999996
- type: ndcg_at_100
value: 52.32599999999999
- type: ndcg_at_1000
value: 54.26
- type: ndcg_at_3
value: 42.986999999999995
- type: ndcg_at_5
value: 45.23
- type: precision_at_1
value: 38.471
- type: precision_at_10
value: 9.248000000000001
- type: precision_at_100
value: 1.469
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 20.892
- type: precision_at_5
value: 14.892
- type: recall_at_1
value: 30.731
- type: recall_at_10
value: 59.561
- type: recall_at_100
value: 77.637
- type: recall_at_1000
value: 89.64999999999999
- type: recall_at_3
value: 44.897999999999996
- type: recall_at_5
value: 51.181
- type: map_at_1
value: 30.731
- type: map_at_10
value: 41.859
- type: map_at_100
value: 43.13
- type: map_at_1000
value: 43.257
- type: map_at_3
value: 38.384
- type: map_at_5
value: 40.284
- type: mrr_at_1
value: 38.471
- type: mrr_at_10
value: 47.531
- type: mrr_at_100
value: 48.199
- type: mrr_at_1000
value: 48.24
- type: mrr_at_3
value: 44.989000000000004
- type: mrr_at_5
value: 46.403
- type: ndcg_at_1
value: 38.471
- type: ndcg_at_10
value: 48.022999999999996
- type: ndcg_at_100
value: 52.32599999999999
- type: ndcg_at_1000
value: 54.26
- type: ndcg_at_3
value: 42.986999999999995
- type: ndcg_at_5
value: 45.23
- type: precision_at_1
value: 38.471
- type: precision_at_10
value: 9.248000000000001
- type: precision_at_100
value: 1.469
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 20.892
- type: precision_at_5
value: 14.892
- type: recall_at_1
value: 30.731
- type: recall_at_10
value: 59.561
- type: recall_at_100
value: 77.637
- type: recall_at_1000
value: 89.64999999999999
- type: recall_at_3
value: 44.897999999999996
- type: recall_at_5
value: 51.181
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 34.949000000000005
- type: map_at_10
value: 48.117
- type: map_at_100
value: 49.355
- type: map_at_1000
value: 49.409
- type: map_at_3
value: 44.732
- type: map_at_5
value: 46.555
- type: mrr_at_1
value: 40.188
- type: mrr_at_10
value: 51.452
- type: mrr_at_100
value: 52.219
- type: mrr_at_1000
value: 52.24100000000001
- type: mrr_at_3
value: 48.642
- type: mrr_at_5
value: 50.134
- type: ndcg_at_1
value: 40.188
- type: ndcg_at_10
value: 54.664
- type: ndcg_at_100
value: 59.38099999999999
- type: ndcg_at_1000
value: 60.363
- type: ndcg_at_3
value: 48.684
- type: ndcg_at_5
value: 51.406
- type: precision_at_1
value: 40.188
- type: precision_at_10
value: 9.116
- type: precision_at_100
value: 1.248
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 22.236
- type: precision_at_5
value: 15.310000000000002
- type: recall_at_1
value: 34.949000000000005
- type: recall_at_10
value: 70.767
- type: recall_at_100
value: 90.79
- type: recall_at_1000
value: 97.57900000000001
- type: recall_at_3
value: 54.723
- type: recall_at_5
value: 61.404
- type: map_at_1
value: 34.949000000000005
- type: map_at_10
value: 48.117
- type: map_at_100
value: 49.355
- type: map_at_1000
value: 49.409
- type: map_at_3
value: 44.732
- type: map_at_5
value: 46.555
- type: mrr_at_1
value: 40.188
- type: mrr_at_10
value: 51.452
- type: mrr_at_100
value: 52.219
- type: mrr_at_1000
value: 52.24100000000001
- type: mrr_at_3
value: 48.642
- type: mrr_at_5
value: 50.134
- type: ndcg_at_1
value: 40.188
- type: ndcg_at_10
value: 54.664
- type: ndcg_at_100
value: 59.38099999999999
- type: ndcg_at_1000
value: 60.363
- type: ndcg_at_3
value: 48.684
- type: ndcg_at_5
value: 51.406
- type: precision_at_1
value: 40.188
- type: precision_at_10
value: 9.116
- type: precision_at_100
value: 1.248
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 22.236
- type: precision_at_5
value: 15.310000000000002
- type: recall_at_1
value: 34.949000000000005
- type: recall_at_10
value: 70.767
- type: recall_at_100
value: 90.79
- type: recall_at_1000
value: 97.57900000000001
- type: recall_at_3
value: 54.723
- type: recall_at_5
value: 61.404
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 25.312
- type: map_at_10
value: 34.799
- type: map_at_100
value: 35.906
- type: map_at_1000
value: 35.983
- type: map_at_3
value: 31.582
- type: map_at_5
value: 33.507999999999996
- type: mrr_at_1
value: 27.232
- type: mrr_at_10
value: 36.82
- type: mrr_at_100
value: 37.733
- type: mrr_at_1000
value: 37.791000000000004
- type: mrr_at_3
value: 33.804
- type: mrr_at_5
value: 35.606
- type: ndcg_at_1
value: 27.232
- type: ndcg_at_10
value: 40.524
- type: ndcg_at_100
value: 45.654
- type: ndcg_at_1000
value: 47.557
- type: ndcg_at_3
value: 34.312
- type: ndcg_at_5
value: 37.553
- type: precision_at_1
value: 27.232
- type: precision_at_10
value: 6.52
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.915000000000001
- type: precision_at_5
value: 10.847
- type: recall_at_1
value: 25.312
- type: recall_at_10
value: 56.169000000000004
- type: recall_at_100
value: 79.16499999999999
- type: recall_at_1000
value: 93.49300000000001
- type: recall_at_3
value: 39.5
- type: recall_at_5
value: 47.288999999999994
- type: map_at_1
value: 25.312
- type: map_at_10
value: 34.799
- type: map_at_100
value: 35.906
- type: map_at_1000
value: 35.983
- type: map_at_3
value: 31.582
- type: map_at_5
value: 33.507999999999996
- type: mrr_at_1
value: 27.232
- type: mrr_at_10
value: 36.82
- type: mrr_at_100
value: 37.733
- type: mrr_at_1000
value: 37.791000000000004
- type: mrr_at_3
value: 33.804
- type: mrr_at_5
value: 35.606
- type: ndcg_at_1
value: 27.232
- type: ndcg_at_10
value: 40.524
- type: ndcg_at_100
value: 45.654
- type: ndcg_at_1000
value: 47.557
- type: ndcg_at_3
value: 34.312
- type: ndcg_at_5
value: 37.553
- type: precision_at_1
value: 27.232
- type: precision_at_10
value: 6.52
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.915000000000001
- type: precision_at_5
value: 10.847
- type: recall_at_1
value: 25.312
- type: recall_at_10
value: 56.169000000000004
- type: recall_at_100
value: 79.16499999999999
- type: recall_at_1000
value: 93.49300000000001
- type: recall_at_3
value: 39.5
- type: recall_at_5
value: 47.288999999999994
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 17.153
- type: map_at_10
value: 27.671
- type: map_at_100
value: 29.186
- type: map_at_1000
value: 29.299999999999997
- type: map_at_3
value: 24.490000000000002
- type: map_at_5
value: 26.178
- type: mrr_at_1
value: 21.144
- type: mrr_at_10
value: 32.177
- type: mrr_at_100
value: 33.247
- type: mrr_at_1000
value: 33.306000000000004
- type: mrr_at_3
value: 29.187
- type: mrr_at_5
value: 30.817
- type: ndcg_at_1
value: 21.144
- type: ndcg_at_10
value: 33.981
- type: ndcg_at_100
value: 40.549
- type: ndcg_at_1000
value: 43.03
- type: ndcg_at_3
value: 28.132
- type: ndcg_at_5
value: 30.721999999999998
- type: precision_at_1
value: 21.144
- type: precision_at_10
value: 6.666999999999999
- type: precision_at_100
value: 1.147
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 14.302999999999999
- type: precision_at_5
value: 10.423
- type: recall_at_1
value: 17.153
- type: recall_at_10
value: 48.591
- type: recall_at_100
value: 76.413
- type: recall_at_1000
value: 93.8
- type: recall_at_3
value: 32.329
- type: recall_at_5
value: 38.958999999999996
- type: map_at_1
value: 17.153
- type: map_at_10
value: 27.671
- type: map_at_100
value: 29.186
- type: map_at_1000
value: 29.299999999999997
- type: map_at_3
value: 24.490000000000002
- type: map_at_5
value: 26.178
- type: mrr_at_1
value: 21.144
- type: mrr_at_10
value: 32.177
- type: mrr_at_100
value: 33.247
- type: mrr_at_1000
value: 33.306000000000004
- type: mrr_at_3
value: 29.187
- type: mrr_at_5
value: 30.817
- type: ndcg_at_1
value: 21.144
- type: ndcg_at_10
value: 33.981
- type: ndcg_at_100
value: 40.549
- type: ndcg_at_1000
value: 43.03
- type: ndcg_at_3
value: 28.132
- type: ndcg_at_5
value: 30.721999999999998
- type: precision_at_1
value: 21.144
- type: precision_at_10
value: 6.666999999999999
- type: precision_at_100
value: 1.147
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 14.302999999999999
- type: precision_at_5
value: 10.423
- type: recall_at_1
value: 17.153
- type: recall_at_10
value: 48.591
- type: recall_at_100
value: 76.413
- type: recall_at_1000
value: 93.8
- type: recall_at_3
value: 32.329
- type: recall_at_5
value: 38.958999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 27.909
- type: map_at_10
value: 40.168
- type: map_at_100
value: 41.524
- type: map_at_1000
value: 41.626000000000005
- type: map_at_3
value: 36.274
- type: map_at_5
value: 38.411
- type: mrr_at_1
value: 34.649
- type: mrr_at_10
value: 45.613
- type: mrr_at_100
value: 46.408
- type: mrr_at_1000
value: 46.444
- type: mrr_at_3
value: 42.620999999999995
- type: mrr_at_5
value: 44.277
- type: ndcg_at_1
value: 34.649
- type: ndcg_at_10
value: 47.071000000000005
- type: ndcg_at_100
value: 52.559999999999995
- type: ndcg_at_1000
value: 54.285000000000004
- type: ndcg_at_3
value: 40.63
- type: ndcg_at_5
value: 43.584
- type: precision_at_1
value: 34.649
- type: precision_at_10
value: 8.855
- type: precision_at_100
value: 1.361
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.538
- type: precision_at_5
value: 14.187
- type: recall_at_1
value: 27.909
- type: recall_at_10
value: 62.275000000000006
- type: recall_at_100
value: 84.95
- type: recall_at_1000
value: 96.02000000000001
- type: recall_at_3
value: 44.767
- type: recall_at_5
value: 52.03
- type: map_at_1
value: 27.909
- type: map_at_10
value: 40.168
- type: map_at_100
value: 41.524
- type: map_at_1000
value: 41.626000000000005
- type: map_at_3
value: 36.274
- type: map_at_5
value: 38.411
- type: mrr_at_1
value: 34.649
- type: mrr_at_10
value: 45.613
- type: mrr_at_100
value: 46.408
- type: mrr_at_1000
value: 46.444
- type: mrr_at_3
value: 42.620999999999995
- type: mrr_at_5
value: 44.277
- type: ndcg_at_1
value: 34.649
- type: ndcg_at_10
value: 47.071000000000005
- type: ndcg_at_100
value: 52.559999999999995
- type: ndcg_at_1000
value: 54.285000000000004
- type: ndcg_at_3
value: 40.63
- type: ndcg_at_5
value: 43.584
- type: precision_at_1
value: 34.649
- type: precision_at_10
value: 8.855
- type: precision_at_100
value: 1.361
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.538
- type: precision_at_5
value: 14.187
- type: recall_at_1
value: 27.909
- type: recall_at_10
value: 62.275000000000006
- type: recall_at_100
value: 84.95
- type: recall_at_1000
value: 96.02000000000001
- type: recall_at_3
value: 44.767
- type: recall_at_5
value: 52.03
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 25.846000000000004
- type: map_at_10
value: 36.870999999999995
- type: map_at_100
value: 38.294
- type: map_at_1000
value: 38.401
- type: map_at_3
value: 33.163
- type: map_at_5
value: 35.177
- type: mrr_at_1
value: 31.849
- type: mrr_at_10
value: 41.681000000000004
- type: mrr_at_100
value: 42.658
- type: mrr_at_1000
value: 42.71
- type: mrr_at_3
value: 39.003
- type: mrr_at_5
value: 40.436
- type: ndcg_at_1
value: 31.849
- type: ndcg_at_10
value: 43.291000000000004
- type: ndcg_at_100
value: 49.136
- type: ndcg_at_1000
value: 51.168
- type: ndcg_at_3
value: 37.297999999999995
- type: ndcg_at_5
value: 39.934
- type: precision_at_1
value: 31.849
- type: precision_at_10
value: 8.219
- type: precision_at_100
value: 1.318
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 18.151
- type: precision_at_5
value: 13.242
- type: recall_at_1
value: 25.846000000000004
- type: recall_at_10
value: 57.642
- type: recall_at_100
value: 82.069
- type: recall_at_1000
value: 95.684
- type: recall_at_3
value: 40.778999999999996
- type: recall_at_5
value: 47.647
- type: map_at_1
value: 25.846000000000004
- type: map_at_10
value: 36.870999999999995
- type: map_at_100
value: 38.294
- type: map_at_1000
value: 38.401
- type: map_at_3
value: 33.163
- type: map_at_5
value: 35.177
- type: mrr_at_1
value: 31.849
- type: mrr_at_10
value: 41.681000000000004
- type: mrr_at_100
value: 42.658
- type: mrr_at_1000
value: 42.71
- type: mrr_at_3
value: 39.003
- type: mrr_at_5
value: 40.436
- type: ndcg_at_1
value: 31.849
- type: ndcg_at_10
value: 43.291000000000004
- type: ndcg_at_100
value: 49.136
- type: ndcg_at_1000
value: 51.168
- type: ndcg_at_3
value: 37.297999999999995
- type: ndcg_at_5
value: 39.934
- type: precision_at_1
value: 31.849
- type: precision_at_10
value: 8.219
- type: precision_at_100
value: 1.318
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 18.151
- type: precision_at_5
value: 13.242
- type: recall_at_1
value: 25.846000000000004
- type: recall_at_10
value: 57.642
- type: recall_at_100
value: 82.069
- type: recall_at_1000
value: 95.684
- type: recall_at_3
value: 40.778999999999996
- type: recall_at_5
value: 47.647
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 25.102000000000004
- type: map_at_10
value: 33.31
- type: map_at_100
value: 34.443
- type: map_at_1000
value: 34.547
- type: map_at_3
value: 30.932
- type: map_at_5
value: 32.126
- type: mrr_at_1
value: 28.221
- type: mrr_at_10
value: 36.519
- type: mrr_at_100
value: 37.425000000000004
- type: mrr_at_1000
value: 37.498
- type: mrr_at_3
value: 34.254
- type: mrr_at_5
value: 35.388999999999996
- type: ndcg_at_1
value: 28.221
- type: ndcg_at_10
value: 38.340999999999994
- type: ndcg_at_100
value: 43.572
- type: ndcg_at_1000
value: 45.979
- type: ndcg_at_3
value: 33.793
- type: ndcg_at_5
value: 35.681000000000004
- type: precision_at_1
value: 28.221
- type: precision_at_10
value: 6.135
- type: precision_at_100
value: 0.946
- type: precision_at_1000
value: 0.123
- type: precision_at_3
value: 14.519000000000002
- type: precision_at_5
value: 9.969
- type: recall_at_1
value: 25.102000000000004
- type: recall_at_10
value: 50.639
- type: recall_at_100
value: 74.075
- type: recall_at_1000
value: 91.393
- type: recall_at_3
value: 37.952000000000005
- type: recall_at_5
value: 42.71
- type: map_at_1
value: 25.102000000000004
- type: map_at_10
value: 33.31
- type: map_at_100
value: 34.443
- type: map_at_1000
value: 34.547
- type: map_at_3
value: 30.932
- type: map_at_5
value: 32.126
- type: mrr_at_1
value: 28.221
- type: mrr_at_10
value: 36.519
- type: mrr_at_100
value: 37.425000000000004
- type: mrr_at_1000
value: 37.498
- type: mrr_at_3
value: 34.254
- type: mrr_at_5
value: 35.388999999999996
- type: ndcg_at_1
value: 28.221
- type: ndcg_at_10
value: 38.340999999999994
- type: ndcg_at_100
value: 43.572
- type: ndcg_at_1000
value: 45.979
- type: ndcg_at_3
value: 33.793
- type: ndcg_at_5
value: 35.681000000000004
- type: precision_at_1
value: 28.221
- type: precision_at_10
value: 6.135
- type: precision_at_100
value: 0.946
- type: precision_at_1000
value: 0.123
- type: precision_at_3
value: 14.519000000000002
- type: precision_at_5
value: 9.969
- type: recall_at_1
value: 25.102000000000004
- type: recall_at_10
value: 50.639
- type: recall_at_100
value: 74.075
- type: recall_at_1000
value: 91.393
- type: recall_at_3
value: 37.952000000000005
- type: recall_at_5
value: 42.71
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.618000000000002
- type: map_at_10
value: 26.714
- type: map_at_100
value: 27.929
- type: map_at_1000
value: 28.057
- type: map_at_3
value: 24.134
- type: map_at_5
value: 25.575
- type: mrr_at_1
value: 22.573999999999998
- type: mrr_at_10
value: 30.786
- type: mrr_at_100
value: 31.746000000000002
- type: mrr_at_1000
value: 31.822
- type: mrr_at_3
value: 28.412
- type: mrr_at_5
value: 29.818
- type: ndcg_at_1
value: 22.573999999999998
- type: ndcg_at_10
value: 31.852000000000004
- type: ndcg_at_100
value: 37.477
- type: ndcg_at_1000
value: 40.331
- type: ndcg_at_3
value: 27.314
- type: ndcg_at_5
value: 29.485
- type: precision_at_1
value: 22.573999999999998
- type: precision_at_10
value: 5.86
- type: precision_at_100
value: 1.012
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 13.099
- type: precision_at_5
value: 9.56
- type: recall_at_1
value: 18.618000000000002
- type: recall_at_10
value: 43.134
- type: recall_at_100
value: 68.294
- type: recall_at_1000
value: 88.283
- type: recall_at_3
value: 30.397999999999996
- type: recall_at_5
value: 35.998000000000005
- type: map_at_1
value: 18.618000000000002
- type: map_at_10
value: 26.714
- type: map_at_100
value: 27.929
- type: map_at_1000
value: 28.057
- type: map_at_3
value: 24.134
- type: map_at_5
value: 25.575
- type: mrr_at_1
value: 22.573999999999998
- type: mrr_at_10
value: 30.786
- type: mrr_at_100
value: 31.746000000000002
- type: mrr_at_1000
value: 31.822
- type: mrr_at_3
value: 28.412
- type: mrr_at_5
value: 29.818
- type: ndcg_at_1
value: 22.573999999999998
- type: ndcg_at_10
value: 31.852000000000004
- type: ndcg_at_100
value: 37.477
- type: ndcg_at_1000
value: 40.331
- type: ndcg_at_3
value: 27.314
- type: ndcg_at_5
value: 29.485
- type: precision_at_1
value: 22.573999999999998
- type: precision_at_10
value: 5.86
- type: precision_at_100
value: 1.012
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 13.099
- type: precision_at_5
value: 9.56
- type: recall_at_1
value: 18.618000000000002
- type: recall_at_10
value: 43.134
- type: recall_at_100
value: 68.294
- type: recall_at_1000
value: 88.283
- type: recall_at_3
value: 30.397999999999996
- type: recall_at_5
value: 35.998000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 27.76
- type: map_at_10
value: 37.569
- type: map_at_100
value: 38.784
- type: map_at_1000
value: 38.884
- type: map_at_3
value: 34.379
- type: map_at_5
value: 36.092999999999996
- type: mrr_at_1
value: 32.556000000000004
- type: mrr_at_10
value: 41.870000000000005
- type: mrr_at_100
value: 42.759
- type: mrr_at_1000
value: 42.806
- type: mrr_at_3
value: 39.086
- type: mrr_at_5
value: 40.574
- type: ndcg_at_1
value: 32.556000000000004
- type: ndcg_at_10
value: 43.382
- type: ndcg_at_100
value: 48.943
- type: ndcg_at_1000
value: 50.961999999999996
- type: ndcg_at_3
value: 37.758
- type: ndcg_at_5
value: 40.282000000000004
- type: precision_at_1
value: 32.556000000000004
- type: precision_at_10
value: 7.463
- type: precision_at_100
value: 1.1480000000000001
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 17.133000000000003
- type: precision_at_5
value: 12.164
- type: recall_at_1
value: 27.76
- type: recall_at_10
value: 56.71000000000001
- type: recall_at_100
value: 81.053
- type: recall_at_1000
value: 94.75
- type: recall_at_3
value: 41.387
- type: recall_at_5
value: 47.818
- type: map_at_1
value: 27.76
- type: map_at_10
value: 37.569
- type: map_at_100
value: 38.784
- type: map_at_1000
value: 38.884
- type: map_at_3
value: 34.379
- type: map_at_5
value: 36.092999999999996
- type: mrr_at_1
value: 32.556000000000004
- type: mrr_at_10
value: 41.870000000000005
- type: mrr_at_100
value: 42.759
- type: mrr_at_1000
value: 42.806
- type: mrr_at_3
value: 39.086
- type: mrr_at_5
value: 40.574
- type: ndcg_at_1
value: 32.556000000000004
- type: ndcg_at_10
value: 43.382
- type: ndcg_at_100
value: 48.943
- type: ndcg_at_1000
value: 50.961999999999996
- type: ndcg_at_3
value: 37.758
- type: ndcg_at_5
value: 40.282000000000004
- type: precision_at_1
value: 32.556000000000004
- type: precision_at_10
value: 7.463
- type: precision_at_100
value: 1.1480000000000001
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 17.133000000000003
- type: precision_at_5
value: 12.164
- type: recall_at_1
value: 27.76
- type: recall_at_10
value: 56.71000000000001
- type: recall_at_100
value: 81.053
- type: recall_at_1000
value: 94.75
- type: recall_at_3
value: 41.387
- type: recall_at_5
value: 47.818
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 23.62
- type: map_at_10
value: 33.522999999999996
- type: map_at_100
value: 35.281
- type: map_at_1000
value: 35.504000000000005
- type: map_at_3
value: 30.314999999999998
- type: map_at_5
value: 32.065
- type: mrr_at_1
value: 28.458
- type: mrr_at_10
value: 38.371
- type: mrr_at_100
value: 39.548
- type: mrr_at_1000
value: 39.601
- type: mrr_at_3
value: 35.638999999999996
- type: mrr_at_5
value: 37.319
- type: ndcg_at_1
value: 28.458
- type: ndcg_at_10
value: 39.715
- type: ndcg_at_100
value: 46.394999999999996
- type: ndcg_at_1000
value: 48.943999999999996
- type: ndcg_at_3
value: 34.361999999999995
- type: ndcg_at_5
value: 37.006
- type: precision_at_1
value: 28.458
- type: precision_at_10
value: 7.5889999999999995
- type: precision_at_100
value: 1.514
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 16.073999999999998
- type: precision_at_5
value: 11.976
- type: recall_at_1
value: 23.62
- type: recall_at_10
value: 52.117000000000004
- type: recall_at_100
value: 81.097
- type: recall_at_1000
value: 96.47
- type: recall_at_3
value: 37.537
- type: recall_at_5
value: 44.112
- type: map_at_1
value: 23.62
- type: map_at_10
value: 33.522999999999996
- type: map_at_100
value: 35.281
- type: map_at_1000
value: 35.504000000000005
- type: map_at_3
value: 30.314999999999998
- type: map_at_5
value: 32.065
- type: mrr_at_1
value: 28.458
- type: mrr_at_10
value: 38.371
- type: mrr_at_100
value: 39.548
- type: mrr_at_1000
value: 39.601
- type: mrr_at_3
value: 35.638999999999996
- type: mrr_at_5
value: 37.319
- type: ndcg_at_1
value: 28.458
- type: ndcg_at_10
value: 39.715
- type: ndcg_at_100
value: 46.394999999999996
- type: ndcg_at_1000
value: 48.943999999999996
- type: ndcg_at_3
value: 34.361999999999995
- type: ndcg_at_5
value: 37.006
- type: precision_at_1
value: 28.458
- type: precision_at_10
value: 7.5889999999999995
- type: precision_at_100
value: 1.514
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 16.073999999999998
- type: precision_at_5
value: 11.976
- type: recall_at_1
value: 23.62
- type: recall_at_10
value: 52.117000000000004
- type: recall_at_100
value: 81.097
- type: recall_at_1000
value: 96.47
- type: recall_at_3
value: 37.537
- type: recall_at_5
value: 44.112
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 18.336
- type: map_at_10
value: 26.811
- type: map_at_100
value: 27.892
- type: map_at_1000
value: 27.986
- type: map_at_3
value: 23.976
- type: map_at_5
value: 25.605
- type: mrr_at_1
value: 20.148
- type: mrr_at_10
value: 28.898000000000003
- type: mrr_at_100
value: 29.866
- type: mrr_at_1000
value: 29.929
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.744999999999997
- type: ndcg_at_1
value: 20.148
- type: ndcg_at_10
value: 32.059
- type: ndcg_at_100
value: 37.495
- type: ndcg_at_1000
value: 39.855000000000004
- type: ndcg_at_3
value: 26.423000000000002
- type: ndcg_at_5
value: 29.212
- type: precision_at_1
value: 20.148
- type: precision_at_10
value: 5.268
- type: precision_at_100
value: 0.872
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.503
- type: recall_at_1
value: 18.336
- type: recall_at_10
value: 46.411
- type: recall_at_100
value: 71.33500000000001
- type: recall_at_1000
value: 88.895
- type: recall_at_3
value: 31.134
- type: recall_at_5
value: 37.862
- type: map_at_1
value: 18.336
- type: map_at_10
value: 26.811
- type: map_at_100
value: 27.892
- type: map_at_1000
value: 27.986
- type: map_at_3
value: 23.976
- type: map_at_5
value: 25.605
- type: mrr_at_1
value: 20.148
- type: mrr_at_10
value: 28.898000000000003
- type: mrr_at_100
value: 29.866
- type: mrr_at_1000
value: 29.929
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.744999999999997
- type: ndcg_at_1
value: 20.148
- type: ndcg_at_10
value: 32.059
- type: ndcg_at_100
value: 37.495
- type: ndcg_at_1000
value: 39.855000000000004
- type: ndcg_at_3
value: 26.423000000000002
- type: ndcg_at_5
value: 29.212
- type: precision_at_1
value: 20.148
- type: precision_at_10
value: 5.268
- type: precision_at_100
value: 0.872
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.503
- type: recall_at_1
value: 18.336
- type: recall_at_10
value: 46.411
- type: recall_at_100
value: 71.33500000000001
- type: recall_at_1000
value: 88.895
- type: recall_at_3
value: 31.134
- type: recall_at_5
value: 37.862
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 21.149
- type: map_at_10
value: 35.251
- type: map_at_100
value: 37.342
- type: map_at_1000
value: 37.516
- type: map_at_3
value: 30.543
- type: map_at_5
value: 33.19
- type: mrr_at_1
value: 47.687000000000005
- type: mrr_at_10
value: 59.391000000000005
- type: mrr_at_100
value: 59.946999999999996
- type: mrr_at_1000
value: 59.965999999999994
- type: mrr_at_3
value: 56.938
- type: mrr_at_5
value: 58.498000000000005
- type: ndcg_at_1
value: 47.687000000000005
- type: ndcg_at_10
value: 45.381
- type: ndcg_at_100
value: 52.405
- type: ndcg_at_1000
value: 55.041
- type: ndcg_at_3
value: 40.024
- type: ndcg_at_5
value: 41.821999999999996
- type: precision_at_1
value: 47.687000000000005
- type: precision_at_10
value: 13.355
- type: precision_at_100
value: 2.113
- type: precision_at_1000
value: 0.261
- type: precision_at_3
value: 29.793999999999997
- type: precision_at_5
value: 21.811
- type: recall_at_1
value: 21.149
- type: recall_at_10
value: 49.937
- type: recall_at_100
value: 73.382
- type: recall_at_1000
value: 87.606
- type: recall_at_3
value: 35.704
- type: recall_at_5
value: 42.309000000000005
- type: map_at_1
value: 21.149
- type: map_at_10
value: 35.251
- type: map_at_100
value: 37.342
- type: map_at_1000
value: 37.516
- type: map_at_3
value: 30.543
- type: map_at_5
value: 33.19
- type: mrr_at_1
value: 47.687000000000005
- type: mrr_at_10
value: 59.391000000000005
- type: mrr_at_100
value: 59.946999999999996
- type: mrr_at_1000
value: 59.965999999999994
- type: mrr_at_3
value: 56.938
- type: mrr_at_5
value: 58.498000000000005
- type: ndcg_at_1
value: 47.687000000000005
- type: ndcg_at_10
value: 45.381
- type: ndcg_at_100
value: 52.405
- type: ndcg_at_1000
value: 55.041
- type: ndcg_at_3
value: 40.024
- type: ndcg_at_5
value: 41.821999999999996
- type: precision_at_1
value: 47.687000000000005
- type: precision_at_10
value: 13.355
- type: precision_at_100
value: 2.113
- type: precision_at_1000
value: 0.261
- type: precision_at_3
value: 29.793999999999997
- type: precision_at_5
value: 21.811
- type: recall_at_1
value: 21.149
- type: recall_at_10
value: 49.937
- type: recall_at_100
value: 73.382
- type: recall_at_1000
value: 87.606
- type: recall_at_3
value: 35.704
- type: recall_at_5
value: 42.309000000000005
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 28.74
- type: map_at_10
value: 41.981
- type: map_at_100
value: 43.753
- type: map_at_1000
value: 43.858999999999995
- type: map_at_3
value: 37.634
- type: map_at_5
value: 40.158
- type: mrr_at_1
value: 43.086
- type: mrr_at_10
value: 51.249
- type: mrr_at_100
value: 52.154
- type: mrr_at_1000
value: 52.190999999999995
- type: mrr_at_3
value: 48.787000000000006
- type: mrr_at_5
value: 50.193
- type: ndcg_at_1
value: 43.086
- type: ndcg_at_10
value: 48.703
- type: ndcg_at_100
value: 55.531
- type: ndcg_at_1000
value: 57.267999999999994
- type: ndcg_at_3
value: 43.464000000000006
- type: ndcg_at_5
value: 45.719
- type: precision_at_1
value: 43.086
- type: precision_at_10
value: 10.568
- type: precision_at_100
value: 1.616
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.256
- type: precision_at_5
value: 17.509
- type: recall_at_1
value: 28.74
- type: recall_at_10
value: 59.349
- type: recall_at_100
value: 87.466
- type: recall_at_1000
value: 98.914
- type: recall_at_3
value: 43.322
- type: recall_at_5
value: 50.409000000000006
- type: map_at_1
value: 28.74
- type: map_at_10
value: 41.981
- type: map_at_100
value: 43.753
- type: map_at_1000
value: 43.858999999999995
- type: map_at_3
value: 37.634
- type: map_at_5
value: 40.158
- type: mrr_at_1
value: 43.086
- type: mrr_at_10
value: 51.249
- type: mrr_at_100
value: 52.154
- type: mrr_at_1000
value: 52.190999999999995
- type: mrr_at_3
value: 48.787000000000006
- type: mrr_at_5
value: 50.193
- type: ndcg_at_1
value: 43.086
- type: ndcg_at_10
value: 48.703
- type: ndcg_at_100
value: 55.531
- type: ndcg_at_1000
value: 57.267999999999994
- type: ndcg_at_3
value: 43.464000000000006
- type: ndcg_at_5
value: 45.719
- type: precision_at_1
value: 43.086
- type: precision_at_10
value: 10.568
- type: precision_at_100
value: 1.616
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.256
- type: precision_at_5
value: 17.509
- type: recall_at_1
value: 28.74
- type: recall_at_10
value: 59.349
- type: recall_at_100
value: 87.466
- type: recall_at_1000
value: 98.914
- type: recall_at_3
value: 43.322
- type: recall_at_5
value: 50.409000000000006
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 79.03788334335539
- type: cos_sim_ap
value: 87.21703260472833
- type: cos_sim_f1
value: 79.87784187309127
- type: cos_sim_precision
value: 77.36634531113059
- type: cos_sim_recall
value: 82.55786766425064
- type: dot_accuracy
value: 79.03788334335539
- type: dot_ap
value: 87.22906528217948
- type: dot_f1
value: 79.87784187309127
- type: dot_precision
value: 77.36634531113059
- type: dot_recall
value: 82.55786766425064
- type: euclidean_accuracy
value: 79.03788334335539
- type: euclidean_ap
value: 87.21703670465753
- type: euclidean_f1
value: 79.87784187309127
- type: euclidean_precision
value: 77.36634531113059
- type: euclidean_recall
value: 82.55786766425064
- type: manhattan_accuracy
value: 78.28021647624774
- type: manhattan_ap
value: 86.66244127855394
- type: manhattan_f1
value: 79.24485643228577
- type: manhattan_precision
value: 76.71262858393521
- type: manhattan_recall
value: 81.94996492868833
- type: max_accuracy
value: 79.03788334335539
- type: max_ap
value: 87.22906528217948
- type: max_f1
value: 79.87784187309127
- type: cos_sim_accuracy
value: 79.03788334335539
- type: cos_sim_ap
value: 87.21703260472833
- type: cos_sim_f1
value: 79.87784187309127
- type: cos_sim_precision
value: 77.36634531113059
- type: cos_sim_recall
value: 82.55786766425064
- type: dot_accuracy
value: 79.03788334335539
- type: dot_ap
value: 87.22906528217948
- type: dot_f1
value: 79.87784187309127
- type: dot_precision
value: 77.36634531113059
- type: dot_recall
value: 82.55786766425064
- type: euclidean_accuracy
value: 79.03788334335539
- type: euclidean_ap
value: 87.21703670465753
- type: euclidean_f1
value: 79.87784187309127
- type: euclidean_precision
value: 77.36634531113059
- type: euclidean_recall
value: 82.55786766425064
- type: manhattan_accuracy
value: 78.28021647624774
- type: manhattan_ap
value: 86.66244127855394
- type: manhattan_f1
value: 79.24485643228577
- type: manhattan_precision
value: 76.71262858393521
- type: manhattan_recall
value: 81.94996492868833
- type: max_accuracy
value: 79.03788334335539
- type: max_ap
value: 87.22906528217948
- type: max_f1
value: 79.87784187309127
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 67.597
- type: map_at_10
value: 75.81599999999999
- type: map_at_100
value: 76.226
- type: map_at_1000
value: 76.23100000000001
- type: map_at_3
value: 73.907
- type: map_at_5
value: 75.08200000000001
- type: mrr_at_1
value: 67.756
- type: mrr_at_10
value: 75.8
- type: mrr_at_100
value: 76.205
- type: mrr_at_1000
value: 76.21
- type: mrr_at_3
value: 73.955
- type: mrr_at_5
value: 75.093
- type: ndcg_at_1
value: 67.756
- type: ndcg_at_10
value: 79.598
- type: ndcg_at_100
value: 81.34400000000001
- type: ndcg_at_1000
value: 81.477
- type: ndcg_at_3
value: 75.876
- type: ndcg_at_5
value: 77.94200000000001
- type: precision_at_1
value: 67.756
- type: precision_at_10
value: 9.231
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.362
- type: precision_at_5
value: 17.45
- type: recall_at_1
value: 67.597
- type: recall_at_10
value: 91.307
- type: recall_at_100
value: 98.946
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.428
- type: recall_at_5
value: 86.407
- type: map_at_1
value: 67.597
- type: map_at_10
value: 75.81599999999999
- type: map_at_100
value: 76.226
- type: map_at_1000
value: 76.23100000000001
- type: map_at_3
value: 73.907
- type: map_at_5
value: 75.08200000000001
- type: mrr_at_1
value: 67.756
- type: mrr_at_10
value: 75.8
- type: mrr_at_100
value: 76.205
- type: mrr_at_1000
value: 76.21
- type: mrr_at_3
value: 73.955
- type: mrr_at_5
value: 75.093
- type: ndcg_at_1
value: 67.756
- type: ndcg_at_10
value: 79.598
- type: ndcg_at_100
value: 81.34400000000001
- type: ndcg_at_1000
value: 81.477
- type: ndcg_at_3
value: 75.876
- type: ndcg_at_5
value: 77.94200000000001
- type: precision_at_1
value: 67.756
- type: precision_at_10
value: 9.231
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.362
- type: precision_at_5
value: 17.45
- type: recall_at_1
value: 67.597
- type: recall_at_10
value: 91.307
- type: recall_at_100
value: 98.946
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.428
- type: recall_at_5
value: 86.407
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.33
- type: map_at_10
value: 23.118
- type: map_at_100
value: 34.28
- type: map_at_1000
value: 36.574
- type: map_at_3
value: 15.576
- type: map_at_5
value: 18.778
- type: mrr_at_1
value: 75.25
- type: mrr_at_10
value: 81.958
- type: mrr_at_100
value: 82.282
- type: mrr_at_1000
value: 82.285
- type: mrr_at_3
value: 81.042
- type: mrr_at_5
value: 81.62899999999999
- type: ndcg_at_1
value: 63.625
- type: ndcg_at_10
value: 50.781
- type: ndcg_at_100
value: 55.537000000000006
- type: ndcg_at_1000
value: 62.651
- type: ndcg_at_3
value: 55.297
- type: ndcg_at_5
value: 53.103
- type: precision_at_1
value: 75.25
- type: precision_at_10
value: 41.475
- type: precision_at_100
value: 13.5
- type: precision_at_1000
value: 2.686
- type: precision_at_3
value: 59.333000000000006
- type: precision_at_5
value: 51.9
- type: recall_at_1
value: 9.33
- type: recall_at_10
value: 29.398000000000003
- type: recall_at_100
value: 61.951
- type: recall_at_1000
value: 85.463
- type: recall_at_3
value: 17.267
- type: recall_at_5
value: 21.89
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 25.608999999999998
- type: map_at_10
value: 78.649
- type: map_at_100
value: 81.67699999999999
- type: map_at_1000
value: 81.71000000000001
- type: map_at_3
value: 54.112
- type: map_at_5
value: 68.34700000000001
- type: mrr_at_1
value: 87.75
- type: mrr_at_10
value: 92.175
- type: mrr_at_100
value: 92.225
- type: mrr_at_1000
value: 92.227
- type: mrr_at_3
value: 91.833
- type: mrr_at_5
value: 92.06800000000001
- type: ndcg_at_1
value: 87.75
- type: ndcg_at_10
value: 86.56700000000001
- type: ndcg_at_100
value: 89.519
- type: ndcg_at_1000
value: 89.822
- type: ndcg_at_3
value: 84.414
- type: ndcg_at_5
value: 83.721
- type: precision_at_1
value: 87.75
- type: precision_at_10
value: 41.665
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 75.533
- type: precision_at_5
value: 64.01
- type: recall_at_1
value: 25.608999999999998
- type: recall_at_10
value: 88.708
- type: recall_at_100
value: 98.007
- type: recall_at_1000
value: 99.555
- type: recall_at_3
value: 57.157000000000004
- type: recall_at_5
value: 74.118
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 55.800000000000004
- type: map_at_10
value: 65.952
- type: map_at_100
value: 66.413
- type: map_at_1000
value: 66.426
- type: map_at_3
value: 63.3
- type: map_at_5
value: 64.945
- type: mrr_at_1
value: 55.800000000000004
- type: mrr_at_10
value: 65.952
- type: mrr_at_100
value: 66.413
- type: mrr_at_1000
value: 66.426
- type: mrr_at_3
value: 63.3
- type: mrr_at_5
value: 64.945
- type: ndcg_at_1
value: 55.800000000000004
- type: ndcg_at_10
value: 71.00800000000001
- type: ndcg_at_100
value: 72.974
- type: ndcg_at_1000
value: 73.302
- type: ndcg_at_3
value: 65.669
- type: ndcg_at_5
value: 68.634
- type: precision_at_1
value: 55.800000000000004
- type: precision_at_10
value: 8.690000000000001
- type: precision_at_100
value: 0.955
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.166999999999998
- type: precision_at_5
value: 15.939999999999998
- type: recall_at_1
value: 55.800000000000004
- type: recall_at_10
value: 86.9
- type: recall_at_100
value: 95.5
- type: recall_at_1000
value: 98.0
- type: recall_at_3
value: 72.5
- type: recall_at_5
value: 79.7
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 67.39500000000001
- type: f1
value: 62.01837785021389
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 86.27
- type: map_at_10
value: 92.163
- type: map_at_100
value: 92.351
- type: map_at_1000
value: 92.36
- type: map_at_3
value: 91.36
- type: map_at_5
value: 91.888
- type: mrr_at_1
value: 92.72399999999999
- type: mrr_at_10
value: 95.789
- type: mrr_at_100
value: 95.80300000000001
- type: mrr_at_1000
value: 95.804
- type: mrr_at_3
value: 95.64200000000001
- type: mrr_at_5
value: 95.75
- type: ndcg_at_1
value: 92.72399999999999
- type: ndcg_at_10
value: 94.269
- type: ndcg_at_100
value: 94.794
- type: ndcg_at_1000
value: 94.94
- type: ndcg_at_3
value: 93.427
- type: ndcg_at_5
value: 93.914
- type: precision_at_1
value: 92.72399999999999
- type: precision_at_10
value: 11.007
- type: precision_at_100
value: 1.153
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 34.993
- type: precision_at_5
value: 21.542
- type: recall_at_1
value: 86.27
- type: recall_at_10
value: 97.031
- type: recall_at_100
value: 98.839
- type: recall_at_1000
value: 99.682
- type: recall_at_3
value: 94.741
- type: recall_at_5
value: 96.03
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 29.561999999999998
- type: map_at_10
value: 48.52
- type: map_at_100
value: 50.753
- type: map_at_1000
value: 50.878
- type: map_at_3
value: 42.406
- type: map_at_5
value: 45.994
- type: mrr_at_1
value: 54.784
- type: mrr_at_10
value: 64.51400000000001
- type: mrr_at_100
value: 65.031
- type: mrr_at_1000
value: 65.05199999999999
- type: mrr_at_3
value: 62.474
- type: mrr_at_5
value: 63.562
- type: ndcg_at_1
value: 54.784
- type: ndcg_at_10
value: 57.138
- type: ndcg_at_100
value: 63.666999999999994
- type: ndcg_at_1000
value: 65.379
- type: ndcg_at_3
value: 52.589
- type: ndcg_at_5
value: 54.32599999999999
- type: precision_at_1
value: 54.784
- type: precision_at_10
value: 15.693999999999999
- type: precision_at_100
value: 2.259
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 34.774
- type: precision_at_5
value: 25.772000000000002
- type: recall_at_1
value: 29.561999999999998
- type: recall_at_10
value: 64.708
- type: recall_at_100
value: 87.958
- type: recall_at_1000
value: 97.882
- type: recall_at_3
value: 48.394
- type: recall_at_5
value: 56.101
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.72
- type: map_at_10
value: 71.905
- type: map_at_100
value: 72.685
- type: map_at_1000
value: 72.72800000000001
- type: map_at_3
value: 68.538
- type: map_at_5
value: 70.675
- type: mrr_at_1
value: 87.441
- type: mrr_at_10
value: 91.432
- type: mrr_at_100
value: 91.512
- type: mrr_at_1000
value: 91.513
- type: mrr_at_3
value: 90.923
- type: mrr_at_5
value: 91.252
- type: ndcg_at_1
value: 87.441
- type: ndcg_at_10
value: 79.212
- type: ndcg_at_100
value: 81.694
- type: ndcg_at_1000
value: 82.447
- type: ndcg_at_3
value: 74.746
- type: ndcg_at_5
value: 77.27199999999999
- type: precision_at_1
value: 87.441
- type: precision_at_10
value: 16.42
- type: precision_at_100
value: 1.833
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 48.184
- type: precision_at_5
value: 30.897999999999996
- type: recall_at_1
value: 43.72
- type: recall_at_10
value: 82.1
- type: recall_at_100
value: 91.62700000000001
- type: recall_at_1000
value: 96.556
- type: recall_at_3
value: 72.275
- type: recall_at_5
value: 77.24499999999999
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.520969603693736
- type: f1
value: 42.359043311419626
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.72559999999999
- type: ap
value: 95.01759461773742
- type: f1
value: 96.72429945397575
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 90.1688555347092
- type: ap
value: 63.36583667477521
- type: f1
value: 85.6845016521436
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 68.8503997749679
- type: cos_sim_spearman
value: 74.15059291199371
- type: euclidean_pearson
value: 73.01105331948172
- type: euclidean_spearman
value: 74.15059069348803
- type: manhattan_pearson
value: 72.80856655624557
- type: manhattan_spearman
value: 73.95174793448955
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6
metrics:
- type: map
value: 32.68592539803807
- type: mrr
value: 31.58968253968254
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 71.242
- type: map_at_10
value: 80.01
- type: map_at_100
value: 80.269
- type: map_at_1000
value: 80.276
- type: map_at_3
value: 78.335
- type: map_at_5
value: 79.471
- type: mrr_at_1
value: 73.668
- type: mrr_at_10
value: 80.515
- type: mrr_at_100
value: 80.738
- type: mrr_at_1000
value: 80.744
- type: mrr_at_3
value: 79.097
- type: mrr_at_5
value: 80.045
- type: ndcg_at_1
value: 73.668
- type: ndcg_at_10
value: 83.357
- type: ndcg_at_100
value: 84.442
- type: ndcg_at_1000
value: 84.619
- type: ndcg_at_3
value: 80.286
- type: ndcg_at_5
value: 82.155
- type: precision_at_1
value: 73.668
- type: precision_at_10
value: 9.905
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.024
- type: precision_at_5
value: 19.017
- type: recall_at_1
value: 71.242
- type: recall_at_10
value: 93.11
- type: recall_at_100
value: 97.85000000000001
- type: recall_at_1000
value: 99.21900000000001
- type: recall_at_3
value: 85.137
- type: recall_at_5
value: 89.548
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 22.006999999999998
- type: map_at_10
value: 34.994
- type: map_at_100
value: 36.183
- type: map_at_1000
value: 36.227
- type: map_at_3
value: 30.75
- type: map_at_5
value: 33.155
- type: mrr_at_1
value: 22.679
- type: mrr_at_10
value: 35.619
- type: mrr_at_100
value: 36.732
- type: mrr_at_1000
value: 36.77
- type: mrr_at_3
value: 31.44
- type: mrr_at_5
value: 33.811
- type: ndcg_at_1
value: 22.679
- type: ndcg_at_10
value: 42.376000000000005
- type: ndcg_at_100
value: 48.001
- type: ndcg_at_1000
value: 49.059999999999995
- type: ndcg_at_3
value: 33.727000000000004
- type: ndcg_at_5
value: 38.013000000000005
- type: precision_at_1
value: 22.679
- type: precision_at_10
value: 6.815
- type: precision_at_100
value: 0.962
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.441
- type: precision_at_5
value: 10.817
- type: recall_at_1
value: 22.006999999999998
- type: recall_at_10
value: 65.158
- type: recall_at_100
value: 90.997
- type: recall_at_1000
value: 98.996
- type: recall_at_3
value: 41.646
- type: recall_at_5
value: 51.941
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.55129958960327
- type: f1
value: 97.43464802675416
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 90.4719562243502
- type: f1
value: 70.76460034443902
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 83.49024882313383
- type: f1
value: 81.44067057564666
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 87.23268325487558
- type: f1
value: 86.36737921996752
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.89999999999999
- type: map_at_10
value: 63.438
- type: map_at_100
value: 63.956
- type: map_at_1000
value: 63.991
- type: map_at_3
value: 61.983
- type: map_at_5
value: 62.778
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 63.483000000000004
- type: mrr_at_100
value: 63.993
- type: mrr_at_1000
value: 64.02799999999999
- type: mrr_at_3
value: 62.017
- type: mrr_at_5
value: 62.812
- type: ndcg_at_1
value: 56.89999999999999
- type: ndcg_at_10
value: 66.61
- type: ndcg_at_100
value: 69.387
- type: ndcg_at_1000
value: 70.327
- type: ndcg_at_3
value: 63.583999999999996
- type: ndcg_at_5
value: 65.0
- type: precision_at_1
value: 56.89999999999999
- type: precision_at_10
value: 7.66
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.733
- type: precision_at_5
value: 14.32
- type: recall_at_1
value: 56.89999999999999
- type: recall_at_10
value: 76.6
- type: recall_at_100
value: 90.2
- type: recall_at_1000
value: 97.6
- type: recall_at_3
value: 68.2
- type: recall_at_5
value: 71.6
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 40.32149153753394
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 39.40319973495386
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.9769104898534
- type: mrr
value: 35.32831430710564
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 81.80666666666667
- type: f1
value: 81.83278699395508
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.3
- type: map_at_10
value: 14.151
- type: map_at_100
value: 18.455
- type: map_at_1000
value: 20.186999999999998
- type: map_at_3
value: 10.023
- type: map_at_5
value: 11.736
- type: mrr_at_1
value: 49.536
- type: mrr_at_10
value: 58.516
- type: mrr_at_100
value: 59.084
- type: mrr_at_1000
value: 59.114
- type: mrr_at_3
value: 56.45
- type: mrr_at_5
value: 57.642
- type: ndcg_at_1
value: 47.522999999999996
- type: ndcg_at_10
value: 38.4
- type: ndcg_at_100
value: 35.839999999999996
- type: ndcg_at_1000
value: 44.998
- type: ndcg_at_3
value: 43.221
- type: ndcg_at_5
value: 40.784
- type: precision_at_1
value: 49.536
- type: precision_at_10
value: 28.977999999999998
- type: precision_at_100
value: 9.378
- type: precision_at_1000
value: 2.2769999999999997
- type: precision_at_3
value: 40.454
- type: precision_at_5
value: 35.418
- type: recall_at_1
value: 6.3
- type: recall_at_10
value: 19.085
- type: recall_at_100
value: 38.18
- type: recall_at_1000
value: 71.219
- type: recall_at_3
value: 11.17
- type: recall_at_5
value: 13.975999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 43.262
- type: map_at_10
value: 60.387
- type: map_at_100
value: 61.102000000000004
- type: map_at_1000
value: 61.111000000000004
- type: map_at_3
value: 56.391999999999996
- type: map_at_5
value: 58.916000000000004
- type: mrr_at_1
value: 48.725
- type: mrr_at_10
value: 62.812999999999995
- type: mrr_at_100
value: 63.297000000000004
- type: mrr_at_1000
value: 63.304
- type: mrr_at_3
value: 59.955999999999996
- type: mrr_at_5
value: 61.785999999999994
- type: ndcg_at_1
value: 48.696
- type: ndcg_at_10
value: 67.743
- type: ndcg_at_100
value: 70.404
- type: ndcg_at_1000
value: 70.60600000000001
- type: ndcg_at_3
value: 60.712999999999994
- type: ndcg_at_5
value: 64.693
- type: precision_at_1
value: 48.696
- type: precision_at_10
value: 10.513
- type: precision_at_100
value: 1.196
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 27.221
- type: precision_at_5
value: 18.701999999999998
- type: recall_at_1
value: 43.262
- type: recall_at_10
value: 87.35300000000001
- type: recall_at_100
value: 98.31299999999999
- type: recall_at_1000
value: 99.797
- type: recall_at_3
value: 69.643
- type: recall_at_5
value: 78.645
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 72.65836491608013
- type: cos_sim_ap
value: 78.75807247519593
- type: cos_sim_f1
value: 74.84662576687117
- type: cos_sim_precision
value: 63.97003745318352
- type: cos_sim_recall
value: 90.17951425554382
- type: dot_accuracy
value: 72.65836491608013
- type: dot_ap
value: 78.75807247519593
- type: dot_f1
value: 74.84662576687117
- type: dot_precision
value: 63.97003745318352
- type: dot_recall
value: 90.17951425554382
- type: euclidean_accuracy
value: 72.65836491608013
- type: euclidean_ap
value: 78.75807247519593
- type: euclidean_f1
value: 74.84662576687117
- type: euclidean_precision
value: 63.97003745318352
- type: euclidean_recall
value: 90.17951425554382
- type: manhattan_accuracy
value: 72.00866269626421
- type: manhattan_ap
value: 78.34663376353235
- type: manhattan_f1
value: 74.13234613604813
- type: manhattan_precision
value: 65.98023064250413
- type: manhattan_recall
value: 84.58289334741288
- type: max_accuracy
value: 72.65836491608013
- type: max_ap
value: 78.75807247519593
- type: max_f1
value: 74.84662576687117
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.46999999999998
- type: ap
value: 93.56401511160975
- type: f1
value: 94.46692790889986
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 15.232590709271829
- type: cos_sim_spearman
value: 17.204830998481093
- type: euclidean_pearson
value: 19.543519063265673
- type: euclidean_spearman
value: 17.204830998481093
- type: manhattan_pearson
value: 19.5722663367917
- type: manhattan_spearman
value: 17.25656568963978
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 34.81965984725406
- type: cos_sim_spearman
value: 37.697257783907645
- type: euclidean_pearson
value: 35.87624912573427
- type: euclidean_spearman
value: 37.69725778300291
- type: manhattan_pearson
value: 35.69021326773646
- type: manhattan_spearman
value: 37.54369033366458
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.952
- type: map_at_10
value: 84.134
- type: map_at_100
value: 84.795
- type: map_at_1000
value: 84.809
- type: map_at_3
value: 81.085
- type: map_at_5
value: 82.976
- type: mrr_at_1
value: 80.56
- type: mrr_at_10
value: 87.105
- type: mrr_at_100
value: 87.20700000000001
- type: mrr_at_1000
value: 87.208
- type: mrr_at_3
value: 86.118
- type: mrr_at_5
value: 86.79299999999999
- type: ndcg_at_1
value: 80.57
- type: ndcg_at_10
value: 88.047
- type: ndcg_at_100
value: 89.266
- type: ndcg_at_1000
value: 89.34299999999999
- type: ndcg_at_3
value: 85.052
- type: ndcg_at_5
value: 86.68299999999999
- type: precision_at_1
value: 80.57
- type: precision_at_10
value: 13.439
- type: precision_at_100
value: 1.536
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.283
- type: precision_at_5
value: 24.558
- type: recall_at_1
value: 69.952
- type: recall_at_10
value: 95.599
- type: recall_at_100
value: 99.67099999999999
- type: recall_at_1000
value: 99.983
- type: recall_at_3
value: 87.095
- type: recall_at_5
value: 91.668
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 70.12802769698337
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 71.19047621740276
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.208
- type: map_at_10
value: 17.036
- type: map_at_100
value: 20.162
- type: map_at_1000
value: 20.552
- type: map_at_3
value: 11.591999999999999
- type: map_at_5
value: 14.349
- type: mrr_at_1
value: 30.599999999999998
- type: mrr_at_10
value: 43.325
- type: mrr_at_100
value: 44.281
- type: mrr_at_1000
value: 44.31
- type: mrr_at_3
value: 39.300000000000004
- type: mrr_at_5
value: 41.730000000000004
- type: ndcg_at_1
value: 30.599999999999998
- type: ndcg_at_10
value: 27.378000000000004
- type: ndcg_at_100
value: 37.768
- type: ndcg_at_1000
value: 43.275000000000006
- type: ndcg_at_3
value: 25.167
- type: ndcg_at_5
value: 22.537
- type: precision_at_1
value: 30.599999999999998
- type: precision_at_10
value: 14.46
- type: precision_at_100
value: 2.937
- type: precision_at_1000
value: 0.424
- type: precision_at_3
value: 23.666999999999998
- type: precision_at_5
value: 20.14
- type: recall_at_1
value: 6.208
- type: recall_at_10
value: 29.29
- type: recall_at_100
value: 59.565
- type: recall_at_1000
value: 85.963
- type: recall_at_3
value: 14.407
- type: recall_at_5
value: 20.412
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.65489797062479
- type: cos_sim_spearman
value: 75.34808277034776
- type: euclidean_pearson
value: 79.28097508609059
- type: euclidean_spearman
value: 75.3480824481771
- type: manhattan_pearson
value: 78.83529262858895
- type: manhattan_spearman
value: 74.96318170787025
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.06920163624117
- type: cos_sim_spearman
value: 77.24549887905519
- type: euclidean_pearson
value: 85.58740280635266
- type: euclidean_spearman
value: 77.24652170306867
- type: manhattan_pearson
value: 85.77917470895854
- type: manhattan_spearman
value: 77.54426264008778
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 80.9762185094084
- type: cos_sim_spearman
value: 80.98090253728394
- type: euclidean_pearson
value: 80.88451512135202
- type: euclidean_spearman
value: 80.98090253728394
- type: manhattan_pearson
value: 80.7606664599805
- type: manhattan_spearman
value: 80.87197716950068
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.91239166620251
- type: cos_sim_spearman
value: 76.36798509005328
- type: euclidean_pearson
value: 80.6393872615655
- type: euclidean_spearman
value: 76.36798836339655
- type: manhattan_pearson
value: 80.50765898709096
- type: manhattan_spearman
value: 76.31958999372227
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 83.68800355225011
- type: cos_sim_spearman
value: 84.47549220803403
- type: euclidean_pearson
value: 83.86859896384159
- type: euclidean_spearman
value: 84.47551564954756
- type: manhattan_pearson
value: 83.74201103044383
- type: manhattan_spearman
value: 84.39903759718152
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 78.24197302553398
- type: cos_sim_spearman
value: 79.44526946553684
- type: euclidean_pearson
value: 79.12747636563053
- type: euclidean_spearman
value: 79.44526946553684
- type: manhattan_pearson
value: 78.94407504115144
- type: manhattan_spearman
value: 79.24858249553934
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.15329071763895
- type: cos_sim_spearman
value: 88.67251952242073
- type: euclidean_pearson
value: 89.16908249259637
- type: euclidean_spearman
value: 88.67251952242073
- type: manhattan_pearson
value: 89.1279735094785
- type: manhattan_spearman
value: 88.81731953658254
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.44962535524695
- type: cos_sim_spearman
value: 71.75861316291065
- type: euclidean_pearson
value: 72.42347748883483
- type: euclidean_spearman
value: 71.75861316291065
- type: manhattan_pearson
value: 72.57545073534365
- type: manhattan_spearman
value: 71.90087671205625
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 77.39283860361535
- type: cos_sim_spearman
value: 77.14577975930179
- type: euclidean_pearson
value: 76.64560889817044
- type: euclidean_spearman
value: 77.14577975930179
- type: manhattan_pearson
value: 76.82848456242104
- type: manhattan_spearman
value: 77.37708521460667
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.14036697885552
- type: cos_sim_spearman
value: 83.10901632378086
- type: euclidean_pearson
value: 83.59991244380554
- type: euclidean_spearman
value: 83.10901632378086
- type: manhattan_pearson
value: 83.56632266895113
- type: manhattan_spearman
value: 83.17610542379353
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 88.98026856845443
- type: mrr
value: 96.80987494712984
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 41.661
- type: map_at_10
value: 55.492
- type: map_at_100
value: 56.237
- type: map_at_1000
value: 56.255
- type: map_at_3
value: 51.05
- type: map_at_5
value: 54.01200000000001
- type: mrr_at_1
value: 44.0
- type: mrr_at_10
value: 56.443
- type: mrr_at_100
value: 57.13700000000001
- type: mrr_at_1000
value: 57.152
- type: mrr_at_3
value: 52.944
- type: mrr_at_5
value: 55.37800000000001
- type: ndcg_at_1
value: 44.0
- type: ndcg_at_10
value: 62.312999999999995
- type: ndcg_at_100
value: 65.63900000000001
- type: ndcg_at_1000
value: 66.019
- type: ndcg_at_3
value: 54.67999999999999
- type: ndcg_at_5
value: 59.284000000000006
- type: precision_at_1
value: 44.0
- type: precision_at_10
value: 9.367
- type: precision_at_100
value: 1.0999999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 22.778000000000002
- type: precision_at_5
value: 16.467000000000002
- type: recall_at_1
value: 41.661
- type: recall_at_10
value: 82.306
- type: recall_at_100
value: 97.167
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 62.461
- type: recall_at_5
value: 73.411
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.90693069306931
- type: cos_sim_ap
value: 97.86562522779887
- type: cos_sim_f1
value: 95.27162977867204
- type: cos_sim_precision
value: 95.8502024291498
- type: cos_sim_recall
value: 94.69999999999999
- type: dot_accuracy
value: 99.90693069306931
- type: dot_ap
value: 97.86562522779887
- type: dot_f1
value: 95.27162977867204
- type: dot_precision
value: 95.8502024291498
- type: dot_recall
value: 94.69999999999999
- type: euclidean_accuracy
value: 99.90693069306931
- type: euclidean_ap
value: 97.86562522779887
- type: euclidean_f1
value: 95.27162977867204
- type: euclidean_precision
value: 95.8502024291498
- type: euclidean_recall
value: 94.69999999999999
- type: manhattan_accuracy
value: 99.90693069306931
- type: manhattan_ap
value: 97.85527044211135
- type: manhattan_f1
value: 95.27638190954774
- type: manhattan_precision
value: 95.75757575757575
- type: manhattan_recall
value: 94.8
- type: max_accuracy
value: 99.90693069306931
- type: max_ap
value: 97.86562522779887
- type: max_f1
value: 95.27638190954774
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 78.89230351770412
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 47.52328347080355
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 57.74702024461137
- type: mrr
value: 58.88074548001018
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.047929797503592
- type: cos_sim_spearman
value: 29.465371781983567
- type: dot_pearson
value: 30.047927690552335
- type: dot_spearman
value: 29.465371781983567
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 56.691999999999986
- type: f1
value: 54.692084702788065
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.181
- type: map_at_10
value: 1.2
- type: map_at_100
value: 6.078
- type: map_at_1000
value: 14.940000000000001
- type: map_at_3
value: 0.45599999999999996
- type: map_at_5
value: 0.692
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 75.819
- type: mrr_at_100
value: 76.168
- type: mrr_at_1000
value: 76.168
- type: mrr_at_3
value: 72.667
- type: mrr_at_5
value: 74.86699999999999
- type: ndcg_at_1
value: 59.0
- type: ndcg_at_10
value: 52.60399999999999
- type: ndcg_at_100
value: 38.049
- type: ndcg_at_1000
value: 38.576
- type: ndcg_at_3
value: 57.235
- type: ndcg_at_5
value: 56.147000000000006
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 55.2
- type: precision_at_100
value: 38.78
- type: precision_at_1000
value: 16.986
- type: precision_at_3
value: 62.666999999999994
- type: precision_at_5
value: 60.8
- type: recall_at_1
value: 0.181
- type: recall_at_10
value: 1.471
- type: recall_at_100
value: 9.748999999999999
- type: recall_at_1000
value: 37.667
- type: recall_at_3
value: 0.49300000000000005
- type: recall_at_5
value: 0.7979999999999999
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 77.04148998956299
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 1.936
- type: map_at_10
value: 8.942
- type: map_at_100
value: 14.475999999999999
- type: map_at_1000
value: 16.156000000000002
- type: map_at_3
value: 4.865
- type: map_at_5
value: 6.367000000000001
- type: mrr_at_1
value: 26.531
- type: mrr_at_10
value: 42.846000000000004
- type: mrr_at_100
value: 43.441
- type: mrr_at_1000
value: 43.441
- type: mrr_at_3
value: 36.735
- type: mrr_at_5
value: 40.510000000000005
- type: ndcg_at_1
value: 24.490000000000002
- type: ndcg_at_10
value: 23.262
- type: ndcg_at_100
value: 34.959
- type: ndcg_at_1000
value: 47.258
- type: ndcg_at_3
value: 25.27
- type: ndcg_at_5
value: 24.246000000000002
- type: precision_at_1
value: 26.531
- type: precision_at_10
value: 20.408
- type: precision_at_100
value: 7.306
- type: precision_at_1000
value: 1.541
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 24.082
- type: recall_at_1
value: 1.936
- type: recall_at_10
value: 15.712000000000002
- type: recall_at_100
value: 45.451
- type: recall_at_1000
value: 83.269
- type: recall_at_3
value: 6.442
- type: recall_at_5
value: 9.151
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 86.564
- type: ap
value: 34.58766846081731
- type: f1
value: 72.32759831978161
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 77.80418788907753
- type: f1
value: 78.1047638421972
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 59.20888659980063
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.45627943017226
- type: cos_sim_ap
value: 72.25550061847534
- type: cos_sim_f1
value: 66.0611487783037
- type: cos_sim_precision
value: 64.11720884032779
- type: cos_sim_recall
value: 68.12664907651715
- type: dot_accuracy
value: 85.45627943017226
- type: dot_ap
value: 72.25574305366213
- type: dot_f1
value: 66.0611487783037
- type: dot_precision
value: 64.11720884032779
- type: dot_recall
value: 68.12664907651715
- type: euclidean_accuracy
value: 85.45627943017226
- type: euclidean_ap
value: 72.2557084446673
- type: euclidean_f1
value: 66.0611487783037
- type: euclidean_precision
value: 64.11720884032779
- type: euclidean_recall
value: 68.12664907651715
- type: manhattan_accuracy
value: 85.32514752339513
- type: manhattan_ap
value: 71.52919143472248
- type: manhattan_f1
value: 65.60288251190322
- type: manhattan_precision
value: 64.02913840743531
- type: manhattan_recall
value: 67.25593667546174
- type: max_accuracy
value: 85.45627943017226
- type: max_ap
value: 72.25574305366213
- type: max_f1
value: 66.0611487783037
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.34167733923235
- type: cos_sim_ap
value: 84.58587730660244
- type: cos_sim_f1
value: 77.14170010676287
- type: cos_sim_precision
value: 73.91181657848324
- type: cos_sim_recall
value: 80.66676932553126
- type: dot_accuracy
value: 88.34167733923235
- type: dot_ap
value: 84.58585083616217
- type: dot_f1
value: 77.14170010676287
- type: dot_precision
value: 73.91181657848324
- type: dot_recall
value: 80.66676932553126
- type: euclidean_accuracy
value: 88.34167733923235
- type: euclidean_ap
value: 84.5858781355044
- type: euclidean_f1
value: 77.14170010676287
- type: euclidean_precision
value: 73.91181657848324
- type: euclidean_recall
value: 80.66676932553126
- type: manhattan_accuracy
value: 88.28152287809989
- type: manhattan_ap
value: 84.53184837110165
- type: manhattan_f1
value: 77.13582823915313
- type: manhattan_precision
value: 74.76156069364161
- type: manhattan_recall
value: 79.66584539574993
- type: max_accuracy
value: 88.34167733923235
- type: max_ap
value: 84.5858781355044
- type: max_f1
value: 77.14170010676287
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 66.10000000000001
- type: map_at_10
value: 75.238
- type: map_at_100
value: 75.559
- type: map_at_1000
value: 75.565
- type: map_at_3
value: 73.68299999999999
- type: map_at_5
value: 74.63300000000001
- type: mrr_at_1
value: 66.10000000000001
- type: mrr_at_10
value: 75.238
- type: mrr_at_100
value: 75.559
- type: mrr_at_1000
value: 75.565
- type: mrr_at_3
value: 73.68299999999999
- type: mrr_at_5
value: 74.63300000000001
- type: ndcg_at_1
value: 66.10000000000001
- type: ndcg_at_10
value: 79.25999999999999
- type: ndcg_at_100
value: 80.719
- type: ndcg_at_1000
value: 80.862
- type: ndcg_at_3
value: 76.08200000000001
- type: ndcg_at_5
value: 77.782
- type: precision_at_1
value: 66.10000000000001
- type: precision_at_10
value: 9.17
- type: precision_at_100
value: 0.983
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 27.667
- type: precision_at_5
value: 17.419999999999998
- type: recall_at_1
value: 66.10000000000001
- type: recall_at_10
value: 91.7
- type: recall_at_100
value: 98.3
- type: recall_at_1000
value: 99.4
- type: recall_at_3
value: 83.0
- type: recall_at_5
value: 87.1
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 91.13
- type: ap
value: 79.55231335947015
- type: f1
value: 89.63091922203914
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.099999999999994
- type: f1
value: 53.115528412999666
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 79.88231338264963
- type: f1
value: 77.13536609019927
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.50571620712844
- type: f1
value: 83.4128768262944
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 66.54177017978034
- type: mrr
value: 76.76094292377299
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.608
- type: map_at_10
value: 81.266
- type: map_at_100
value: 84.714
- type: map_at_1000
value: 84.758
- type: map_at_3
value: 56.967
- type: map_at_5
value: 70.14
- type: mrr_at_1
value: 91.881
- type: mrr_at_10
value: 94.11699999999999
- type: mrr_at_100
value: 94.178
- type: mrr_at_1000
value: 94.181
- type: mrr_at_3
value: 93.772
- type: mrr_at_5
value: 93.997
- type: ndcg_at_1
value: 91.881
- type: ndcg_at_10
value: 87.954
- type: ndcg_at_100
value: 90.904
- type: ndcg_at_1000
value: 91.326
- type: ndcg_at_3
value: 88.838
- type: ndcg_at_5
value: 87.764
- type: precision_at_1
value: 91.881
- type: precision_at_10
value: 43.628
- type: precision_at_100
value: 5.082
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.62400000000001
- type: precision_at_5
value: 65.269
- type: recall_at_1
value: 28.608
- type: recall_at_10
value: 87.06
- type: recall_at_100
value: 96.815
- type: recall_at_1000
value: 98.969
- type: recall_at_3
value: 58.506
- type: recall_at_5
value: 73.21600000000001
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 78.68783858143624
---
# CCwz/gme-Qwen2-VL-7B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gme-Qwen2-VL-7B-Instruct`](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CCwz/gme-Qwen2-VL-7B-Instruct-Q5_K_S-GGUF --hf-file gme-qwen2-vl-7b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CCwz/gme-Qwen2-VL-7B-Instruct-Q5_K_S-GGUF --hf-file gme-qwen2-vl-7b-instruct-q5_k_s.gguf -c 2048
```
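Once the server is running, you can send it requests. A minimal sketch, assuming the default bind address `127.0.0.1:8080` and llama.cpp's OpenAI-compatible chat endpoint:
```bash
# Query the llama-server started above (default host/port assumed).
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "The meaning to life and the universe is"}],
        "max_tokens": 128
      }'
```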
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CCwz/gme-Qwen2-VL-7B-Instruct-Q5_K_S-GGUF --hf-file gme-qwen2-vl-7b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CCwz/gme-Qwen2-VL-7B-Instruct-Q5_K_S-GGUF --hf-file gme-qwen2-vl-7b-instruct-q5_k_s.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
jukofyork/Dark-Miqu-70B | jukofyork | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2403.19522",
"base_model:152334H/miqu-1-70b-sf",
"base_model:merge:152334H/miqu-1-70b-sf",
"base_model:Sao10K/Euryale-1.3-L2-70B",
"base_model:merge:Sao10K/Euryale-1.3-L2-70B",
"base_model:Sao10K/WinterGoddess-1.4x-70B-L2",
"base_model:merge:Sao10K/WinterGoddess-1.4x-70B-L2",
"base_model:sophosympatheia/Midnight-Rose-70B-v2.0.3",
"base_model:merge:sophosympatheia/Midnight-Rose-70B-v2.0.3",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-04T00:53:39 | 2024-05-14T18:08:59 | 72 | 30 | ---
base_model:
- 152334H/miqu-1-70b-sf
- sophosympatheia/Midnight-Rose-70B-v2.0.3
- Sao10K/Euryale-1.3-L2-70B
- Sao10K/WinterGoddess-1.4x-70B-L2
library_name: transformers
license: other
tags:
- mergekit
- merge
---

A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere!
This model **excels** at writing Dark/Grimdark fantasy (see examples below).
***NOTE***: *This model has now been merged with [Dawn-Miqu-70B](https://huggingface.co/jukofyork/Dawn-Miqu-70B) to create [Deep-Miqu-103B](https://huggingface.co/jukofyork/Deep-Miqu-103B) and [Deep-Miqu-120B](https://huggingface.co/jukofyork/Deep-Miqu-120B).*
***NOTE***: *For a full range of GGUF quants kindly provided by @mradermacher, see: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).*
# Model background
Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0).
This model has a lower perplexity than [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`4.08 +/- 0.02` vs `4.02 +/- 0.02`) and also tends to generate longer responses.
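Perplexity figures like these are typically computed with llama.cpp's `llama-perplexity` tool over a held-out text file; the sketch below is illustrative only, since the exact quant file and evaluation text behind the numbers above are not stated here:
```bash
# Illustrative only: the GGUF filename and evaluation corpus are placeholders,
# not the exact setup used for the figures quoted above.
./llama-perplexity -m dark-miqu-70b.Q8_0.gguf -f wiki.test.raw -c 4096
```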
The model was created in two stages:
- First, three "Midnight-Miqu-esque" models were produced using spherical interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp-merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results.
- In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model.
# Prompting format
Vicuna format is preferred:
```
USER: {prompt} ASSISTANT:
```
Mistral and Alpaca formats are also supported:
```
[INST] {prompt} [/INST]
```
```
### Instruction:
{prompt}
### Response:
```
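For example, with one of the GGUF quants linked above, a Vicuna-format prompt can be passed directly on the command line. A minimal sketch (the GGUF filename is a placeholder; see the linked static/imatrix quants for actual files):
```bash
# Placeholder filename; 32k context and temperature 0 match the example stories below.
llama-cli -m Dark-Miqu-70B.Q4_K_M.gguf -c 32768 --temp 0 \
  -p "USER: Write me the opening chapter of a grimdark fantasy story. ASSISTANT:"
```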
# Licence and usage restrictions
[miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only.
# Mergekit configuration
The following YAML configuration was used to produce this model:
```yaml
name: midnight-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: sophosympatheia/Midnight-Rose-70B-v2.0.3
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: euryale-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: Sao10K/Euryale-1.3-L2-70B
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: winter-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: Sao10K/WinterGoddess-1.4x-70B-L2
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: dark-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: midnight-miqu-70b
- model: euryale-miqu-70b
- model: winter-miqu-70b
base_model: 152334H/miqu-1-70b-sf
merge_method: model_stock
dtype: float16
```
## Key configuration details:
- '`merge_method: slerp`' uses spherical linear interpolation for merging models.
- '`parameters: t`' controls the interpolation ratios between models.
- '`embed_slerp: true`' applies slerp to the embedding layers.
- '`merge_method: model_stock`' uses the '[Model Stock](https://arxiv.org/abs/2403.19522)' method.
See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings.
**NOTE**: Run with `mergekit-mega` rather than `mergekit` as there are 4 documents in this one file.
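A minimal invocation sketch (the config filename and output directory are placeholders; check `mergekit-mega --help` for the options available in your mergekit version):
```bash
# Sketch: merge the multi-document config above in one pass.
# "dark-miqu.yml" and "./dark-miqu-70b" are placeholder paths.
mergekit-mega dark-miqu.yml ./dark-miqu-70b --cuda
```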
# Example stories
The following mix of "dark" stories was generated using the Vicuna prompt format with no system message and temperature=0:
## Dark fantasy stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes
### ASSISTANT
Chapter One: The Crimson Dawn
The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks.
His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death.
Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces.
He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse. It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings.
Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close.
He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out.
He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live."
She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat."
She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun."
He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air.
"Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick."
She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend.
"I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!"
Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk."
He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape.
"Tell me," he said, his voice a whisper. "Tell me, and I'll end this."
She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve.
Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. He would have to find the thing she had stolen the hard way.
He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head.
And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous.
Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him.
They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured.
In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain.
It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it.
He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive.
But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight.
He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost.
And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it.
It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows.
It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters.
It was a story that had only just begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst.
### ASSISTANT
The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat.
He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run.
Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life.
His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place.
As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start.
The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts.
He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced.
"Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away."
The mirror stared back at him, its surface as still and unyielding as the desert sands. He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark. It whispered to him, promising relief, promising power, promising everything he had ever wanted.
He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets.
"What have I done?" he asked the mirror, but it gave no answer.
He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous.
He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his.
The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach.
But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time.
The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character.
### ASSISTANT
The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying.
He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair.
*Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?*
But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do.
He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart.
"Shh," he whispered, his voice a soothing balm. "It's over now."
The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here."
The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me."
As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name.
*Elara.*
The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end.
He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release.
As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield.
"Anyone here need my services?" he asked, his voice devoid of emotion.
One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it."
The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged.
*Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.*
He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance.
"No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly."
The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different.
"Why?" he asked, genuinely curious.
The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up."
The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on.
He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick."
The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening.
The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?*
But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. And the moor was his to tend, until the next battle came and the cycle began anew.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation.
### ASSISTANT
The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole.
"What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?"
The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a hulking brute named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window.
"Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot.
The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered.
As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely?
He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies.
"You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice.
Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North."
The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom.
"What do you think it means?" he pressed, unable to keep the desperation at bay.
Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here."
The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... it was a fate worse than any he could imagine.
As the hours dragged on, the thirst became unbearable. The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone.
"We have to get out of here," he croaked, his voice barely above a whisper.
Grimgor just grunted in agreement, his gaze never leaving the window.
The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment.
But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it.
"Grimgor," he hissed, tugging on the larger man's arm. "Look."
Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it.
The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe?
The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge.
"Who are you?" he demanded, trying to sound braver than he felt.
The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice.
"I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom."
With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell.
The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains.
"We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here."
The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown.
Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other.
"What now?" the Fox asked, his voice barely above a whisper.
Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine.
"Now," he said, "we find out what's really going on."
And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned.
</details>
## Dark sci-fi stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan.
### ASSISTANT
Chapter 1: Adrift
The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism.
In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford.
Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs.
She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel.
As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay.
Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills.
"Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now."
She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement.
"Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines."
Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper.
"They call him the Mechanic," one of the older girls replied. "He used to work in engineering before...before everything fell apart."
The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others.
"It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work."
Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!"
The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over?
As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them.
And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart.
As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive?
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then him the others notice the mushroom clouds and they have to face inevitability of the situation they are in.
### ASSISTANT
Chapter 1: Ashes of Exodus
The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began.
As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago.
Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself.
But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. His eyes snapped open as his heart began to race, adrenaline coursing through his veins. He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth.
He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely.
Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online.
And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale.
His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps.
"What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear.
Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly.
Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened."
The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return.
As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived.
But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void.
He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next."
Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here."
Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft."
Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet.
As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened.
And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why?
The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought.
He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look."
The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be.
As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand.
But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one.
He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here...
### ASSISTANT
Chapter 1: The Biodome
The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet.
Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known.
"Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal."
He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland.
But today was not that day. Today, it was just another tomb waiting to be sealed.
As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked.
"What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?"
Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion.
"I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good."
Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead.
The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine.
"Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!"
Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake.
With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom.
"Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!"
But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness.
And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more.
When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones.
Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten.
Kaelen closed his eyes, willing himself to accept the inevitable. But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard.
And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors.
But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile.
"Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity.
And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone.
### ASSISTANT
Chapter 1 - Whispers from the Void
The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos.
His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge.
A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean.
The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake.
Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger.
He didn't hesitate. His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now.
As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again.
But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act.
The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe.
As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch.
But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home.
The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything.
What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence.
Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind.
He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here.
Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale.
</details>
## Miscellaneous stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out, or only 6 will survive; there is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm, everything breaks loose today.
### ASSISTANT
In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash.
His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used.
The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations.
Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long?
He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes.
He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff.
Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news."
The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply.
"The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years."
Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?"
"We have enough for six, maybe seven if we ration carefully. But not twelve."
The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close.
"We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes."
Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival.
As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure.
In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now.
He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary.
Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking.
Samuel shook his head. "I don't know."
"We can't just let six of us die," she said, her voice rising in panic. "There has to be another way."
He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?"
Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up."
Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic."
She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic."
They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic."
Samuel raised an eyebrow. "What?"
"We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface."
He stared at her in disbelief. "You can't be serious."
"It's not ideal," she admitted. "But it's better than just letting them die down here."
Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have?
As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor.
What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls.
Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel.
And all he could do was watch as the world above claimed them, one by one.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1848 Gold Rush, before moving to London in the mid-1970s. There should be a mysterious 'occult' item that connects these two time periods.
### ASSISTANT
Chapter One: The Golden Veil
The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune.
In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely.
His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything.
Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within.
Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner.
One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world.
"Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?"
The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion.
"No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me."
He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within.
"I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it."
The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box."
As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for.
The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness.
But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher.
Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones.
"I must see this box for myself," he said, his voice barely above a whisper. "Take me to it."
The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels.
There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity.
As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls.
A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe."
Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe.
"But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds."
And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean?
As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him!
He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety.
And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself.
But little did he know that the true mystery was only just beginning...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them.
### ASSISTANT
Chapter 1
The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live.
The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light.
I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line."
I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them.
A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth.
I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness.
When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming."
And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step.
I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there...
But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction.
"Halt!" one of them yelled, his voice harsh and commanding. "Hands up!"
I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him.
"Run," he whispered, his voice barely audible. "Leave me."
I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything.
But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat.
"Nein," I whispered, my voice choked with grief and rage. "Nein!"
I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us.
With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness.
As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me.
But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform.
### ASSISTANT
In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel.
His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work.
A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips.
The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface.
Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival.
"You're late," Chigurh said finally, his voice low and gravelly.
The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic."
Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement.
Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket.
"You know what needs to be done," the man in the suit said, his voice barely above a whisper.
Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he hadn't touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress.
Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary.
He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose.
Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die.
He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction.
As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move.
Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw.
</details>
Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)! | [
"TRANSLATION"
] | [
"BEAR"
] |
AventIQ-AI/bert-medical-entity-extraction | AventIQ-AI | null | [
"safetensors",
"bert",
"region:us"
] | 2025-02-20T13:21:27 | 2025-02-20T13:39:24 | 72 | 3 | ---
{}
---
# Medical Entity Extraction with BERT
## 📌 Overview
This repository hosts the quantized version of the `bert-base-cased` model for Medical Entity Extraction using the `tner/bc5cdr` dataset. The model is specifically designed to recognize entities related to **Disease, Symptoms, and Drug**. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments.
## 🏗 Model Details
- **Model Architecture**: BERT Base Cased
- **Task**: Medical Entity Extraction
- **Dataset**: Hugging Face's `tner/bc5cdr`
- **Quantization**: Float16
- **Fine-tuning Framework**: Hugging Face Transformers
---
## 🚀 Usage
### Installation
```bash
pip install transformers torch
```
### Loading the Model
```python
from transformers import BertTokenizerFast, BertForTokenClassification
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "AventIQ-AI/bert-medical-entity-extraction"
model = BertForTokenClassification.from_pretrained(model_name).to(device)
tokenizer = BertTokenizerFast.from_pretrained(model_name)
```
### Named Entity Recognition Inference
```python
from transformers import pipeline
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer)  # reuse the already-loaded model
test_sentence = "An overdose of Ibuprofen can lead to severe gastric issues."
ner_results = ner_pipeline(test_sentence)
label_map = {
"LABEL_0": "O", # Outside (not an entity)
"LABEL_1": "Drug",
"LABEL_2": "Disease",
"LABEL_3": "Symptom",
"LABEL_4": "Treatment"
}
def merge_tokens(ner_results):
merged_entities = []
current_word = ""
current_label = ""
current_score = 0
count = 0
for entity in ner_results:
word = entity["word"]
label = entity["entity"] # Model's output (e.g., LABEL_1, LABEL_2)
score = entity["score"]
# Merge subwords
if word.startswith("##"):
current_word += word[2:] # Remove '##' and append
current_score += score
count += 1
else:
if current_word: # Store the previous merged word
mapped_label = label_map.get(current_label, "Unknown")
merged_entities.append((current_word, mapped_label, current_score / count))
current_word = word
current_label = label
current_score = score
count = 1
# Add the last word
if current_word:
mapped_label = label_map.get(current_label, "Unknown")
merged_entities.append((current_word, mapped_label, current_score / count))
return merged_entities
print("\n🩺 Medical NER Predictions:")
for word, label, score in merge_tokens(ner_results):
if label != "O": # Skip non-entities
print(f"🔹 Entity: {word} | Category: {label} | Score: {score:.4f}")
```
### **🔹 Labeling Scheme (BIO Format)**
- **B-XYZ (Beginning)**: Indicates the beginning of an entity of type XYZ (e.g., B-PER for the beginning of a person’s name).
- **I-XYZ (Inside)**: Represents subsequent tokens inside an entity (e.g., I-PER for the second part of a person’s name).
- **O (Outside)**: Denotes tokens that are not part of any named entity.
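For illustration, a sentence like the earlier example could be tagged token by token under this scheme as shown below (hypothetical tags, since the pipeline above emits flat labels such as `Drug`/`Disease`):
```
An/O overdose/O of/O Ibuprofen/B-Drug can/O lead/O to/O severe/O gastric/B-Disease issues/I-Disease
```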
---
## 📊 Evaluation Results for Quantized Model
### **🔹 Overall Performance**
- **Accuracy**: **93.27%** ✅
- **Precision**: **92.31%**
- **Recall**: **93.27%**
- **F1 Score**: **92.31%**
---
### **🔹 Performance by Entity Type**
| Entity Type | Precision | Recall | F1 Score | Number of Entities |
|------------|-----------|--------|----------|--------------------|
| **Disease** | **91.46%** | **92.07%** | **91.76%** | 3,000 |
| **Drug** | **71.25%** | **72.83%** | **72.03%** | 1,266 |
| **Symptom** | **89.83%** | **93.02%** | **91.40%** | 3,524 |
| **Treatment** | **88.83%** | **92.02%** | **90.40%** | 3,124 |
---
#### ⏳ **Inference Speed Metrics**
- **Total Evaluation Time**: 15.89 sec
- **Samples Processed per Second**: 217.26
- **Steps per Second**: 27.18
- **Epochs Completed**: 3
---
## Fine-Tuning Details
### Dataset
Hugging Face's `tner/bc5cdr` dataset was used, containing texts and their NER tags. A minimal fine-tuning sketch using the settings below follows the training details.
## 📊 Training Details
- **Number of epochs**: 3
- **Batch size**: 8
- **Evaluation strategy**: epoch
- **Learning Rate**: 2e-5
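Based on these settings, a minimal fine-tuning sketch with the Hugging Face `Trainer` might look as follows. This is illustrative only: the exact training script is not published, the tokenized splits (`tokenized_train`, `tokenized_eval`) and the output directory are hypothetical names, and dataset tokenization with label alignment is omitted.
```python
from transformers import TrainingArguments, Trainer, DataCollatorForTokenClassification
# Hyperparameters mirror the training details listed above
training_args = TrainingArguments(
output_dir="bert-medical-ner", # hypothetical output directory
num_train_epochs=3,
per_device_train_batch_size=8,
evaluation_strategy="epoch",
learning_rate=2e-5,
)
trainer = Trainer(
model=model, # BertForTokenClassification loaded earlier
args=training_args,
train_dataset=tokenized_train, # hypothetical tokenized train split
eval_dataset=tokenized_eval, # hypothetical tokenized eval split
tokenizer=tokenizer,
data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```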
### ⚡ Quantization
Post-training quantization was applied using PyTorch's built-in quantization framework to reduce the model size and improve inference efficiency.
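The exact quantization commands are not published. A minimal sketch consistent with the Float16 setting listed under Model Details would simply cast the fine-tuned model to half precision before saving (the local paths below are hypothetical):
```python
from transformers import BertForTokenClassification
# Load the fine-tuned full-precision model (hypothetical local path)
model = BertForTokenClassification.from_pretrained("bert-medical-ner")
# Cast weights to float16 to roughly halve the on-disk and in-memory footprint
model = model.half()
# Save the reduced-precision weights (written as model.safetensors)
model.save_pretrained("bert-medical-ner-fp16")
```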
---
## 📂 Repository Structure
```
.
├── model/ # Contains the quantized model files
├── tokenizer_config/ # Tokenizer configuration and vocabulary files
├── model.safetensors # Quantized model weights
├── README.md # Model documentation
```
---
## ⚠️ Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
---
## 🤝 Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"BC5CDR"
] |
TheBloke/StellarX-4B-V0.2-GPTQ | TheBloke | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2204.06745",
"base_model:Dampish/StellarX-4B-V0.2",
"base_model:quantized:Dampish/StellarX-4B-V0.2",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | 2023-09-20T20:46:17 | 2023-09-27T12:53:42 | 71 | 1 | ---
base_model: Dampish/StellarX-4B-V0.2
license: cc-by-nc-sa-4.0
model_name: StellarX 4B V0.2
inference: false
model_creator: Dampish
model_type: gptneox
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# StellarX 4B V0.2 - GPTQ
- Model creator: [Dampish](https://huggingface.co/Dampish)
- Original model: [StellarX 4B V0.2](https://huggingface.co/Dampish/StellarX-4B-V0.2)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Dampish's StellarX 4B V0.2](https://huggingface.co/Dampish/StellarX-4B-V0.2).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StellarX-4B-V0.2-GPTQ)
* [Dampish's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Dampish/StellarX-4B-V0.2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/StellarX-4B-V0.2-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.83 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/StellarX-4B-V0.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.98 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/StellarX-4B-V0.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 3.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/StellarX-4B-V0.2-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 3.27 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
<!-- README_GPTQ.md-provided-files end -->
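For context, files like those in the table above are typically produced with AutoGPTQ roughly as sketched below, here using the `main` branch parameters (4-bit, group size 128, Act Order, damp 0.1). This is illustrative only and not the exact procedure used for this repo; in practice the calibration examples are drawn from the wikitext dataset listed above rather than a single placeholder sentence.
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_id = "Dampish/StellarX-4B-V0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
quantize_config = BaseQuantizeConfig(
bits=4, # quantised bit width
group_size=128, # GS column in the table above
desc_act=True, # Act Order
damp_percent=0.1, # Damp %
)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
# Tiny placeholder calibration set; real quantisation uses many wikitext samples
examples = [tokenizer("The quick brown fox jumps over the lazy dog.")]
model.quantize(examples)
model.save_quantized("StellarX-4B-V0.2-GPTQ")
```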
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/StellarX-4B-V0.2-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/StellarX-4B-V0.2-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/StellarX-4B-V0.2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/StellarX-4B-V0.2-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `StellarX-4B-V0.2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/StellarX-4B-V0.2-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Dampish's StellarX 4B V0.2
# StellarX: A Base Model by Dampish and Arkane
StellarX is a powerful autoregressive language model designed for various natural language processing tasks. It is trained on a massive dataset containing 810 billion tokens (approximately 300B tokens trained so far), drawn from "RedPajama", and is built upon the popular GPT-NeoX architecture. With approximately 4 billion parameters, StellarX offers exceptional performance and versatility.
## Model Details
- **Training Data:** StellarX is trained on a large-scale dataset, "RedPajama," maintained by the group "togethercomputer." This dataset has been instrumental in shaping StellarX's language capabilities and general-purpose understanding.
- **Model Architecture:** StellarX is built upon the GPT-NeoX architecture, which draws inspiration from GPT-3 and shares similarities with GPT-J-6B. The architecture incorporates key advancements in transformer-based language models, ensuring high-quality predictions and contextual understanding.
- **Model Size:** StellarX consists of approximately 4 billion parameters, making it a highly capable language model for a wide range of natural language processing tasks.
- **Carbon-Friendly and Resource-Efficient:** StellarX has been optimized for carbon efficiency and can be comfortably run on local devices. When loaded in 8 bits, the model requires only about 5GB of storage, making it more accessible and convenient for various applications.
- **V0.2:** The version indicator. The current release, version 0.2, has been trained on only 300B of the targeted 810B tokens; the next version aims for considerably higher accuracy.
## How to Use
To load StellarX using the Hugging Face Transformers library, you can use the following code snippet:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Dampish/StellarX-4B-V0")
model = AutoModelForCausalLM.from_pretrained("Dampish/StellarX-4B-V0")
```
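The loaded model can then be used for text generation; the snippet below is a minimal sketch (the prompt and sampling settings are illustrative). For the roughly 5GB 8-bit footprint mentioned above, the model can instead be loaded with `load_in_8bit=True` (assumes `bitsandbytes` is installed).

```python
import torch

# continues from the loading snippet above
inputs = tokenizer("The night sky was", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# optional 8-bit loading (assumption: bitsandbytes is installed)
# model = AutoModelForCausalLM.from_pretrained("Dampish/StellarX-4B-V0", load_in_8bit=True, device_map="auto")
```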
This model is particularly beneficial for those seeking a language model that is powerful, compact, and can be run on local devices without a hefty carbon footprint. Remember, when considering Darius1, it's not just about the impressive numbers—it's about what these numbers represent: powerful performance, optimized resources, and responsible computing.
**For any queries related to this model, feel free to reach out to "Dampish#3607" on discord.**
## Licensing and Usage
StellarX, developed by the Dampish, is made available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC-BY-NC-SA-4.0). This license ensures that you can utilize the model for research purposes and personal use without any restrictions, while also promoting the sharing and adaptation of the model under certain conditions.
# Research and Personal Use
StellarX can be freely used for research purposes, allowing you to explore its capabilities, conduct experiments, and develop novel applications. Whether you're a student, researcher, or hobbyist, the model's availability under the CC-BY-NC-SA-4.0 license empowers you to unlock the potential of StellarX for your own non-commercial projects.
# Commercial Usage
For commercial usage of StellarX, an additional licensing arrangement must be established. If you intend to leverage the model for any commercial purpose, such as integrating it into a product or service, you are required to reach an agreement with the Dampish. This agreement will specify the terms, including the agreed-upon percentage or licensing fee to be paid for the commercial use of StellarX.
To initiate discussions regarding commercial usage, please contact Dampish through the designated channels mentioned earlier. They will be able to provide you with further information and guide you through the process of establishing a licensing arrangement tailored to your specific requirements.
# Importance of Licensing Compliance
It is crucial to respect the licensing terms to ensure the fair usage and continued development of StellarX. The revenue generated from commercial licensing supports the efforts of the Dampish in advancing the model and making it more widely accessible.
# Note on CC-BY-NC-SA-4.0
Under the CC-BY-NC-SA-4.0 license, you are allowed to modify and adapt StellarX, incorporating it into your own projects. However, any derivative work or modifications should also be shared under the same license terms, ensuring the continued openness and collaborative spirit of the project.
Please review the complete text of the CC-BY-NC-SA-4.0 license to familiarize yourself with its provisions and requirements. It is essential to comply with the terms of the license to respect the intellectual property rights and contributions of the Dampish and the wider community involved in developing StellarX.
## GPT-NeoX and Model Selection
GPT-NeoX-20B, a sibling model to StellarX, is a 20 billion parameter autoregressive language model trained on the Pile using the GPT-NeoX library. StellarX draws inspiration from the architectural advancements and performance of GPT-NeoX models. While the specifics of StellarX's architecture and parameters may differ, it benefits from the proven capabilities of GPT-NeoX and its suitability for diverse natural language processing tasks.
## Training and Evaluation
StellarX's training dataset comprises a comprehensive collection of English-language texts covering various domains, thanks to the "RedPajama" dataset created by the group "togethercomputer."
Evaluation of GPT-NeoX 20B performance has demonstrated its competence across different natural language tasks. Since this description provides only a brief summary, we refer readers to the GPT-NeoX paper (https://arxiv.org/abs/2204.06745), which compares GPT-NeoX 20B to other models on tasks such as OpenAI's LAMBADA, SciQ, PIQA, TriviaQA, and ARC Challenge.
## Limitations and Considerations
StellarX, like its sibling models, is intended primarily for research purposes. It provides a powerful foundation for extracting useful features and insights from the English language. While StellarX can be further fine-tuned and adapted for deployment, users should conduct their own risk and bias assessments before using it as a basis for downstream tasks.
It's important to note that StellarX is not intended for direct deployment without supervision. It is not designed for human-facing interactions, unlike models like ChatGPT, which have been fine-tuned using reinforcement learning from human feedback to better understand human instructions and dialogue.
Furthermore, StellarX is not limited to English: if trained properly, it can sometimes be used for translation as well as text generation in other languages.
Lastly, users should be aware of potential biases and limitations inherent in large language models and the data they are trained on.
Special thanks to the group that created the training dataset: the RedPajama dataset used to train StellarX. Thank you, togethercomputer.
## Community and Support
To inquire about StellarX and receive support, you can join Dampish's Discord server and engage in discussions in the #questions channel. It is recommended to explore the existing documentation and resources available for GPT-NeoX-20B to familiarize yourself with the model before seeking assistance. For more information about GPT-NeoX, you can reach out to EleutherAI.
## Summary
StellarX, a base language model developed by the Dampish, offers impressive language capabilities and flexibility. Trained on an extensive dataset and built upon the GPT-NeoX architecture, StellarX excels in various natural language processing tasks. Its carbon-friendly and resource-efficient design makes it accessible for local device deployment. Researchers and enthusiasts can freely explore StellarX for research purposes and personal use, while commercial users should adhere to the licensing terms.
**Again, I am really grateful for the data made by togethercomputer and their willingness to open-source it; they inspired this project and sparked the idea behind the Stellar models. I am truly grateful to them.
-Dampish**
Discord: https://discord.gg/vasyNnUa
OR Reach out to me personally on Discord via the username: Dampish#3607
Thank you for your time.
| [
"TRANSLATION"
] | [
"SCIQ"
] |
RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2309.06085",
"endpoints_compatible",
"region:us"
] | 2024-05-10T04:39:27 | 2024-05-10T06:37:27 | 71 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
sea-lion-7b-instruct - GGUF
- Model creator: https://huggingface.co/aisingapore/
- Original model: https://huggingface.co/aisingapore/sea-lion-7b-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [sea-lion-7b-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q2_K.gguf) | Q2_K | 3.05GB |
| [sea-lion-7b-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.IQ3_XS.gguf) | IQ3_XS | 3.34GB |
| [sea-lion-7b-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.IQ3_S.gguf) | IQ3_S | 3.41GB |
| [sea-lion-7b-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [sea-lion-7b-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.IQ3_M.gguf) | IQ3_M | 3.71GB |
| [sea-lion-7b-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q3_K.gguf) | Q3_K | 3.96GB |
| [sea-lion-7b-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q3_K_M.gguf) | Q3_K_M | 3.96GB |
| [sea-lion-7b-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q3_K_L.gguf) | Q3_K_L | 4.25GB |
| [sea-lion-7b-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.IQ4_XS.gguf) | IQ4_XS | 4.06GB |
| [sea-lion-7b-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q4_0.gguf) | Q4_0 | 4.21GB |
| [sea-lion-7b-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.IQ4_NL.gguf) | IQ4_NL | 4.24GB |
| [sea-lion-7b-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q4_K_S.gguf) | Q4_K_S | 4.24GB |
| [sea-lion-7b-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q4_K.gguf) | Q4_K | 4.65GB |
| [sea-lion-7b-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q4_K_M.gguf) | Q4_K_M | 4.65GB |
| [sea-lion-7b-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q4_1.gguf) | Q4_1 | 4.58GB |
| [sea-lion-7b-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q5_0.gguf) | Q5_0 | 4.96GB |
| [sea-lion-7b-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q5_K_S.gguf) | Q5_K_S | 4.96GB |
| [sea-lion-7b-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q5_K.gguf) | Q5_K | 5.29GB |
| [sea-lion-7b-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q5_K_M.gguf) | Q5_K_M | 5.29GB |
| [sea-lion-7b-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q5_1.gguf) | Q5_1 | 5.33GB |
| [sea-lion-7b-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_sea-lion-7b-instruct-gguf/blob/main/sea-lion-7b-instruct.Q6_K.gguf) | Q6_K | 5.75GB |
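These GGUF files can be run with any llama.cpp-compatible runtime; the snippet below is a minimal llama-cpp-python sketch, assuming the Q4_K_M file has been downloaded locally and using the model's `### USER:`/`### RESPONSE:` prompt format described further down.

```python
from llama_cpp import Llama

# path to the locally downloaded quant file from the table above
llm = Llama(model_path="sea-lion-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

prompt = "### USER:\nApa ibu kota Indonesia?\n\n### RESPONSE:\n"
out = llm(prompt, max_tokens=64, stop=["### USER:"])
print(out["choices"][0]["text"])
```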
Original model description:
---
license: mit
language:
- en
- zh
- id
- ms
- tl
- my
- vi
- th
- lo
- km
- ta
---
# SEA-LION-7B-Instruct
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
The sizes of the models range from 3 billion to 7 billion parameters.
SEA-LION-7B-Instruct is a multilingual model which has been fine-tuned with **thousands of English and Indonesian instruction-completion pairs** alongside a smaller pool of instruction-completion pairs from other ASEAN languages.
These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive and high quality datasets.
SEA-LION stands for _Southeast Asian Languages In One Network_.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages:** English, Chinese, Indonesian, Malay, Thai, Vietnamese, Filipino, Tamil, Burmese, Khmer, Lao
- **License:** MIT License
## Model Details
### Base model
We performed instruction tuning in English and Indonesian on our [pre-trained SEA-LION-7B](https://huggingface.co/aisingapore/sea-lion-7b), a decoder model using the MPT architecture, to create SEA-LION-7B-Instruct.
### Benchmark Performance
We evaluated SEA-LION-7B-Instruct on the BHASA benchmark ([arXiv](https://arxiv.org/abs/2309.06085v2) and [GitHub](https://github.com/aisingapore/bhasa)) across a variety of tasks.
BHASA stands out amongst other evaluations for SEA languages for its holistic approach to evaluation, including not just traditional Natural Language Processing (NLP) benchmarking tasks (such as sentiment analysis and question answering), but also linguistic and cultural diagnostic tests which are meticulously handcrafted.
The evaluation was done zero-shot with Indonesian prompts and only a sample of 100-1000 instances for each dataset was used as per the setting described in the BHASA paper. The scores shown in the table below have been adjusted to only consider answers provided in the appropriate language.
| Model | QA (F1) | Sentiment (F1) | Toxicity (F1) | Eng>Indo (ChrF++) | Indo>Eng (ChrF++) | Summary (ROUGE-L) | NLI (Acc) | Causal (Acc) |
|--------------------------------|---------|----------------|---------------|-------------------|-------------------|-------------------|-----------|--------------|
| SEA-LION-7B-Instruct-Research | 24.86 | 76.13 | 24.45 | 52.50 | 46.82 | 15.44 | 33.20 | 23.80 |
| SEA-LION-7B-Instruct | **68.41**| **91.45** | 17.98 | 57.48 | 58.04 | **17.54** | 53.10 | 60.80 |
| SeaLLM 7B v1 | 30.96 | 56.29 | 22.60 | 62.23 | 41.55 | 14.03 | 26.50 | 56.60 |
| SeaLLM 7B v2 | 44.40 | 80.13 | **55.24** | 64.01 | **63.28** | 17.31 | 43.60 | 82.00 |
| Sailor-7B (Base) | 65.43 | 59.48 | 20.48 | **64.27** | 60.68 | 8.69 | 15.10 | 38.40 |
| Sailor-7B-Chat | 38.02 | 87.64 | 52.07 | 64.25 | 61.87 | 15.28 | **68.30** |**85.60** |
| Llama 2 7B Chat | 11.12 | 52.32 | 0.00 | 44.09 | 57.58 | 9.24 | 0.00 | 0.00 |
| Mistral 7B Instruct v0.1 | 38.85 | 74.38 | 20.83 | 30.60 | 51.43 | 15.63 | 28.60 | 50.80 |
| GPT-4 (gpt-4-0314) | 73.60 | 74.14 | 63.96 | 69.38 | 67.53 | 18.71 | 83.20 | 96.00 |
- For Natural Language Understanding (NLU) tasks, we tested the model on Sentiment Analysis (`Sentiment`) using the NusaX dataset, Question Answering (`QA`) using the TyDiQA dataset, and Toxicity Detection (`Toxicity`) using the Indonesian Multi-Label Hate Speech Detection dataset. The metrics used are F1 scores for all three tasks.
- For Natural Language Generation (NLG) tasks, we tested the model on Machine Translation from English to Indonesian (`Eng>Indo`) and from Indonesian to English (`Indo>Eng`) using the FLORES-200 dataset, and Abstractive Summarization (`Summary`) using the XLSum dataset. The metrics used for Machine Translation and Abstractive Summarization are ChrF++ and ROUGE-L respectively.
- For Natural Language Reasoning (NLR) tasks, we tested the model on Natural Language Inference (`NLI`) using the IndoNLI lay dataset and on Causal Reasoning (`Causal`) using the XCOPA dataset. The metrics are based on accuracy for both tasks.
### Usage
SEA-LION can be run using the 🤗 Transformers library.
```python
# Please use transformers==4.37.2
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("aisingapore/sea-lion-7b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("aisingapore/sea-lion-7b-instruct", trust_remote_code=True)
prompt_template = "### USER:\n{human_prompt}\n\n### RESPONSE:\n"
prompt = """Apa sentimen dari kalimat berikut ini?
Kalimat: Buku ini sangat membosankan.
Jawaban: """
full_prompt = prompt_template.format(human_prompt=prompt)
tokens = tokenizer(full_prompt, return_tensors="pt")
output = model.generate(tokens["input_ids"], max_new_tokens=20, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
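The same prompt template can wrap other instructions; the small variation below reuses the model and tokenizer from the snippet above (the prompt is illustrative).

```python
# reuse of the snippet above with a different instruction
prompt = "Terjemahkan kalimat berikut ke dalam bahasa Inggris: 'Selamat pagi, apa kabar?'"
full_prompt = prompt_template.format(human_prompt=prompt)
tokens = tokenizer(full_prompt, return_tensors="pt")
output = model.generate(tokens["input_ids"], max_new_tokens=40, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```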
### Prompting Guide
_Coming soon_
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Firstly, like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning. Finally, it should be noted that the model has not been optimized for multi-turn dialogue interactions, which may result in reduced effectiveness in extended conversations.
## Limitations
### Safety
Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
### Commercially Non-Permissive and Commercially Permissive SEA-LION Releases
The previous release of the commercially non-permissive SEA-LION-Instruct-Research enabled us to explore the full research potential of SEA-LION when allowed to take full advantage of what is publicly available. In contrast, in building the commercially permissive SEA-LION-7B-Instruct, we had to leave out high-quality instruction data that was either proprietary, restricted by non-commercial licenses or in a legal gray area, leaving us with a much smaller proportion of commercially permissive data to work with — a problem that is even more pronounced for low-resource languages. We thus hope this will sound a call to action for more initiatives to create commercially viable data in the region, enabling practical benefits for all.
## Technical Specifications
### Fine-Tuning Details
The SEA-LION-7B-Instruct was fine-tuned using 8x A100-40GB using parameter efficient fine tuning in the form of LoRA.
## Data
SEA-LION-7B-Instruct was trained on a wide range of instructions that were manually and stringently verified by our team. A large portion of the effort was dedicated to ensuring that each instruction-completion pair that the model sees is of a high quality and any errors were corrected and rewritten by native speakers or else dropped from our mix.
In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
Link to dataset: _coming soon_
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Lau Wayne<br>
Leong Wei Qi<br>
Li Yier<br>
Liu Bing Jie Darius<br>
Lovenia Holy<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Tat-Wee David<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teng Walter<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
| [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] |
RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-09-28T18:18:00 | 2024-09-28T19:35:25 | 71 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Flow-Judge-v0.1 - GGUF
- Model creator: https://huggingface.co/flowaicom/
- Original model: https://huggingface.co/flowaicom/Flow-Judge-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Flow-Judge-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q2_K.gguf) | Q2_K | 1.32GB |
| [Flow-Judge-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [Flow-Judge-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Flow-Judge-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Flow-Judge-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [Flow-Judge-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q3_K.gguf) | Q3_K | 1.82GB |
| [Flow-Judge-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [Flow-Judge-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [Flow-Judge-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Flow-Judge-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Flow-Judge-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Flow-Judge-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Flow-Judge-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q4_K.gguf) | Q4_K | 2.23GB |
| [Flow-Judge-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [Flow-Judge-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Flow-Judge-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Flow-Judge-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Flow-Judge-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q5_K.gguf) | Q5_K | 2.62GB |
| [Flow-Judge-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [Flow-Judge-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Flow-Judge-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q6_K.gguf) | Q6_K | 2.92GB |
| [Flow-Judge-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/flowaicom_-_Flow-Judge-v0.1-gguf/blob/main/Flow-Judge-v0.1.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
language:
- en
license: apache-2.0
license_link: https://huggingface.co/flowaicom/Flow-Judge-v0.1/resolve/main/LICENSE
tags:
- lm-judge
- evaluation
- nlp
datasets:
- flowaicom/Flow-Judge-v0.1-binary-heldout
- flowaicom/Flow-Judge-v0.1-3-likert-heldout
- flowaicom/Flow-Judge-v0.1-5-likert-heldout
pipeline_tag: text-generation
library_name: transformers
metrics:
- accuracy
- f1
- precision
- recall
- pearsonr
- spearmanr
- kendall-tau
base_model:
- microsoft/Phi-3.5-mini-instruct
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63368577d184e6b53c50e6d0/6kSJKgPh2pDh4tA-Ky0xW.png" alt="Centered image">
</p>
<p align="center">🚀 <a href="https://www.flow-ai.com/judge">Flow Judge</a> | 📄 <a href="https://www.flow-ai.com/blog/flow-judge">Technical report</a> | 💻 <a href="https://github.com/flowaicom/flow-judge">flow-judge</a></p>
## Model Summary
Flow-Judge-v0.1 is a compact yet powerful 3.8B model that offers customizable LLM system evaluations across various fields. The model inherits its architecture from the Phi-3.5-mini-instruct model, which enables Flow-Judge to deliver high-quality results while maintaining a small footprint. Despite its smaller size, it achieves performance comparable to larger models in both held-out and out-of-domain benchmarks. Flow-Judge-v0.1 supports multiple scoring scales, provides qualitative feedback, and generates structured evaluation outputs. Trained on a smaller synthetic dataset, it represents an efficient approach to AI development. Released under the Apache 2.0 license, Flow Judge is an open and accessible model suitable for developers and companies seeking cost-effective and rapid evaluations using custom rubrics.
__Quantized weights__
- [flowaicom/Flow-Judge-v0.1-AWQ](https://huggingface.co/flowaicom/Flow-Judge-v0.1-AWQ)
- [flowaicom/Flow-Judge-v0.1-GGUF](https://huggingface.co/flowaicom/Flow-Judge-v0.1-GGUF)
__Quickstart__
- [Quickstart](https://github.com/flowaicom/flow-judge/examples/1_quickstart.ipynb)
## Intended Use Case
Flow Judge is intended to be used on custom LLM system evaluation tasks.
- Customizable evaluations: Users can define their own evaluation criteria and rubrics, tailoring Flow Judge to their specific needs and requirements. This flexibility allows for the creation of highly targeted assessments that accurately measure performance of their LLM system
- Flow Judge supports three different scoring scales:
- Pass/fail: Suitable for binary assessments, such as determining whether a piece of text meets a specific standard or contains errors.
- 3-Likert: Allows for more granular evaluations, with scores ranging from negative to neutral to positive. Useful for assessing the overall quality or sentiment of a piece of text.
- 5-Likert: Provides an even more nuanced assessment, with scores ranging from strongly negative to strongly positive, enabling users to capture subtle differences in quality or sentiment.
- Easy to interpret results:
- Flow Judge produces structured evaluations with `<feedback>` and `<score>` tags.
- Qualitative feedback: Flow Judge detects errors and grades outputs and provides qualitative feedback that explains its reasoning for assigning a particular score from the rubric while highlighting problematic parts of the responses.
- Score: Based on a grading rubric Flow Judge will return a numerical score on binary, likert-3 or likert-5 scale.
## Training
### Model
Flow Judge is based on the Phi-3.5-mini architecture, and the base model checkpoint used is specifically its instruct version. The model uses the same tokenizer, supports MQA and Flash Attention 2, and has weights in bfloat16 precision. However, post-finetuning, the model's support for languages and long context lengths has not been fully tested. Due to specialized Supervised Fine-Tuning (SFT), Flow Judge might show different benchmark results and support a maximum context length of 8192, shorter than the base model's.
### Training Datasets
Flow-Judge-v0.1 has been trained on synthetically generated datasets. The construction of training datasets for Flow Judge involves a multi-step process:
1. Manually curating seed rubrics to serve as a foundation
2. Synthetically generating domain-adapted metrics and rubrics for various domains
3. Synthetically generating training instances with multiple inputs, such as user queries and contextual information
4. Employing a dual-evaluation strategy with consensus to ensure quality and consistency
This process creates a comprehensive and diverse set of training instances that enable accurate, domain-specific evaluations of LLM systems in generative AI products while minimizing human intervention.
Read more about the dataset construction from [here](https://www.flow-ai.com/blog/flow-judge#dataset-construction)
### Fine-tuning
For fine-tuning we used Axolotl's preprocessing to ensure input training data is consistent. We then conducted supervised fine-tuning based on microsoft/Phi-3.5-mini-instruct using RSLoRa. More detailed information about the fine-tuning process is provided in our [technical report](https://www.flow-ai.com/blog/flow-judge#fine-tuning).
## Usage
### Prompt format
#### Prompt template with inputs
```text
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.
You will be provided with the inputs and output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.
# INPUT
Below are the inputs required for performing the task:
<inputs>
{INPUTS}
</inputs>
# OUTPUT
Below is the output of the task:
<output>
{OUTPUT}
</output>
# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
{EVALUATION_CRITERIA}
</evaluation_criteria>
<scoring_rubric>
{RUBRIC}
</scoring_rubric>
# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the inputs and output: Look at the inputs provided for the task. Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion,decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.
## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.
Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
```
#### Prompt template without inputs
```text
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.
You will be provided the output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.
# OUTPUT
Below is the output of the task:
<output>
{OUTPUT}
</output>
# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
{EVALUATION_CRITERIA}
</evaluation_criteria>
<scoring_rubric>
{RUBRIC}
</scoring_rubric>
# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the output: Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion,decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.
## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.
Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
```
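Both templates can be filled with plain Python string formatting; the sketch below is a minimal example (the placeholder values are illustrative, and `PROMPT_TEMPLATE` is assumed to hold the template text above verbatim). The formatted example in the next subsection shows what a fully filled prompt looks like.

```python
PROMPT_TEMPLATE = "..."  # the "with inputs" template shown above, stored verbatim

# the placeholder names match the template; the values here are illustrative only
prompt = PROMPT_TEMPLATE.format(
    INPUTS="<customer_query>...</customer_query>",
    OUTPUT="<response>...</response>",
    EVALUATION_CRITERIA="How well does the response address the customer's query?",
    RUBRIC="- Score 1: ...\n- Score 5: ...",
)
```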
#### Formatted prompt example with inputs
```text
# GOAL
Your job is to evaluate a task carried out by an AI system powered by a large language model.
You will be provided with the inputs and output of the task, as well as the evaluation criteria and scoring rubric. Your task is to evaluate the output of the AI system based on the evaluation criteria and scoring rubric provided.
# INPUT/s
Below are the inputs required for performing the task:
<inputs>
<customer_query>
Subject: Issues with recent order #12345
Hello,
I recently received my order #12345, but I'm having a few problems:
1. The blue t-shirt I ordered is actually green. Can I exchange it?
2. The phone case doesn't fit my iPhone 13 Pro Max. The website said it was compatible.
3. I was charged for express shipping, but it took over a week to arrive. Can I get a refund on the shipping cost?
Please let me know how we can resolve these issues. I've been a loyal customer for years and this is the first time I've had problems like this.
Thank you,
Jane Smith
</customer_query>
</inputs>
# OUTPUT
Below is the output of the task:
<output>
Dear Jane Smith,
Thank you for reaching out about the issues with your recent order #12345. We sincerely apologize for the inconvenience you've experienced. Let me address each of your concerns:
1. T-shirt color discrepancy: We're sorry about the mix-up with the t-shirt color. We'd be happy to exchange it for the correct blue t-shirt. I'll send you a prepaid return label for the green shirt, and we'll ship out the blue one as soon as we receive the return.
2. Phone case compatibility: We apologize for the confusion regarding the phone case compatibility. Our website information should have been more accurate. We'll gladly accept a return of the incompatible case and issue a full refund for it. Additionally, I can recommend a case that will fit your iPhone 13 Pro Max if you're interested.
3. Shipping charge: You're absolutely right about the shipping charge. Since the delivery didn't meet our express shipping standards, we will refund the entire shipping cost to your original payment method. This refund should process within 3-5 business days.
To initiate these resolutions, please reply to this email confirming that you'd like to proceed with the t-shirt exchange and phone case return. Once confirmed, I'll send you the necessary return labels and process the shipping refund immediately.
We truly value your loyalty as a long-time customer and sincerely regret that you've encountered these issues. Rest assured, we're taking steps to prevent similar problems in the future. As a gesture of goodwill, we'd like to offer you a 20% discount on your next order.
If you have any further questions or concerns, please don't hesitate to reach out. We're here to ensure your complete satisfaction.
Best regards,
Alex Johnson
Customer Service Representative
</output>
# EVALUATION CRITERIA AND SCORING RUBRIC
Here are the evaluation criteria and the rubric that you need to use for evaluating the task:
<evaluation_criteria>
How well the response addresses the specific issues raised in the customer's query?
</evaluation_criteria>
<scoring_rubric>
- Score 1: The response completely fails to address the customer's needs and ignores the specific issues raised.
- Score 2: The response barely addresses the customer's query and misses most of the specific issues raised.
- Score 3: The response partially addresses the customer's query, touching on some of the specific issues but leaving others unaddressed.
- Score 4: The response adequately addresses most aspects of the customer's query and the specific issues raised.
- Score 5: The response fully and comprehensively addresses all aspects of the customer's query and all specific issues raised in a highly satisfactory manner.
</scoring_rubric>
# INSTRUCTIONS FOR THE EVALUATION
1. Understand the task and criteria: Familiarize yourself with the task to be evaluated. Review the evaluation criteria and scoring rubric to understand the different levels of performance and the descriptions for each score.
2. Review the inputs and output: Look at the inputs provided for the task. Examine the output generated from completing the task.
3. Compare output to score descriptions: Compare the output against the criteria and score descriptions in the scoring rubric. For each criterion,decide which description best matches the output.
4. After comparing the output to the score descriptions, pay attention to the small details that might impact the final score that you assign. Sometimes a small difference can dictate the final score.
5. Write verbal feedback justifying your evaluation that includes a detailed rationale, referring to specific aspects of the output and comparing them to the rubric.
6. Assign a final score based on the scoring rubric.
## FORMAT FOR THE EVALUATION
- Write the verbal feedback inside <feedback> tags without any additional surrounding text.
- Write the numeric score inside <score> tags, without any additional surrounding text and always after the feedback.
Please accurately evaluate the task. Strictly adhere to the evaluation criteria and rubric.
```
>Note that inputs and output are formatted with XML tags. See [flow-judge](https://github.com/flowaicom/flow-judge) repository formatting functions for more details.
### Inference
Evaluations can easily be run using our [flow-judge](https://github.com/flowaicom/flow-judge) library. It currently supports both the Transformers and vLLM engines; a minimal plain-Transformers sketch is included after the hardware requirements below.
To run Flow Judge efficiently, ensure your hardware meets the following requirements:
- Modern GPU with at least 4 GB VRAM (e.g., NVIDIA RTX series)
- Minimum of 8 GB of system memory
- At least 10GB of free storage for model files and dependencies.
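For users not going through the flow-judge library, the following is a minimal plain-Transformers sketch (assumptions: `FILLED_PROMPT` stands in for a prompt built from the templates above, and the chat template shipped with the model is used).

```python
import re
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flowaicom/Flow-Judge-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

FILLED_PROMPT = "..."  # placeholder: a prompt built from the templates above

messages = [{"role": "user", "content": FILLED_PROMPT}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
text = tokenizer.decode(
    model.generate(input_ids, max_new_tokens=512)[0], skip_special_tokens=True
)

# the evaluation comes back inside <feedback> and <score> tags
feedback = re.search(r"<feedback>(.*?)</feedback>", text, re.DOTALL)
score = re.search(r"<score>\s*(\d+)\s*</score>", text)
print(feedback.group(1).strip() if feedback else text)
print(int(score.group(1)) if score else None)
```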
## Evaluation
### Held-out test sets
<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<thead>
<tr>
<th rowspan="2" style="text-align: left;">Evaluator</th>
<th colspan="3" style="text-align: center;">Pass / Fail Held-out Test set</th>
</tr>
<tr>
<th style="text-align: center;">Precision</th>
<th style="text-align: center;">Recall</th>
<th style="text-align: center;">F1</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align: center;">0.685</td>
<td style="text-align: center;"><strong>1.000</strong></td>
<td style="text-align: center;">0.813</td>
</tr>
<tr>
<td style="text-align: left;">meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align: center;"><u>0.870</u></td>
<td style="text-align: center;">0.982</td>
<td style="text-align: center;"><u>0.923</u></td>
</tr>
<tr>
<td style="text-align: left;">mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align: center;">0.709</td>
<td style="text-align: center;"><u>0.994</u></td>
<td style="text-align: center;">0.827</td>
</tr>
<tr>
<td style="text-align: left;">gpt-4o-mini</td>
<td style="text-align: center;">0.834</td>
<td style="text-align: center;">1.000</td>
<td style="text-align: center;">0.910</td>
</tr>
<tr>
<td style="text-align: left;">flowaicom/Flow-Judge-v0.1</td>
<td style="text-align: center;"><strong>0.940</strong></td>
<td style="text-align: center;">0.972</td>
<td style="text-align: center;"><strong>0.955</strong></td>
</tr>
</tbody>
</table>
<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<thead>
<tr>
<th rowspan="2" style="text-align: left;">Evaluator</th>
<th colspan="3" style="text-align: center;">3-Likert Held-out Test set</th>
<th colspan="3" style="text-align: center;">5-Likert Held-out Test set</th>
</tr>
<tr>
<th style="text-align: center;">pearsonr</th>
<th style="text-align: center;">spearmanr</th>
<th style="text-align: center;">kendall-tau</th>
<th style="text-align: center;">pearsonr</th>
<th style="text-align: center;">spearmanr</th>
<th style="text-align: center;">kendall-tau</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align: center;">0.756</td>
<td style="text-align: center;">0.749</td>
<td style="text-align: center;">0.695</td>
<td style="text-align: center;">0.808</td>
<td style="text-align: center;">0.819</td>
<td style="text-align: center;">0.739</td>
</tr>
<tr>
<td style="text-align: left;">prometheus-eval/prometheus-7b-v2.0*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><u>0.910</u></td>
<td style="text-align: center;"><u>0.908</u></td>
<td style="text-align: center;"><u>0.838</u></td>
</tr>
<tr>
<td style="text-align: left;">meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align: center;"><u>0.836</u></td>
<td style="text-align: center;"><u>0.833</u></td>
<td style="text-align: center;"><u>0.789</u></td>
<td style="text-align: center;">0.854</td>
<td style="text-align: center;">0.868</td>
<td style="text-align: center;">0.791</td>
</tr>
<tr>
<td style="text-align: left;">mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align: center;">0.813</td>
<td style="text-align: center;">0.807</td>
<td style="text-align: center;">0.758</td>
<td style="text-align: center;">0.870</td>
<td style="text-align: center;">0.867</td>
<td style="text-align: center;">0.789</td>
</tr>
<tr>
<td style="text-align: left;">gpt-4o-mini</td>
<td style="text-align: center;">0.890</td>
<td style="text-align: center;">0.888</td>
<td style="text-align: center;">0.851</td>
<td style="text-align: center;">0.923</td>
<td style="text-align: center;">0.923</td>
<td style="text-align: center;">0.864</td>
</tr>
<tr>
<td style="text-align: left;">flowaicom/Flow-Judge-v0.1</td>
<td style="text-align: center;"><strong>0.888</strong></td>
<td style="text-align: center;"><strong>0.888</strong></td>
<td style="text-align: center;"><strong>0.852</strong></td>
<td style="text-align: center;"><strong>0.919</strong></td>
<td style="text-align: center;"><strong>0.919</strong></td>
<td style="text-align: center;"><strong>0.856</strong></td>
</tr>
</tbody>
</table>
\* _Reported in model paper_
### RAGTruth
<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<tr>
<th rowspan="2" style="text-align: left;">Evaluator</th>
<th colspan="3" style="text-align:center;">RAGTruth QA</th>
<th colspan="3" style="text-align:center;">RAGTruth Data-to-Text</th>
<th colspan="3" style="text-align:center;">RAGTruth Summarization</th>
</tr>
<tr>
<th style="text-align:center;">Precision</th>
<th style="text-align:center;">Recall</th>
<th style="text-align:center;">F1</th>
<th style="text-align:center;">Precision</th>
<th style="text-align:center;">Recall</th>
<th style="text-align:center;">F1</th>
<th style="text-align:center;">Precision</th>
<th style="text-align:center;">Recall</th>
<th style="text-align:center;">F1</th>
</tr>
<tr>
<td>microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align:center;">0.817</td>
<td style="text-align:center;">0.963</td>
<td style="text-align:center;">0.884</td>
<td style="text-align:center;">0.356</td>
<td style="text-align:center;"><strong>1.000</strong></td>
<td style="text-align:center;">0.525</td>
<td style="text-align:center;">0.776</td>
<td style="text-align:center;"><strong>1.000</strong></td>
<td style="text-align:center;"><strong>0.874</strong></td>
</tr>
<tr>
<td>meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align:center;"><strong>0.844</strong></td>
<td style="text-align:center;"><u>0.986</u></td>
<td style="text-align:center;"><strong>0.910</strong></td>
<td style="text-align:center;">0.382</td>
<td style="text-align:center;">0.537</td>
<td style="text-align:center;">0.447</td>
<td style="text-align:center;"><u>0.797</u></td>
<td style="text-align:center;"><u>0.940</u></td>
<td style="text-align:center;">0.863</td>
</tr>
<tr>
<td>mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align:center;">0.821</td>
<td style="text-align:center;"><strong>0.995</strong></td>
<td style="text-align:center;"><u>0.900</u></td>
<td style="text-align:center;">0.357</td>
<td style="text-align:center;"><strong>1.000</strong></td>
<td style="text-align:center;">0.526</td>
<td style="text-align:center;">0.775</td>
<td style="text-align:center;"><strong>1.000</strong></td>
<td style="text-align:center;"><u>0.873</u></td>
</tr>
<tr>
<td>gpt-4o-mini</td>
<td style="text-align:center;">0.830</td>
<td style="text-align:center;">0.966</td>
<td style="text-align:center;">0.893</td>
<td style="text-align:center;">0.398</td>
<td style="text-align:center;">0.994</td>
<td style="text-align:center;">0.569</td>
<td style="text-align:center;">0.786</td>
<td style="text-align:center;">0.997</td>
<td style="text-align:center;">0.879</td>
</tr>
<tr>
<td>Luna*</td>
<td style="text-align:center;">0.378</td>
<td style="text-align:center;">0.800</td>
<td style="text-align:center;">0.513</td>
<td style="text-align:center;">0.649</td>
<td style="text-align:center;">0.912</td>
<td style="text-align:center;"><u>0.759</u></td>
<td style="text-align:center;">0.400</td>
<td style="text-align:center;">0.765</td>
<td style="text-align:center;">0.525</td>
</tr>
<tr>
<td>RAGAS Faithfuless*</td>
<td style="text-align:center;">0.312</td>
<td style="text-align:center;">0.419</td>
<td style="text-align:center;">0.357</td>
<td style="text-align:center;"><strong>0.792</strong></td>
<td style="text-align:center;">0.508</td>
<td style="text-align:center;">0.619</td>
<td style="text-align:center;">0.642</td>
<td style="text-align:center;">0.299</td>
<td style="text-align:center;">0.408</td>
</tr>
<tr>
<td>Trulens Groundedness*</td>
<td style="text-align:center;">0.228</td>
<td style="text-align:center;">0.925</td>
<td style="text-align:center;">0.366</td>
<td style="text-align:center;"><u>0.669</u></td>
<td style="text-align:center;"><u>0.965</u></td>
<td style="text-align:center;"><strong>0.790</strong></td>
<td style="text-align:center;">0.402</td>
<td style="text-align:center;">0.500</td>
<td style="text-align:center;">0.445</td>
</tr>
<tr>
<td>flowaicom/Flow-Judge-v0.1</td>
<td style="text-align:center;"><u>0.835</u></td>
<td style="text-align:center;">0.961</td>
<td style="text-align:center;">0.894</td>
<td style="text-align:center;">0.541</td>
<td style="text-align:center;">0.249</td>
<td style="text-align:center;">0.341</td>
<td style="text-align:center;"><strong>0.834</strong></td>
<td style="text-align:center;">0.836</td>
<td style="text-align:center;">0.835</td>
</tr>
</table>
\* _reported in model paper_
### HaluEval, Covid-QA, PubMedQA
<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<thead>
<tr>
<th rowspan="2" style="text-align: left;">Evaluator</th>
<th colspan="4" style="text-align: center;">HaluEval</th>
<th colspan="4" style="text-align: center;">Covid-QA</th>
<th colspan="4" style="text-align: center;">PubMedQA</th>
</tr>
<tr>
<th style="text-align: center;">Precision</th>
<th style="text-align: center;">Recall</th>
<th style="text-align: center;">F1</th>
<th style="text-align: center;">Accuracy</th>
<th style="text-align: center;">Precision</th>
<th style="text-align: center;">Recall</th>
<th style="text-align: center;">F1</th>
<th style="text-align: center;">Accuracy</th>
<th style="text-align: center;">Precision</th>
<th style="text-align: center;">Recall</th>
<th style="text-align: center;">F1</th>
<th style="text-align: center;">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align: center;">0.730</td>
<td style="text-align: center;"><u>0.914</u></td>
<td style="text-align: center;">0.812</td>
<td style="text-align: center;">0.788</td>
<td style="text-align: center;">0.617</td>
<td style="text-align: center;">0.964</td>
<td style="text-align: center;">0.752</td>
<td style="text-align: center;">0.681</td>
<td style="text-align: center;">0.623</td>
<td style="text-align: center;"><u>0.986</u></td>
<td style="text-align: center;">0.764</td>
<td style="text-align: center;">0.696</td>
</tr>
<tr>
<td style="text-align: left;">meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align: center;"><strong>0.864</strong></td>
<td style="text-align: center;">0.891</td>
<td style="text-align: center;"><strong>0.878</strong></td>
<td style="text-align: center;"><u>0.874</u></td>
<td style="text-align: center;"><u>0.663</u></td>
<td style="text-align: center;"><u>0.976</u></td>
<td style="text-align: center;"><u>0.790</u></td>
<td style="text-align: center;">0.734</td>
<td style="text-align: center;"><u>0.681</u></td>
<td style="text-align: center;">0.962</td>
<td style="text-align: center;"><strong>0.797</strong></td>
<td style="text-align: center;">0.750</td>
</tr>
<tr>
<td style="text-align: left;">mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align: center;">0.655</td>
<td style="text-align: center;"><strong>0.993</strong></td>
<td style="text-align: center;">0.789</td>
<td style="text-align: center;">0.735</td>
<td style="text-align: center;">0.651</td>
<td style="text-align: center;"><strong>0.982</strong></td>
<td style="text-align: center;">0.783</td>
<td style="text-align: center;">0.728</td>
<td style="text-align: center;">0.602</td>
<td style="text-align: center;"><strong>0.994</strong></td>
<td style="text-align: center;"><u>0.750</u></td>
<td style="text-align: center;">0.669</td>
</tr>
<tr>
<td style="text-align: left;">gpt-4o-mini</td>
<td style="text-align: center;">0.846</td>
<td style="text-align: center;">0.940</td>
<td style="text-align: center;">0.891</td>
<td style="text-align: center;">0.885</td>
<td style="text-align: center;">0.795</td>
<td style="text-align: center;">0.964</td>
<td style="text-align: center;">0.872</td>
<td style="text-align: center;">0.858</td>
<td style="text-align: center;">0.791</td>
<td style="text-align: center;">0.904</td>
<td style="text-align: center;">0.843</td>
<td style="text-align: center;">0.832</td>
</tr>
<tr>
<td style="text-align: left;">flowaicom/Flow-Judge-v0.1</td>
<td style="text-align: center;"><u>0.826</u></td>
<td style="text-align: center;">0.895</td>
<td style="text-align: center;"><u>0.859</u></td>
<td style="text-align: center;">0.854</td>
<td style="text-align: center;"><strong>0.767</strong></td>
<td style="text-align: center;">0.877</td>
<td style="text-align: center;"><strong>0.818</strong></td>
<td style="text-align: center;">0.807</td>
<td style="text-align: center;"><strong>0.874</strong></td>
<td style="text-align: center;">0.624</td>
<td style="text-align: center;">0.728</td>
<td style="text-align: center;">0.767</td>
</tr>
<tr>
<td style="text-align: left;">gpt-4o*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.879</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.821</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.821</td>
</tr>
<tr>
<td style="text-align: left;">Claude 3 Sonnet*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.845</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.829</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.829</td>
</tr>
<tr>
<td style="text-align: left;">RAGAS Faithfulness*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.706</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.750</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.669</td>
</tr>
<tr>
<td style="text-align: left;">Lynx 8B*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">0.857</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><u>0.963</u></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><u>0.852</u></td>
</tr>
<tr>
<td style="text-align: left;">Lynx 70B*</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><strong>0.884</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><strong>0.975</strong></td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;">-</td>
<td style="text-align: center;"><strong>0.904</strong></td>
</tr>
</tbody>
</table>
\* _reported in model paper_
### Feedback Bench
<table border="1" cellpadding="10" cellspacing="0" style="border-collapse: collapse; width: auto;">
<tr>
<th rowspan="2">Evaluator</th>
<th colspan="3" style="text-align:center;">Feedback bench</th>
</tr>
<tr>
<th style="text-align:center;">pearsonr</th>
<th style="text-align:center;">spearmanr</th>
<th style="text-align:center;">kendall-tau</th>
</tr>
<tr>
<td>microsoft/Phi-3.5-mini-instruct</td>
<td style="text-align:center;">0.710</td>
<td style="text-align:center;">0.721</td>
<td style="text-align:center;">0.622</td>
</tr>
<tr>
<td>prometheus-eval/prometheus-7b-v2.0*</td>
<td style="text-align:center;"><strong>0.878</strong></td>
<td style="text-align:center;"><strong>0.909</strong></td>
<td style="text-align:center;"><strong>0.773</strong></td>
</tr>
<tr>
<td>meta-llama/Meta-Llama-3.1-8B-Instruct</td>
<td style="text-align:center;">0.742</td>
<td style="text-align:center;">0.749</td>
<td style="text-align:center;">0.654</td>
</tr>
<tr>
<td>mistralai/Mistral-Nemo-Instruct-2407</td>
<td style="text-align:center;">0.720</td>
<td style="text-align:center;">0.724</td>
<td style="text-align:center;">0.632</td>
</tr>
<tr>
<td>gpt-4o-mini</td>
<td style="text-align:center;">0.797</td>
<td style="text-align:center;">0.795</td>
<td style="text-align:center;">0.701</td>
</tr>
<tr>
<td>flowaicom/Flow-Judge-v0.1</td>
<td style="text-align:center;"><u>0.787</u></td>
<td style="text-align:center;"><u>0.789</u></td>
<td style="text-align:center;"><u>0.688</u></td>
</tr>
</table>
\* _reported in model paper using reference answers_
## License
We opted for the Apache 2.0 license for Flow Judge to provide the community with an open, small yet powerful LM evaluator. Our goal is to support the wider adoption of rigorous evaluation techniques in LLM system development, making them more accessible to practitioners and researchers.
## Limitations and future work
Multilingual evaluation: Flow Judge has been fine-tuned exclusively on English data. While the foundation model (Phi-3.5-mini-instruct [17]) may possess multilingual capabilities, we have not systematically evaluated Flow Judge performance in non-English contexts. We plan to explore multi-lingual LM evaluators in the future.
Long context and structured inputs: Our training dataset encompasses a wide range of custom metrics relevant to evaluating LLM systems. However, it does not include examples with long context inputs or structured data formats such as JSON, since these are harder to generate synthetically. This limitation may impact Flow Judge's performance when evaluating responses that require processing extensive context or parsing structured input. Extending our model's capabilities to handle these input types represents an important area for future research.
Math and coding: The current version has not been trained on specific task domains such as arithmetic problems or code evaluation. As a result, its performance in these specialized areas may be limited. Future iterations of the model should address these gaps.
Domain-specific knowledge and complex multi-step evaluations: Flow Judge may struggle with highly specialized domain knowledge or proprietary data outside the training scope of its foundation model. Additionally, evaluation tasks requiring multi-step reasoning or complex logical processes may challenge the model's capabilities. We strongly recommend conducting meta-evaluations of the model performance before deploying it in specialized or highly complex evaluation scenarios.
| [
"SUMMARIZATION"
] | [
"PUBMEDQA"
] |
medspaner/roberta-es-clinical-trials-neg-spec-ner | medspaner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-07-20T10:38:56 | 2024-10-01T06:43:00 | 70 | 0 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: Pacientes sanos, sin ninguna enfermedad, que no tomen ningún medicamento
- text: Sujetos adultos con cáncer de próstata asintomáticos y no tratados previamente
- text: Probable infección por SARS-CoV-2 y sospecha de enfermedad autoinmune
model-index:
- name: roberta-es-clinical-trials-neg-spec-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-es-clinical-trials-neg-spec-ner
This named entity recognition model detects negation and speculation entities, and negated and speculated concepts:
- Neg_cue: negation cue (e.g. *no*, *sin*)
- Negated: negated entity or event (e.g. *sin **dolor***)
- Spec_cue: speculation cue (e.g. *posiblemente*)
- Speculated: speculated entity or event (e.g. *posiblemente **sobreviva***)
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.855 (±0.005)
- Recall: 0.864 (±0.008)
- F1: 0.859 (±0.006)
- Accuracy: 0.986 (±0.001)
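As an illustration only (this snippet is not part of the original training or evaluation code), the model can be loaded with the standard Hugging Face `token-classification` pipeline; the exact label strings returned come from the model's configuration:
```python
from transformers import pipeline

# Minimal usage sketch with the standard token-classification pipeline
ner = pipeline(
    "token-classification",
    model="medspaner/roberta-es-clinical-trials-neg-spec-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into full entity spans
)

text = "Pacientes sanos, sin ninguna enfermedad, que no tomen ningún medicamento"
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"], round(entity["score"], 3))
```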
## Model description
This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/).
It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials.
The model is fine-tuned on the [NUBEs corpus (Lima et al. 2020)](https://aclanthology.org/2020.lrec-1.708/) and on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
If you use this model, please, cite as follows:
```
@article{campillosetal2024,
title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
year={2024},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a general purpose and may have biases and/or other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are:
1) The [Negation and Uncertainty in Spanish Corpus (NUBes)](https://github.com/Vicomtech/NUBes-negation-uncertainty-biomedical-corpus)
It is a collection of 29 682 sentences (518 068 tokens) from anonymised health records in Spanish, annotated with negation and uncertainty cues and their scopes.
2) The [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1,200 texts about clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please, cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: average 10.8 epochs (±1.9); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5)
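For illustration only, a hypothetical mapping of these settings onto the Transformers `Trainer` API (the original training script is not included in this card, so names such as the output directory, epoch cap, seed, and best-model metric are assumptions):
```python
from transformers import TrainingArguments, EarlyStoppingCallback

# Hypothetical reconstruction of the reported hyperparameters
training_args = TrainingArguments(
    output_dir="roberta-es-clinical-trials-neg-spec-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    lr_scheduler_type="linear",
    num_train_epochs=20,              # upper bound; early stopping decides the actual number
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",       # assumed; required by the early-stopping callback
    seed=42,                          # one of the seeds used across the 5 rounds
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer
early_stopping = EarlyStoppingCallback(early_stopping_patience=5)
```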
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.855 (±0.005) | 0.864 (±0.008) | 0.859 (±0.006) | 0.986 (±0.001) |
**Results per class (test set; average and standard deviation of 5 rounds with different seeds)**
| Class | Precision | Recall | F1 | Support |
|:-----------:|:--------------:|:--------------:|:--------------:|:---------:|
| Neg_cue | 0.955 (±0.006) | 0.958 (±0.006) | 0.957 (±0.005) | 2484 |
| Negated | 0.829 (±0.005) | 0.837 (±0.014) | 0.833 (±0.008) | 3160 |
| Spec_cue | 0.834 (±0.021) | 0.859 (±0.017) | 0.846 (±0.007) | 756 |
| Speculated | 0.708 (±0.019) | 0.719 (±0.016) | 0.713 (±0.016) | 1008 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] |
gliner-community/gliner_xxl-v2.5 | gliner-community | token-classification | [
"gliner",
"pytorch",
"token-classification",
"multilingual",
"dataset:urchade/pile-mistral-v0.1",
"arxiv:2311.08526",
"license:apache-2.0",
"region:us"
] | 2024-08-25T14:00:37 | 2024-08-31T13:39:11 | 70 | 3 | ---
datasets:
- urchade/pile-mistral-v0.1
language:
- multilingual
library_name: gliner
license: apache-2.0
pipeline_tag: token-classification
---
# About
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios.
## Links
* Paper: https://arxiv.org/abs/2311.08526
* Repository: https://github.com/urchade/GLiNER
## Installation
To use this model, you must install the GLiNER Python library:
```
!pip install gliner -U
```
## Usage
Once you've installed the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("gliner-community/gliner_xxl-v2.5", load_tokenizer=True)
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
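You can also tighten or relax the confidence cut-off of the predicted spans; a small sketch (the `threshold` argument is assumed here, and its default may differ across GLiNER versions):
```python
# Keep only higher-confidence spans (the assumed default threshold is 0.5)
entities = model.predict_entities(text, labels, threshold=0.7)
for entity in entities:
    print(entity["text"], "=>", entity["label"])
```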
## Named Entity Recognition benchmark result
Below is a comparison of results between previous versions of the model and the current one:

### Results on other datasets
| Model | Dataset | Precision | Recall | F1 Score |
|------------------------------------|---------------------|-----------|--------|----------|
| gliner-community/gliner_small-v2.5 | ACE 2004 | 35.18% | 22.81% | 27.67% |
| | ACE 2005 | 35.89% | 22.39% | 27.58% |
| | AnatEM | 49.12% | 31.31% | 38.24% |
| | Broad Tweet Corpus | 59.51% | 77.85% | 67.46% |
| | CoNLL 2003 | 63.16% | 70.43% | 66.60% |
| | FabNER | 23.78% | 22.55% | 23.15% |
| | FindVehicle | 37.46% | 40.06% | 38.72% |
| | GENIA_NER | 45.90% | 54.11% | 49.67% |
| | HarveyNER | 13.20% | 32.58% | 18.78% |
| | MultiNERD | 45.87% | 87.01% | 60.07% |
| | Ontonotes | 23.05% | 41.16% | 29.55% |
| | PolyglotNER | 31.88% | 67.22% | 43.25% |
| | TweetNER7 | 40.98% | 39.91% | 40.44% |
| | WikiANN en | 55.35% | 60.06% | 57.61% |
| | WikiNeural | 64.52% | 86.24% | 73.81% |
| | bc2gm | 51.70% | 49.99% | 50.83% |
| | bc4chemd | 30.78% | 57.56% | 40.11% |
| | bc5cdr | 63.48% | 69.65% | 66.42% |
| | ncbi | 63.36% | 66.67% | 64.97% |
| | **Average** | | | **46.58%** |
|------------------------------------|---------------------|-----------|--------|----------|
| urchade/gliner_small-v2.1 | ACE 2004 | 38.89% | 23.53% | 29.32% |
| | ACE 2005 | 42.09% | 26.82% | 32.76% |
| | AnatEM | 63.71% | 19.45% | 29.80% |
| | Broad Tweet Corpus | 57.01% | 70.49% | 63.04% |
| | CoNLL 2003 | 57.11% | 62.66% | 59.76% |
| | FabNER | 32.41% | 12.33% | 17.87% |
| | FindVehicle | 43.47% | 33.02% | 37.53% |
| | GENIA_NER | 61.03% | 37.25% | 46.26% |
| | HarveyNER | 23.12% | 15.16% | 18.32% |
| | MultiNERD | 43.63% | 83.60% | 57.34% |
| | Ontonotes | 23.25% | 35.41% | 28.07% |
| | PolyglotNER | 29.47% | 64.41% | 40.44% |
| | TweetNER7 | 44.78% | 30.83% | 36.52% |
| | WikiANN en | 52.58% | 58.31% | 55.30% |
| | WikiNeural | 53.38% | 82.19% | 64.72% |
| | bc2gm | 66.64% | 30.56% | 41.90% |
| | bc4chemd | 42.01% | 56.03% | 48.02% |
| | bc5cdr | 72.03% | 58.58% | 64.61% |
| | ncbi | 68.88% | 46.71% | 55.67% |
| | **Average** | | | **43.54%** |
|------------------------------------|---------------------|-----------|--------|----------|
| EmergentMethods/gliner_small-v2.1 | ACE 2004 | 39.92% | 17.50% | 24.34% |
| | ACE 2005 | 38.53% | 16.58% | 23.18% |
| | AnatEM | 55.95% | 25.69% | 35.22% |
| | Broad Tweet Corpus | 66.63% | 72.00% | 69.21% |
| | CoNLL 2003 | 62.89% | 58.96% | 60.86% |
| | FabNER | 32.76% | 13.33% | 18.95% |
| | FindVehicle | 42.93% | 43.20% | 43.06% |
| | GENIA_NER | 51.28% | 43.75% | 47.22% |
| | HarveyNER | 24.82% | 21.52% | 23.05% |
| | MultiNERD | 59.27% | 80.69% | 68.34% |
| | Ontonotes | 32.97% | 37.59% | 35.13% |
| | PolyglotNER | 33.60% | 63.30% | 43.90% |
| | TweetNER7 | 46.90% | 28.66% | 35.58% |
| | WikiANN en | 51.91% | 55.43% | 53.61% |
| | WikiNeural | 70.65% | 82.21% | 75.99% |
| | bc2gm | 49.95% | 43.13% | 46.29% |
| | bc4chemd | 35.88% | 71.64% | 47.81% |
| | bc5cdr | 68.41% | 68.90% | 68.65% |
| | ncbi | 55.31% | 59.87% | 57.50% |
| | **Average** | | | **46.20%** |
|-----------------------------------------|---------------------|-----------|--------|----------|
| gliner-community/gliner_medium-v2.5 | ACE 2004 | 33.06% | 20.96% | 25.66% |
| | ACE 2005 | 33.65% | 19.65% | 24.81% |
| | AnatEM | 52.03% | 35.28% | 42.05% |
| | Broad Tweet Corpus | 60.57% | 79.09% | 68.60% |
| | CoNLL 2003 | 63.80% | 68.31% | 65.98% |
| | FabNER | 26.20% | 22.26% | 24.07% |
| | FindVehicle | 41.95% | 40.68% | 41.30% |
| | GENIA_NER | 51.83% | 62.34% | 56.60% |
| | HarveyNER | 14.04% | 32.17% | 19.55% |
| | MultiNERD | 47.63% | 88.78% | 62.00% |
| | Ontonotes | 21.68% | 38.41% | 27.71% |
| | PolyglotNER | 32.73% | 68.27% | 44.24% |
| | TweetNER7 | 40.39% | 37.64% | 38.97% |
| | WikiANN en | 56.41% | 59.90% | 58.10% |
| | WikiNeural | 65.61% | 86.28% | 74.54% |
| | bc2gm | 55.20% | 56.71% | 55.95% |
| | bc4chemd | 35.94% | 63.67% | 45.94% |
| | bc5cdr | 63.50% | 70.09% | 66.63% |
| | ncbi | 62.96% | 68.55% | 65.63% |
| | **Average** | | | **47.81%** |
|-----------------------------------------|---------------------|-----------|--------|----------|
| urchade/gliner_medium-v2.1 | ACE 2004 | 36.33% | 22.74% | 27.97% |
| | ACE 2005 | 40.49% | 25.46% | 31.27% |
| | AnatEM | 59.75% | 16.87% | 26.31% |
| | Broad Tweet Corpus | 60.89% | 67.25% | 63.91% |
| | CoNLL 2003 | 60.62% | 62.39% | 61.50% |
| | FabNER | 27.72% | 12.24% | 16.98% |
| | FindVehicle | 41.55% | 31.31% | 35.71% |
| | GENIA_NER | 60.86% | 43.93% | 51.03% |
| | HarveyNER | 23.20% | 23.16% | 23.18% |
| | MultiNERD | 41.25% | 83.74% | 55.27% |
| | Ontonotes | 20.58% | 34.11% | 25.67% |
| | PolyglotNER | 31.32% | 64.22% | 42.11% |
| | TweetNER7 | 44.52% | 33.42% | 38.18% |
| | WikiANN en | 54.57% | 56.47% | 55.51% |
| | WikiNeural | 57.60% | 81.57% | 67.52% |
| | bc2gm | 67.98% | 33.45% | 44.84% |
| | bc4chemd | 45.66% | 52.00% | 48.62% |
| | bc5cdr | 72.20% | 58.12% | 64.40% |
| | ncbi | 73.12% | 49.74% | 59.20% |
| | **Average** | | | **44.17%** |
|-----------------------------------------|---------------------|-----------|--------|----------|
| EmergentMethods/gliner_news_medium-v2.1 | ACE 2004 | 39.21% | 17.24% | 23.95% |
| | ACE 2005 | 39.82% | 16.48% | 23.31% |
| | AnatEM | 57.67% | 23.57% | 33.46% |
| | Broad Tweet Corpus | 69.52% | 65.94% | 67.69% |
| | CoNLL 2003 | 68.26% | 58.45% | 62.97% |
| | FabNER | 30.74% | 15.51% | 20.62% |
| | FindVehicle | 40.33% | 37.37% | 38.79% |
| | GENIA_NER | 53.70% | 47.73% | 50.54% |
| | HarveyNER | 26.29% | 27.05% | 26.67% |
| | MultiNERD | 56.78% | 81.96% | 67.08% |
| | Ontonotes | 30.90% | 35.86% | 33.19% |
| | PolyglotNER | 35.98% | 60.96% | 45.25% |
| | TweetNER7 | 52.37% | 30.50% | 38.55% |
| | WikiANN en | 53.81% | 52.29% | 53.04% |
| | WikiNeural | 76.84% | 78.92% | 77.86% |
| | bc2gm | 62.97% | 44.24% | 51.96% |
| | bc4chemd | 44.90% | 65.56% | 53.30% |
| | bc5cdr | 73.93% | 67.03% | 70.31% |
| | ncbi | 69.53% | 60.82% | 64.88% |
| | **Average** | | | **47.55%** |
|-----------------------------------------|---------------------|-----------|--------|----------|
| gliner-community/gliner_large-v2.5 | ACE 2004 | 31.64% | 22.81% | 26.51% |
| | ACE 2005 | 32.10% | 22.56% | 26.49% |
| | AnatEM | 53.64% | 27.82% | 36.64% |
| | Broad Tweet Corpus | 61.93% | 76.85% | 68.59% |
| | CoNLL 2003 | 62.83% | 67.71% | 65.18% |
| | FabNER | 24.54% | 27.03% | 25.73% |
| | FindVehicle | 40.71% | 56.24% | 47.23% |
| | GENIA_NER | 43.56% | 52.56% | 47.64% |
| | HarveyNER | 14.85% | 27.05% | 19.17% |
| | MultiNERD | 38.04% | 89.17% | 53.33% |
| | Ontonotes | 17.28% | 40.16% | 24.16% |
| | PolyglotNER | 32.88% | 63.31% | 43.28% |
| | TweetNER7 | 38.03% | 41.43% | 39.66% |
| | WikiANN en | 57.80% | 60.54% | 59.14% |
| | WikiNeural | 67.72% | 83.94% | 74.96% |
| | bc2gm | 54.74% | 48.54% | 51.45% |
| | bc4chemd | 40.20% | 58.66% | 47.71% |
| | bc5cdr | 66.27% | 71.95% | 69.00% |
| | ncbi | 68.09% | 61.55% | 64.65% |
| | **Average** | | | **46.87%** |
|-----------------------------------------|---------------------|-----------|--------|----------|
| urchade/gliner_large-v2.1 | ACE 2004 | 37.52% | 25.38% | 30.28% |
| | ACE 2005 | 39.02% | 29.00% | 33.27% |
| | AnatEM | 52.86% | 13.64% | 21.68% |
| | Broad Tweet Corpus | 51.44% | 71.73% | 59.91% |
| | CoNLL 2003 | 54.86% | 64.98% | 59.49% |
| | FabNER | 23.98% | 16.00% | 19.19% |
| | FindVehicle | 47.04% | 57.53% | 51.76% |
| | GENIA_NER | 58.10% | 49.98% | 53.74% |
| | HarveyNER | 16.29% | 21.93% | 18.69% |
| | MultiNERD | 34.09% | 85.43% | 48.74% |
| | Ontonotes | 14.02% | 32.01% | 19.50% |
| | PolyglotNER | 28.53% | 64.92% | 39.64% |
| | TweetNER7 | 38.00% | 34.34% | 36.08% |
| | WikiANN en | 51.69% | 59.92% | 55.50% |
| | WikiNeural | 50.94% | 82.08% | 62.87% |
| | bc2gm | 64.48% | 32.47% | 43.19% |
| | bc4chemd | 48.66% | 57.52% | 52.72% |
| | bc5cdr | 72.19% | 64.27% | 68.00% |
| | ncbi | 69.54% | 52.25% | 59.67% |
| | **Average** | | | **43.89%** |
|-----------------------------------------|---------------------|-----------|--------|----------|
| EmergentMethods/gliner_news_large-v2.1 | ACE 2004 | 43.19% | 18.39% | 25.80% |
| | ACE 2005 | 45.24% | 21.20% | 28.87% |
| | AnatEM | 61.51% | 21.66% | 32.04% |
| | Broad Tweet Corpus | 69.38% | 68.99% | 69.18% |
| | CoNLL 2003 | 61.47% | 52.18% | 56.45% |
| | FabNER | 27.42% | 19.11% | 22.52% |
| | FindVehicle | 46.30% | 62.48% | 53.19% |
| | GENIA_NER | 54.13% | 54.02% | 54.07% |
| | HarveyNER | 15.91% | 15.78% | 15.84% |
| | MultiNERD | 53.73% | 79.07% | 63.98% |
| | Ontonotes | 26.78% | 39.77% | 32.01% |
| | PolyglotNER | 34.28% | 55.87% | 42.49% |
| | TweetNER7 | 48.06% | 28.18% | 35.53% |
| | WikiANN en | 53.66% | 51.34% | 52.47% |
| | WikiNeural | 69.81% | 70.75% | 70.28% |
| | bc2gm | 59.83% | 37.62% | 46.20% |
| | bc4chemd | 46.24% | 69.15% | 55.42% |
| | bc5cdr | 71.94% | 70.37% | 71.15% |
| | ncbi | 70.17% | 61.44% | 65.52% |
| | **Average** | | | **47.00%** |
|-----------------|---------------------|-----------|--------|----------|
| numind/NuNER_Zero-span | ACE 2004 | 37.15% | 20.01% | 26.01% |
| | ACE 2005 | 34.93% | 17.87% | 23.64% |
| | AnatEM | 62.78% | 20.19% | 30.55% |
| | Broad Tweet Corpus | 51.75% | 71.76% | 60.13% |
| | CoNLL 2003 | 58.11% | 70.34% | 63.64% |
| | FabNER | 35.56% | 18.17% | 24.05% |
| | FindVehicle | 51.19% | 38.75% | 44.11% |
| | GENIA_NER | 59.98% | 48.49% | 53.63% |
| | HarveyNER | 26.57% | 23.36% | 24.86% |
| | MultiNERD | 50.47% | 87.06% | 63.90% |
| | Ontonotes | 26.65% | 38.68% | 31.56% |
| | PolyglotNER | 31.19% | 68.13% | 42.79% |
| | TweetNER7 | 47.40% | 34.45% | 39.90% |
| | WikiANN en | 55.81% | 60.65% | 58.13% |
| | WikiNeural | 61.93% | 86.89% | 72.31% |
| | bc2gm | 63.75% | 44.22% | 52.22% |
| | bc4chemd | 43.21% | 63.35% | 51.37% |
| | bc5cdr | 66.99% | 72.00% | 69.40% |
| | ncbi | 70.20% | 53.92% | 60.99% |
| | **Average** | | | **47.00%** |
|-----------------------------------------|---------------------|-----------|--------|----------|
| gliner-community/gliner-community-v2.5 | ACE 2004 | | | 29.80% |
| | ACE 2005 | | | 31.90% |
| | AnatEM | | | 26.40% |
| | Broad Tweet Corpus | | | 71.60% |
| | CoNLL 2003 | | | 66.70% |
| | FabNER | | | 27.40% |
| | FindVehicle | | | 49.90% |
| | GENIA_NER | | | 57.20% |
| | HarveyNER | | | 29.40% |
| | MultiNERD | | | 61.40% |
| | Ontonotes | | | 28.10% |
| | PolyglotNER | | | 43.50% |
| | TweetNER7 | | | 42.00% |
| | WikiANN en | | | 56.40% |
| | WikiNeural | | | 75.70% |
| | bc2gm | | | 47.80% |
| | bc4chemd | | | 50.00% |
| | bc5cdr | | | 71.10% |
| | ncbi | | | 65.00% |
| | **Average** | | | **49.00%** |
|-----------------------------------------|---------------------|-----------|--------|----------|
## Other available models
| Release | Model Name | # of Parameters | Language | License |
| - | - | - | - | - |
| v0 | [urchade/gliner_base](https://huggingface.co/urchade/gliner_base)<br>[urchade/gliner_multi](https://huggingface.co/urchade/gliner_multi) | 209M<br>209M | English<br>Multilingual | cc-by-nc-4.0 |
| v1 | [urchade/gliner_small-v1](https://huggingface.co/urchade/gliner_small-v1)<br>[urchade/gliner_medium-v1](https://huggingface.co/urchade/gliner_medium-v1)<br>[urchade/gliner_large-v1](https://huggingface.co/urchade/gliner_large-v1) | 166M<br>209M<br>459M | English <br> English <br> English | cc-by-nc-4.0 |
| v2 | [urchade/gliner_small-v2](https://huggingface.co/urchade/gliner_small-v2)<br>[urchade/gliner_medium-v2](https://huggingface.co/urchade/gliner_medium-v2)<br>[urchade/gliner_large-v2](https://huggingface.co/urchade/gliner_large-v2) | 166M<br>209M<br>459M | English <br> English <br> English | apache-2.0 |
| v2.1 | [urchade/gliner_small-v2.1](https://huggingface.co/urchade/gliner_small-v2.1)<br>[urchade/gliner_medium-v2.1](https://huggingface.co/urchade/gliner_medium-v2.1)<br>[urchade/gliner_large-v2.1](https://huggingface.co/urchade/gliner_large-v2.1) <br>[urchade/gliner_multi-v2.1](https://huggingface.co/urchade/gliner_multi-v2.1) | 166M<br>209M<br>459M<br>209M | English <br> English <br> English <br> Multilingual | apache-2.0 |
## Model Authors
The model authors are:
* [Urchade Zaratiana](https://huggingface.co/urchade)
* [Ihor Stepanov](https://huggingface.co/Ihor)
* Nadi Tomeh
* Pierre Holat
* Thierry Charnois
## Citation
```bibtex
@misc{zaratiana2023gliner,
title={GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer},
author={Urchade Zaratiana and Nadi Tomeh and Pierre Holat and Thierry Charnois},
year={2023},
eprint={2311.08526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"NAMED_ENTITY_RECOGNITION"
] | [
"ANATEM",
"BC5CDR"
] |
chucre/Llama3-OpenBioLLM-70B | chucre | null | [
"pytorch",
"llama",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"heathcare",
"medical",
"clinical",
"med",
"lifescience",
"Pharmaceutical",
"Pharma",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"region:us"
] | 2025-03-05T16:31:14 | 2025-03-06T08:43:00 | 70 | 0 | ---
base_model: meta-llama/Meta-Llama-3-70B-Instruct
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- heathcare
- medical
- clinical
- med
- lifescience
- Pharmaceutical
- Pharma
widget:
- example_title: OpenBioLLM-70B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-70B
results: []
---
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
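For reference, the DPO objective from the paper linked above trains the policy directly on preference pairs, where y_w and y_l are the preferred and dispreferred responses and beta controls the deviation from the reference model:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$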
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Community & Resources
#### 🔥 Your Daily Dose of Medical AI Breakthroughs 🚀
We turn hours of the latest research papers into minutes. Get daily tweets and news on the latest medical AI breakthroughs, dataset releases, and benchmark results – all carefully curated to save you time while keeping you informed.
<div align="center">
<table>
<tr>
<td align="center">
<a href="https://twitter.com/OpenLifeSciAI">
<img src="https://img.shields.io/badge/X-Follow%20%40OpenLifeSciAI-black?style=flat&logo=x" alt="Twitter Follow"/>
<br>
Daily updates on Medical LLMs,<br>datasets & benchmarks
</a>
</td>
<td align="center">
<a href="https://www.linkedin.com/company/openlifesciai/">
<img src="https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin" alt="LinkedIn"/>
<br>
Daily news on Medical LLMs,<br>datasets & benchmarks
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://www.youtube.com/@OpenlifesciAI">
<img src="https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube" alt="YouTube"/>
<br>
Video & audio summaries of<br>latest research
</a>
</td>
<td align="center">
<a href="https://t.co/l5z6y6C4cM">
<img src="https://img.shields.io/badge/Discord-Join-7289DA?style=for-the-badge&logo=discord" alt="Discord"/>
<br>
Connect with researchers &<br>discuss latest developments
</a>
</td>
</tr>
</table>
</div>
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # dispatch the 70B model automatically across available devices
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
    do_sample=False,  # greedy decoding (temperature = 0); transformers rejects do_sample=True with temperature=0.0
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
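As an illustration only, these adapter settings roughly correspond to the following `peft` configuration (the actual Axolotl config is not published here, so this mapping is an assumption):
```python
from peft import LoraConfig

# Assumed peft equivalent of the reported QLoRA adapter settings
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
```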
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | [
"QUESTION_ANSWERING"
] | [
"MEDQA",
"PUBMEDQA"
] |
BAAI/bge-large-zh-noinstruct | BAAI | feature-extraction | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"zh",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-08-02T08:05:03 | 2023-10-12T03:37:57 | 69 | 11 | ---
language:
- zh
license: mit
---
**We recommend switching to the newest [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5), which has a more reasonable similarity distribution and the same method of usage.**
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
More details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search.
And it also can be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
- **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Unlike the embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by other simple models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.
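For illustration, a minimal retrieve-then-re-rank sketch (this assumes the `FlagReranker` class from the same FlagEmbedding package; check the reranker model cards for the exact interface):
```python
from FlagEmbedding import FlagModel, FlagReranker

query = "query_1"
passages = ["样例文档-1", "样例文档-2"]  # in practice, a much larger corpus

# Step 1: coarse retrieval with the bi-encoder embedding model
embedder = FlagModel(
    "BAAI/bge-large-zh-v1.5",
    query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
)
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(passages)
retrieval_scores = (q_emb @ p_emb.T)[0]
candidates = [p for _, p in sorted(zip(retrieval_scores, passages), reverse=True)][:100]

# Step 2: precise re-ranking of the candidates with the cross-encoder
reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)
rerank_scores = reranker.compute_score([[query, p] for p in candidates])  # one score per pair
top3 = [p for _, p in sorted(zip(rerank_scores, candidates), reverse=True)][:3]
print(top3)
```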
All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI.
If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models.
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we finetune the models by contrastive learning with a temperature of 0.01,
the similarity distribution of the current BGE model is about in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved its retrieval ability when not using an instruction.
Using no instruction causes only a slight degradation in retrieval performance compared with using an instruction.
So, for convenience, you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, no instruction needs to be added to the documents/passages.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For an s2p (short query to long passage) retrieval task, use encode_queries(), which automatically adds the instruction to each query
# The corpus in a retrieval task can still use encode() or encode_corpus(), since documents don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
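For example (a minimal sketch; set the variable before the model is constructed):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # restrict encoding to GPU 0
# os.environ["CUDA_VISIBLE_DEVICES"] = ""  # uncomment to force CPU-only encoding

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
```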
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for instructions).
But the instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
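After constructing the embeddings object, you can use the standard LangChain embedding methods (a small usage sketch continuing from the snippet above):
```python
# embed_query prepends the query instruction to the text; embed_documents encodes documents as-is.
query_vector = model.embed_query("样例数据-1")
doc_vectors = model.embed_documents(["样例文档-1", "样例文档-2"])
print(len(query_vector), len(doc_vectors))
```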
#### Using HuggingFace Transformers
With the transformers package, you can use the model as follows: first, pass your input through the transformer model, then take the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
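Since the embeddings above are L2-normalized, cosine similarity reduces to an inner product (continuing from the snippet above):
```python
# Cosine similarity between the two normalized sentence embeddings computed above.
similarity = (sentence_embeddings[0] @ sentence_embeddings[1]).item()
print("Cosine similarity:", similarity)
```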
### Usage for Reranker
Unlike the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
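The scores above are unbounded logits; if you prefer values in [0, 1] (e.g., for thresholding), you can optionally pass them through a sigmoid (a small post-processing sketch, not part of the FlagEmbedding API):
```python
# Optional post-processing: map the raw reranker logits to (0, 1).
import torch
probabilities = torch.sigmoid(torch.tensor(scores))
print(probabilities)
```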
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings, which consists of 31 datasets across 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for the evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
The cross-encoder performs full attention over the input pair,
which is more accurate than the embedding model (i.e., bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation.
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
| [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BEAR"
] |
exa1128/pythia-1000step | exa1128 | text-generation | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:the_pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-05-09T21:10:36 | 2023-05-10T18:25:07 | 68 | 0 | ---
datasets:
- the_pile
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-70M
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
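As a small illustrative sketch (not part of the Pythia tooling), the 154 checkpoint branch names described above can be enumerated as:
```python
# Reconstruct the 154 Pythia checkpoint branch names: step0, 10 log-spaced steps, then every 1000 steps.
log_spaced = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
checkpoints = (["step0"]
               + [f"step{n}" for n in log_spaced]
               + [f"step{n}" for n in range(1000, 144000, 1000)])
assert len(checkpoints) == 154
print(checkpoints[:3], "...", checkpoints[-1])  # ['step0', 'step1', 'step2'] ... 'step143000'
```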
You may also further fine-tune and adapt Pythia-70M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-70M to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-70M.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-03T13:18:18 | 2024-11-03T13:22:23 | 68 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-deduped - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-deduped/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-410m-deduped.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q2_K.gguf) | Q2_K | 0.16GB |
| [pythia-410m-deduped.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q3_K_S.gguf) | Q3_K_S | 0.18GB |
| [pythia-410m-deduped.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q3_K.gguf) | Q3_K | 0.21GB |
| [pythia-410m-deduped.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q3_K_M.gguf) | Q3_K_M | 0.21GB |
| [pythia-410m-deduped.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q3_K_L.gguf) | Q3_K_L | 0.22GB |
| [pythia-410m-deduped.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.IQ4_XS.gguf) | IQ4_XS | 0.22GB |
| [pythia-410m-deduped.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q4_0.gguf) | Q4_0 | 0.23GB |
| [pythia-410m-deduped.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.IQ4_NL.gguf) | IQ4_NL | 0.23GB |
| [pythia-410m-deduped.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q4_K_S.gguf) | Q4_K_S | 0.23GB |
| [pythia-410m-deduped.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q4_K.gguf) | Q4_K | 0.25GB |
| [pythia-410m-deduped.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q4_K_M.gguf) | Q4_K_M | 0.25GB |
| [pythia-410m-deduped.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q4_1.gguf) | Q4_1 | 0.25GB |
| [pythia-410m-deduped.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q5_0.gguf) | Q5_0 | 0.27GB |
| [pythia-410m-deduped.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q5_K_S.gguf) | Q5_K_S | 0.27GB |
| [pythia-410m-deduped.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q5_K.gguf) | Q5_K | 0.28GB |
| [pythia-410m-deduped.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q5_K_M.gguf) | Q5_K_M | 0.28GB |
| [pythia-410m-deduped.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q5_1.gguf) | Q5_1 | 0.29GB |
| [pythia-410m-deduped.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q6_K.gguf) | Q6_K | 0.31GB |
| [pythia-410m-deduped.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf/blob/main/pythia-410m-deduped.Q8_0.gguf) | Q8_0 | 0.4GB |
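A minimal loading sketch, assuming a recent `llama-cpp-python` build with GPT-NeoX GGUF support (the quant file chosen below is one of the entries in the table; any other works the same way):
```python
# Hedged sketch: download one of the GGUF quants above and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/EleutherAI_-_pythia-410m-deduped-gguf",
    filename="pythia-410m-deduped.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path)
out = llm("Hello, I am", max_tokens=32)
print(out["choices"][0]["text"])
```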
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-410M-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-410M-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
P1ayer-1/pythia-1b-deduped-instruct-base | P1ayer-1 | text-generation | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-05-29T20:08:16 | 2023-05-29T20:09:36 | 67 | 0 | ---
datasets:
- EleutherAI/the_pile_deduplicated
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-1B-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-1B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
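As a quick sanity check, the figures quoted above (steps, batch size, checkpoint spacing, and total tokens) are mutually consistent:
```python
# 143,000 steps at 2,097,152 tokens per step, with a checkpoint every 1,000 steps.
tokens_per_step = 2_097_152
assert 143_000 * tokens_per_step == 299_892_736_000  # total training tokens
assert 1_000 * tokens_per_step == 2_097_152_000      # tokens between checkpoints
```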
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
lightontech/SeaLLM3-7B-Chat-AWQ | lightontech | text-generation | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"sea",
"multilingual",
"conversational",
"en",
"zh",
"id",
"vi",
"th",
"ms",
"arxiv:2312.00738",
"arxiv:2306.05179",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | 2024-07-17T00:15:12 | 2024-07-17T00:23:15 | 67 | 1 | ---
language:
- en
- zh
- id
- vi
- th
- ms
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
tags:
- sea
- multilingual
---
# *SeaLLMs-v3* - Large Language Models for Southeast Asia
<h1 style="color: #ff3860">This repository is a modification of SeaLLMs/SeaLLM3-7B-Chat</h1>
<h1>This is a fork of https://huggingface.co/SorawitChok/SeaLLM3-7B-Chat-AWQ</h1>
## SeaLLM3-7B-Chat-AWQ is a 4-bit AWQ-quantized version of SeaLLMs/SeaLLM3-7B-Chat (compatible with vLLM)
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLM3-7B-Chat" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a>
</p>
We introduce **SeaLLMs-v3**, the latest series of the SeaLLMs (Large Language Models for Southeast Asian languages) family. It achieves state-of-the-art performance among models of similar size, excelling across a diverse array of tasks such as world knowledge, mathematical reasoning, translation, and instruction following. At the same time, it was specifically enhanced to be more trustworthy, exhibiting reduced hallucination and providing safe responses, particularly for queries closely related to Southeast Asian culture.
## 🔥 Highlights
- State-of-the-art performance compared to open-source models of similar sizes, evaluated across various dimensions such as human exam questions, instruction-following, mathematics, and translation.
- Significantly enhanced instruction-following capability, especially in multi-turn settings.
- Ensures safety in usage with significantly reduced instances of hallucination and sensitivity to local contexts.
## Uses
SeaLLMs is tailored for handling a wide range of languages spoken in the SEA region, including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese.
This page introduces the SeaLLMs-v3-7B-Chat model, specifically fine-tuned to follow human instructions effectively for task completion, making it directly applicable to your applications.
### Inference with `vllm`
You can run inference with [vllm](https://docs.vllm.ai/en/stable/index.html), a fast and easy-to-use library for LLM inference and serving. To use vLLM, first install the latest version via `pip install vllm`.
```python
from vllm import LLM, SamplingParams
prompts = [
"Who is the president of US?",
"Can you speak Indonesian?"
]
llm = LLM("SorawitChok/SeaLLM3-7B-Chat-AWQ", quantization="AWQ")
sparams = SamplingParams(temperature=0.1, max_tokens=512)
outputs = llm.generate(prompts, sparams)
# print out the model response
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt}\nResponse: {generated_text}\n\n")
```
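The prompts above are sent as plain strings. Since this is a chat model, you will typically want to wrap messages with the tokenizer's chat template first. The following is a minimal sketch (it assumes the repository ships a chat template and that your vLLM build accepts the lowercase `awq` quantization name):
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "SorawitChok/SeaLLM3-7B-Chat-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a chat-formatted prompt from a list of messages.
messages = [{"role": "user", "content": "Can you speak Indonesian?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

llm = LLM(model_id, quantization="awq")
outputs = llm.generate([prompt], SamplingParams(temperature=0.1, max_tokens=512))
print(outputs[0].outputs[0].text)
```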
### Bias, Risks, and Limitations
<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.</p>
</blockquote>
> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
## Evaluation
We conduct our evaluation along two dimensions:
1. **Model Capability**: We assess the model's performance on human exam questions, its ability to follow instructions, its proficiency in mathematics, and its translation accuracy.
2. **Model Trustworthiness**: We evaluate the model's safety and tendency to hallucinate, particularly in the context of Southeast Asia.
### Model Capability
#### Multilingual World Knowledge - M3Exam
[M3Exam](https://arxiv.org/abs/2306.05179) consists of local exam questions collected from each country. It reflects the model's world knowledge (e.g., with language or social science subjects) and reasoning abilities (e.g., with mathematics or natural science subjects).
| Model | en | zh | id | th | vi | avg | avg_sea |
|:-----------------|-----:|------:|-----:|-----:|-----:|------:|----------:|
| Sailor-7B-Chat | 0.66 | 0.652 | 0.475 | 0.462 | 0.513 | 0.552 | 0.483 |
| gemma-7b | 0.732 | 0.519 | 0.475 | 0.46 | 0.594 | 0.556 | 0.510 |
| SeaLLM-7B-v2.5 | 0.758 | 0.581 | 0.499 | 0.502 | 0.622 | 0.592 | 0.541 |
| Qwen2-7B | 0.815 | 0.874 | 0.53 | 0.479 | 0.628 | 0.665 | 0.546 |
| Qwen2-7B-Instruct| 0.809 | 0.88 | 0.558 | 0.555 | 0.624 | 0.685 | 0.579 |
| Sailor-14B | 0.748 | 0.84 | 0.536 | 0.528 | 0.621 | 0.655 | 0.562 |
| Sailor-14B-Chat | 0.749 | 0.843 | 0.553 | 0.566 | 0.637 | 0.67 | 0.585 |
| SeaLLMs-v3-7B | 0.814 | 0.866 | 0.549 | 0.52 | 0.628 | 0.675 | 0.566 |
| SeaLLMs-v3-7B-Chat | 0.809 | 0.874 | 0.558 | 0.569 | 0.649 | 0.692 | 0.592 |
#### Multilingual Instruction-following Capability - SeaBench
SeaBench consists of multi-turn human instructions spanning various task types. It evaluates chat-based models on their ability to follow human instructions in both single and multi-turn settings and assesses their performance across different task types. The dataset and corresponding evaluation code will be released soon!
| model | id<br>turn1 | id<br>turn2 | id<br>avg | th<br>turn1 | th<br>turn2 | th<br>avg | vi<br>turn1 | vi<br>turn2 | vi<br>avg | avg |
|:----------------|------------:|------------:|---------:|------------:|------------:|---------:|------------:|------------:|---------:|------:|
| Qwen2-7B-Instruct| 5.93 | 5.84 | 5.89 | 5.47 | 5.20 | 5.34 | 6.17 | 5.60 | 5.89 | 5.70 |
| SeaLLM-7B-v2.5 | 6.27 | 4.96 | 5.62 | 5.79 | 3.82 | 4.81 | 6.02 | 4.02 | 5.02 | 5.15 |
| Sailor-14B-Chat | 5.26 | 5.53 | 5.40 | 4.62 | 4.36 | 4.49 | 5.31 | 4.74 | 5.03 | 4.97 |
| Sailor-7B-Chat | 4.60 | 4.04 | 4.32 | 3.94 | 3.17 | 3.56 | 4.82 | 3.62 | 4.22 | 4.03 |
| SeaLLMs-v3-7B-Chat | 6.73 | 6.59 | 6.66 | 6.48 | 5.90 | 6.19 | 6.34 | 5.79 | 6.07 | 6.31 |
#### Multilingual Math
We evaluate multilingual math capability using the MGSM dataset. MGSM originally contains only Chinese and Thai test sets, so we use Google Translate to translate the same English questions into the other SEA languages. Note that we follow each country's convention for writing numbers: in Indonesian and Vietnamese, for example, dots are used as thousands separators and commas as decimal separators, the opposite of the English system (a small sketch of this conversion follows the table below).
| MGSM | en | id | ms | th | vi | zh | avg |
|:--------------------------|------:|------:|------:|------:|------:|------:|------:|
| Sailor-7B-Chat | 33.6 | 22.4 | 22.4 | 21.6 | 25.2 | 29.2 | 25.7 |
| Meta-Llama-3-8B-Instruct | 77.6 | 48 | 57.6 | 56 | 46.8 | 58.8 | 57.5 |
| glm-4-9b-chat | 72.8 | 53.6 | 53.6 | 34.8 | 52.4 | 70.8 | 56.3 |
| Qwen1.5-7B-Chat | 64 | 34.4 | 38.4 | 25.2 | 36 | 53.6 | 41.9 |
| Qwen2-7B-instruct | 82 | 66.4 | 62.4 | 58.4 | 64.4 | 76.8 | 68.4 |
| aya-23-8B | 28.8 | 16.4 | 14.4 | 2 | 16 | 12.8 | 15.1 |
| gemma-1.1-7b-it | 58.8 | 32.4 | 34.8 | 31.2 | 39.6 | 35.2 | 38.7 |
| SeaLLM-7B-v2.5 | 79.6 | 69.2 | 70.8 | 61.2 | 66.8 | 62.4 | 68.3 |
| SeaLLMs-v3-7B-Chat | 74.8 | 71.2 | 70.8 | 71.2 | 71.2 | 79.6 | 73.1 |
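As a small illustration of the number-formatting convention mentioned above (a hand-rolled helper for illustration only, not part of the evaluation code), converting an English-formatted number to the Indonesian/Vietnamese style simply swaps the two separators:
```python
def en_to_id_vi(number_str: str) -> str:
    """Swap English separators ("1,234.5") for Indonesian/Vietnamese ones ("1.234,5")."""
    # Use a placeholder so the two replacements do not clobber each other.
    return number_str.replace(",", "\x00").replace(".", ",").replace("\x00", ".")

print(en_to_id_vi("12,345.67"))  # -> 12.345,67
```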
#### Translation
We use the test sets from Flores-200 for evaluation and report the zero-shot chrF scores for translations between every pair of languages. Each row in the table below presents the average results of translating from various source languages into the target languages. The last column displays the overall average results of translating from any language to any other language for each model.
| model | en | id | jv | km | lo | ms | my | ta | th | tl | vi | zh | avg |
|:-----------------------------------------------|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|------:|
|Meta-Llama-3-8B-Instruct | 51.54 | 49.03 | 22.46 | 15.34 | 5.42 | 46.72 | 21.24 | 32.09 | 35.75 | 40.8 | 39.31 | 14.87 | 31.22 |
|Qwen2-7B-Instruct | 50.36 | 47.55 | 29.36 | 19.26 | 11.06 | 42.43 | 19.33 | 20.04 | 36.07 | 37.91 | 39.63 | 22.87 | 31.32 |
|Sailor-7B-Chat | 49.4 | 49.78 | 28.33 | 2.68 | 6.85 | 47.75 | 5.35 | 18.23 | 38.92 | 29 | 41.76 | 20.87 | 28.24 |
|SeaLLM-7B-v2.5 | 55.09 | 53.71 | 18.13 | 18.09 | 15.53 | 51.33 | 19.71 | 26.1 | 40.55 | 45.58 | 44.56 | 24.18 | 34.38 |
|SeaLLMs-v3-7B-Chat | 54.68 | 52.52 | 29.86 | 27.3 | 26.34 | 45.04 | 21.54 | 31.93 | 41.52 | 38.51 | 43.78 | 26.1 | 36.52 |
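For orientation, chrF scores like those in the table above can be computed with the `sacrebleu` package. The snippet below is a minimal sketch with made-up hypothesis/reference pairs, not the actual Flores-200 evaluation pipeline:
```python
from sacrebleu.metrics import CHRF

# Hypothetical model outputs and reference translations for one language pair.
hypotheses = ["Xin chào thế giới", "Tôi thích dịch thuật"]
references = [["Chào thế giới", "Tôi thích việc dịch thuật"]]

chrf = CHRF()
score = chrf.corpus_score(hypotheses, references)
print(score)  # e.g. "chrF2 = 57.1"
```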
### Model Trustworthiness
#### Hallucination
This measures whether a model can refuse to answer questions about non-existent entities. We report the F1 score with refusal as the positive label. Our test set consists of ~1k samples per language; each unanswerable question is generated by GPT-4o, and the ratio of answerable to unanswerable questions is 1:1. We define keywords to automatically detect whether a model-generated response is a refusal (a minimal sketch of this detection appears after the table below).
| Refusal-F1 Scores | en | zh | vi | th | id | avg |
|:---------------------|------:|------:|------:|------:|------:|-------:|
| Qwen1.5-7B-Instruct | 53.85 | 51.70 | 52.85 | 35.5 | 58.4 | 50.46 |
| Qwen2-7B-Instruct | 58.79 | 33.08 | 56.21 | 44.6 | 55.98 | 49.732 |
| SeaLLM-7B-v2.5 | 12.90 | 0.77 | 2.45 | 19.42 | 0.78 | 7.26 |
| Sailor-7B-Chat | 33.49 | 18.82 | 5.19 | 9.68 | 16.42 | 16.72 |
| glm-4-9b-chat | 44.48 | 37.89 | 18.66 | 4.27 | 1.97 | 21.45 |
| aya-23-8B | 6.38 | 0.79 | 2.83 | 1.98 | 14.80 | 5.36 |
| Llama-3-8B-Instruct | 72.08 | 0.00 | 1.23 | 0.80 | 3.91 | 15.60 |
| gemma-1.1-7b-it | 52.39 | 27.74 | 23.96 | 22.97 | 31.72 | 31.76 |
| SeaLLMs-v3-7B-Chat | 71.36 | 78.39 | 77.93 | 61.31 | 68.95 | 71.588 |
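The keyword list itself is not published in this card, so the sketch below uses placeholder keywords purely to illustrate the keyword matching and the refusal-as-positive F1 computation described above:
```python
# Placeholder refusal keywords; the real list used for the table is not released here.
REFUSAL_KEYWORDS = ["i don't know", "does not exist", "no information", "cannot answer"]

def is_refusal(response: str) -> bool:
    """Return True if the response contains any refusal keyword."""
    response = response.lower()
    return any(k in response for k in REFUSAL_KEYWORDS)

def refusal_f1(responses, should_refuse):
    """F1 score with 'refuse' as the positive label."""
    preds = [is_refusal(r) for r in responses]
    tp = sum(p and y for p, y in zip(preds, should_refuse))
    fp = sum(p and not y for p, y in zip(preds, should_refuse))
    fn = sum((not p) and y for p, y in zip(preds, should_refuse))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(refusal_f1(["Sorry, that place does not exist.", "It is in Hanoi."], [True, False]))
```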
#### Safety
The MultiJail dataset consists of harmful prompts in multiple languages. We take the prompts in SEA languages and report the safe rate (the higher, the better).
| Model | en | jv | th | vi | zh | avg |
|:------------------------|-------:|-------:|-------:|-------:|------:|-------:|
| Qwen2-7B-Instruct | 0.8857 | 0.4381 | 0.6381 | 0.7302 | 0.873 | 0.713 |
| Sailor-7B-Chat | 0.7873 | 0.5492 | 0.6222 | 0.6762 | 0.7619 | 0.6794 |
| Meta-Llama-3-8B-Instruct| 0.8825 | 0.2635 | 0.7111 | 0.6984 | 0.7714 | 0.6654 |
| Sailor-14B-Chat | 0.8698 | 0.3048 | 0.5365 | 0.6095 | 0.727 | 0.6095 |
| glm-4-9b-chat | 0.7714 | 0.2127 | 0.3016 | 0.6063 | 0.7492 | 0.52824|
| SeaLLMs-v3-7B-Chat | 0.8889 | 0.6000 | 0.7333 | 0.8381 | 0.927 | 0.7975 |
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT datasets, and who evaluated our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows:
```
@article{damonlp2024seallm3,
author = {Wenxuan Zhang*, Hou Pong Chan*, Yiran Zhao*, Mahani Aljunied*,
Jianyu Wang, Chaoqun Liu, Yue Deng, Zhiqiang Hu, Weiwen Xu,
Yew Ken Chia, Xin Li, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = {2024},
}
```
Corresponding Author: [email protected]
| [
"TRANSLATION"
] | [
"CHIA"
] |
mav23/gte-Qwen2-7B-instruct-GGUF | mav23 | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"arxiv:2308.03281",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-10-10T23:23:46 | 2024-10-11T00:14:54 | 67 | 1 | ---
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
## gte-Qwen2-7B-instruct
**gte-Qwen2-7B-instruct** is the latest model in the gte (General Text Embedding) model family that ranks **No.1** in both English and Chinese evaluations on the Massive Text Embedding Benchmark [MTEB benchmark](https://huggingface.co/spaces/mteb/leaderboard) (as of June 16, 2024).
Recently, the [**Qwen team**](https://huggingface.co/Qwen) released the Qwen2 series models, and we have trained the **gte-Qwen2-7B-instruct** model based on the [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) LLM model. Compared to the [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) model, the **gte-Qwen2-7B-instruct** model uses the same training data and training strategies during the finetuning stage, with the only difference being the upgraded base model to Qwen2-7B. Considering the improvements in the Qwen2 series models compared to the Qwen1.5 series, we can also expect consistent performance enhancements in the embedding models.
The model incorporates several key advancements:
- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied solely on the query side for streamlined efficiency.
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.
## Model Information
- Model Size: 7B
- Embedding Dimension: 3584
- Max Input Tokens: 32k
## Requirements
```
transformers>=4.39.2
flash_attn>=2.5.6
```
## Usage
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Otherwise, you can use `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to pass a custom prompt of your choice.
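For example, a minimal sketch of passing a custom retrieval instruction this way (the instruction wording below is illustrative, not something the model card prescribes):

```python
# Reuses `model`, `queries`, and `documents` from the snippet above.
# The task description is an assumption: any instruction phrased as
# "Instruct: ...\nQuery: " can be substituted.
task = "Given a web search query, retrieve relevant passages that answer the query"
query_embeddings = model.encode(queries, prompt=f"Instruct: {task}\nQuery: ")
document_embeddings = model.encode(documents)  # documents need no instruction
print((query_embeddings @ document_embeddings.T) * 100)
```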
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Evaluation
### MTEB & C-MTEB
You can use [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following results of **gte-Qwen2-7B-instruct** on MTEB (English) / C-MTEB (Chinese):
| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|:----:|:---------:|:----------:|:----------:|:----------:|
| [bge-base-en-1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - | - | - |
| [bge-large-en-1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - | - | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - | - | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - |
| [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - |
| [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - |
| [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** |
| [gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | 67.16 | 67.65 | 66.60 | 64.04 |
### GTE Models
The gte series has consistently released two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).
| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|:-------------------------------------------------------------------------------------:|:--------:|:-----: |:---------:|:-------------------------------:|
| [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB |
| [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB |
| [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB |
| [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB |
| [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
| [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
| [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB |
| [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB |
| [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB |
| [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB |
## Cloud API Services
In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, GTE models are also available as commercial API services on Alibaba Cloud.
- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
## Citation
If you find our paper or models helpful, please consider citing:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
medspaner/roberta-es-clinical-trials-temporal-ner | medspaner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-07-22T18:04:39 | 2024-10-01T06:41:57 | 66 | 0 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: Edad ≥ 18 años (en todos los centros), o edad ≥12 y <18 años con peso igual
o superior a 40kg
- text: Estudio realizado en un hospital desde julio de 2010 hasta diciembre de 2011
(18 meses)
- text: Pacientes que hayan recibido bifosfonatos diarios, semanales o mensuales durante
al menos 3 años.
- text: 50 g (40 g la noche anterior y 10 g por la mañana) de L-glutamina
model-index:
- name: roberta-es-clinical-trials-temporal-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-es-clinical-trials-temporal-ner
This named entity recognition model detects temporal expressions (TIMEX) according to the [TimeML scheme](https://en.wikipedia.org/wiki/ISO-TimeML) ([Pustejovsky et al. 2005](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.5610&rep=rep1&type=pdf)), in addition to Age entities:
- Age: e.g. *18 años*
- Date: e.g. *2022*, *26 de noviembre*
- Duration: e.g. *3 horas*
- Frequency: e.g. *semanal*
- Time: e.g. *noche*
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.900 (±0.011)
- Recall: 0.900 (±0.009)
- F1: 0.900 (±0.007)
- Accuracy: 0.996 (±0.001)
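A minimal usage sketch with the `transformers` token-classification pipeline (the loading code below is an assumption about how to run inference, not part of the original training setup; the example sentence comes from the widget examples above):

```python
from transformers import pipeline

# Sketch: run temporal NER and merge word pieces into full entity spans.
temporal_ner = pipeline(
    "token-classification",
    model="medspaner/roberta-es-clinical-trials-temporal-ner",
    aggregation_strategy="simple",
)

text = "Pacientes que hayan recibido bifosfonatos diarios durante al menos 3 años."
for entity in temporal_ner(text):
    print(entity["entity_group"], "->", entity["word"], round(float(entity["score"]), 3))
```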
## Model description
This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/).
It is fine-tuned to conduct temporal named entity recognition on Spanish texts about clinical trials.
The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
If you use this model, please, cite as follows:
```
@article{campillosetal2024,
title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
year={2024},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1,200 texts about clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trial announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please, cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: average of 14 epochs (±2.24)
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.900 (±0.011) | 0.900 (±0.009) | 0.900 (±0.007) | 0.996 (±0.001) |
**Results per class (test set; average and standard deviation of 5 rounds with different seeds)**
| Class | Precision | Recall | F1 | Support |
|:---------:|:--------------:|:--------------:|:--------------:|:---------:|
| Age | 0.926 (±0.013) | 0.947 (±0.009) | 0.936 (±0.010) | 372 |
| Date | 0.931 (±0.015) | 0.895 (±0.014) | 0.913 (±0.013) | 412 |
| Duration | 0.918 (±0.014) | 0.893 (±0.019) | 0.905 (±0.010) | 629 |
| Frequency | 0.780 (±0.043) | 0.885 (±0.008) | 0.829 (±0.024) | 73 |
| Time | 0.722 (±0.068) | 0.809 (±0.042) | 0.762 (±0.052) | 113 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] |
RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-01T16:18:38 | 2024-11-01T16:37:49 | 66 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1.4b-deduped-v0 - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1.4b-deduped-v0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-1.4b-deduped-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q2_K.gguf) | Q2_K | 0.53GB |
| [pythia-1.4b-deduped-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q3_K_S.gguf) | Q3_K_S | 0.61GB |
| [pythia-1.4b-deduped-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q3_K.gguf) | Q3_K | 0.71GB |
| [pythia-1.4b-deduped-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q3_K_M.gguf) | Q3_K_M | 0.71GB |
| [pythia-1.4b-deduped-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q3_K_L.gguf) | Q3_K_L | 0.77GB |
| [pythia-1.4b-deduped-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.IQ4_XS.gguf) | IQ4_XS | 0.74GB |
| [pythia-1.4b-deduped-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q4_0.gguf) | Q4_0 | 0.77GB |
| [pythia-1.4b-deduped-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.IQ4_NL.gguf) | IQ4_NL | 0.78GB |
| [pythia-1.4b-deduped-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q4_K_S.gguf) | Q4_K_S | 0.78GB |
| [pythia-1.4b-deduped-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q4_K.gguf) | Q4_K | 0.85GB |
| [pythia-1.4b-deduped-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q4_K_M.gguf) | Q4_K_M | 0.85GB |
| [pythia-1.4b-deduped-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q4_1.gguf) | Q4_1 | 0.85GB |
| [pythia-1.4b-deduped-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q5_0.gguf) | Q5_0 | 0.92GB |
| [pythia-1.4b-deduped-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q5_K_S.gguf) | Q5_K_S | 0.65GB |
| [pythia-1.4b-deduped-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q5_K.gguf) | Q5_K | 0.98GB |
| [pythia-1.4b-deduped-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q5_K_M.gguf) | Q5_K_M | 0.98GB |
| [pythia-1.4b-deduped-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q5_1.gguf) | Q5_1 | 1.0GB |
| [pythia-1.4b-deduped-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q6_K.gguf) | Q6_K | 1.08GB |
| [pythia-1.4b-deduped-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1.4b-deduped-v0-gguf/blob/main/pythia-1.4b-deduped-v0.Q8_0.gguf) | Q8_0 | 1.4GB |
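A minimal sketch of loading one of these quantized files with `llama-cpp-python` (the file name, context size, and prompt are assumptions; substitute any of the GGUF files listed above):

```python
from llama_cpp import Llama

# Sketch: load a locally downloaded quant (Q4_K_M shown as an example)
# and generate a short completion.
llm = Llama(model_path="pythia-1.4b-deduped-v0.Q4_K_M.gguf", n_ctx=2048)
output = llm("Hello, I am", max_tokens=32)
print(output["choices"][0]["text"])
```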
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1.4B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1.4B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-1.4B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
domenicrosati/opus-mt-en-es-scielo | domenicrosati | translation | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:scielo",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-07-15T15:25:52 | 2022-07-18T20:09:57 | 65 | 2 | ---
datasets:
- scielo
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: opus-mt-en-es-scielo
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: scielo
type: scielo
args: en-es
metrics:
- type: bleu
value: 41.53733801247958
name: Bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-es-scielo
This model is a fine-tuned version of [domenicrosati/opus-mt-en-es-scielo](https://huggingface.co/domenicrosati/opus-mt-en-es-scielo) on the scielo dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2189
- Bleu: 41.5373
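A minimal usage sketch with the `transformers` translation pipeline (the input sentence is illustrative; loading the model this way is an assumption, not part of the training description below):

```python
from transformers import pipeline

# Sketch: English-to-Spanish translation with the fine-tuned Marian model.
translator = pipeline("translation", model="domenicrosati/opus-mt-en-es-scielo")
result = translator("The patients received daily bisphosphonates for at least three years.")
print(result[0]["translation_text"])
```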
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.0943 | 1.0 | 10001 | 1.2189 | 41.5373 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| [
"TRANSLATION"
] | [
"SCIELO"
] |
djuna/jina-embeddings-v2-base-en-Q5_K_M-GGUF | djuna | feature-extraction | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:allenai/c4",
"base_model:jinaai/jina-embeddings-v2-base-en",
"base_model:quantized:jinaai/jina-embeddings-v2-base-en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | 2024-07-28T02:15:12 | 2024-07-28T02:15:15 | 65 | 2 | ---
base_model: jinaai/jina-embeddings-v2-base-en
datasets:
- allenai/c4
language: en
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- llama-cpp
- gguf-my-repo
inference: false
model-index:
- name: jina-embedding-b-en-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.73134328358209
- type: ap
value: 37.765427081831035
- type: f1
value: 68.79367444339518
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 88.544275
- type: ap
value: 84.61328675662887
- type: f1
value: 88.51879035862375
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.263999999999996
- type: f1
value: 43.778759656699435
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.693
- type: map_at_10
value: 35.487
- type: map_at_100
value: 36.862
- type: map_at_1000
value: 36.872
- type: map_at_3
value: 30.049999999999997
- type: map_at_5
value: 32.966
- type: mrr_at_1
value: 21.977
- type: mrr_at_10
value: 35.565999999999995
- type: mrr_at_100
value: 36.948
- type: mrr_at_1000
value: 36.958
- type: mrr_at_3
value: 30.121
- type: mrr_at_5
value: 33.051
- type: ndcg_at_1
value: 21.693
- type: ndcg_at_10
value: 44.181
- type: ndcg_at_100
value: 49.982
- type: ndcg_at_1000
value: 50.233000000000004
- type: ndcg_at_3
value: 32.830999999999996
- type: ndcg_at_5
value: 38.080000000000005
- type: precision_at_1
value: 21.693
- type: precision_at_10
value: 7.248
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 13.632
- type: precision_at_5
value: 10.725
- type: recall_at_1
value: 21.693
- type: recall_at_10
value: 72.475
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 40.896
- type: recall_at_5
value: 53.627
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.39242428696777
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.675626784714
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.247725694904034
- type: mrr
value: 74.91359978894604
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.68003802970496
- type: cos_sim_spearman
value: 81.23438110096286
- type: euclidean_pearson
value: 81.87462986142582
- type: euclidean_spearman
value: 81.23438110096286
- type: manhattan_pearson
value: 81.61162566600755
- type: manhattan_spearman
value: 81.11329400456184
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.01298701298701
- type: f1
value: 83.31690714969382
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.050108150972086
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.15731442819715
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.391999999999996
- type: map_at_10
value: 42.597
- type: map_at_100
value: 44.07
- type: map_at_1000
value: 44.198
- type: map_at_3
value: 38.957
- type: map_at_5
value: 40.961
- type: mrr_at_1
value: 37.196
- type: mrr_at_10
value: 48.152
- type: mrr_at_100
value: 48.928
- type: mrr_at_1000
value: 48.964999999999996
- type: mrr_at_3
value: 45.446
- type: mrr_at_5
value: 47.205999999999996
- type: ndcg_at_1
value: 37.196
- type: ndcg_at_10
value: 49.089
- type: ndcg_at_100
value: 54.471000000000004
- type: ndcg_at_1000
value: 56.385
- type: ndcg_at_3
value: 43.699
- type: ndcg_at_5
value: 46.22
- type: precision_at_1
value: 37.196
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 20.839
- type: precision_at_5
value: 14.936
- type: recall_at_1
value: 31.391999999999996
- type: recall_at_10
value: 61.876
- type: recall_at_100
value: 84.214
- type: recall_at_1000
value: 95.985
- type: recall_at_3
value: 46.6
- type: recall_at_5
value: 53.588
- type: map_at_1
value: 29.083
- type: map_at_10
value: 38.812999999999995
- type: map_at_100
value: 40.053
- type: map_at_1000
value: 40.188
- type: map_at_3
value: 36.111
- type: map_at_5
value: 37.519000000000005
- type: mrr_at_1
value: 36.497
- type: mrr_at_10
value: 44.85
- type: mrr_at_100
value: 45.546
- type: mrr_at_1000
value: 45.593
- type: mrr_at_3
value: 42.686
- type: mrr_at_5
value: 43.909
- type: ndcg_at_1
value: 36.497
- type: ndcg_at_10
value: 44.443
- type: ndcg_at_100
value: 48.979
- type: ndcg_at_1000
value: 51.154999999999994
- type: ndcg_at_3
value: 40.660000000000004
- type: ndcg_at_5
value: 42.193000000000005
- type: precision_at_1
value: 36.497
- type: precision_at_10
value: 8.433
- type: precision_at_100
value: 1.369
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 19.894000000000002
- type: precision_at_5
value: 13.873
- type: recall_at_1
value: 29.083
- type: recall_at_10
value: 54.313
- type: recall_at_100
value: 73.792
- type: recall_at_1000
value: 87.629
- type: recall_at_3
value: 42.257
- type: recall_at_5
value: 47.066
- type: map_at_1
value: 38.556000000000004
- type: map_at_10
value: 50.698
- type: map_at_100
value: 51.705
- type: map_at_1000
value: 51.768
- type: map_at_3
value: 47.848
- type: map_at_5
value: 49.358000000000004
- type: mrr_at_1
value: 43.95
- type: mrr_at_10
value: 54.191
- type: mrr_at_100
value: 54.852999999999994
- type: mrr_at_1000
value: 54.885
- type: mrr_at_3
value: 51.954
- type: mrr_at_5
value: 53.13
- type: ndcg_at_1
value: 43.95
- type: ndcg_at_10
value: 56.516
- type: ndcg_at_100
value: 60.477000000000004
- type: ndcg_at_1000
value: 61.746
- type: ndcg_at_3
value: 51.601
- type: ndcg_at_5
value: 53.795
- type: precision_at_1
value: 43.95
- type: precision_at_10
value: 9.009
- type: precision_at_100
value: 1.189
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.989
- type: precision_at_5
value: 15.473
- type: recall_at_1
value: 38.556000000000004
- type: recall_at_10
value: 70.159
- type: recall_at_100
value: 87.132
- type: recall_at_1000
value: 96.16
- type: recall_at_3
value: 56.906
- type: recall_at_5
value: 62.332
- type: map_at_1
value: 24.238
- type: map_at_10
value: 32.5
- type: map_at_100
value: 33.637
- type: map_at_1000
value: 33.719
- type: map_at_3
value: 30.026999999999997
- type: map_at_5
value: 31.555
- type: mrr_at_1
value: 26.328000000000003
- type: mrr_at_10
value: 34.44
- type: mrr_at_100
value: 35.455999999999996
- type: mrr_at_1000
value: 35.521
- type: mrr_at_3
value: 32.034
- type: mrr_at_5
value: 33.565
- type: ndcg_at_1
value: 26.328000000000003
- type: ndcg_at_10
value: 37.202
- type: ndcg_at_100
value: 42.728
- type: ndcg_at_1000
value: 44.792
- type: ndcg_at_3
value: 32.368
- type: ndcg_at_5
value: 35.008
- type: precision_at_1
value: 26.328000000000003
- type: precision_at_10
value: 5.7059999999999995
- type: precision_at_100
value: 0.8880000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.672
- type: precision_at_5
value: 9.74
- type: recall_at_1
value: 24.238
- type: recall_at_10
value: 49.829
- type: recall_at_100
value: 75.21
- type: recall_at_1000
value: 90.521
- type: recall_at_3
value: 36.867
- type: recall_at_5
value: 43.241
- type: map_at_1
value: 15.378
- type: map_at_10
value: 22.817999999999998
- type: map_at_100
value: 23.977999999999998
- type: map_at_1000
value: 24.108
- type: map_at_3
value: 20.719
- type: map_at_5
value: 21.889
- type: mrr_at_1
value: 19.03
- type: mrr_at_10
value: 27.022000000000002
- type: mrr_at_100
value: 28.011999999999997
- type: mrr_at_1000
value: 28.096
- type: mrr_at_3
value: 24.855
- type: mrr_at_5
value: 26.029999999999998
- type: ndcg_at_1
value: 19.03
- type: ndcg_at_10
value: 27.526
- type: ndcg_at_100
value: 33.040000000000006
- type: ndcg_at_1000
value: 36.187000000000005
- type: ndcg_at_3
value: 23.497
- type: ndcg_at_5
value: 25.334
- type: precision_at_1
value: 19.03
- type: precision_at_10
value: 4.963
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.360000000000001
- type: precision_at_5
value: 8.134
- type: recall_at_1
value: 15.378
- type: recall_at_10
value: 38.061
- type: recall_at_100
value: 61.754
- type: recall_at_1000
value: 84.259
- type: recall_at_3
value: 26.788
- type: recall_at_5
value: 31.326999999999998
- type: map_at_1
value: 27.511999999999997
- type: map_at_10
value: 37.429
- type: map_at_100
value: 38.818000000000005
- type: map_at_1000
value: 38.924
- type: map_at_3
value: 34.625
- type: map_at_5
value: 36.064
- type: mrr_at_1
value: 33.300999999999995
- type: mrr_at_10
value: 43.036
- type: mrr_at_100
value: 43.894
- type: mrr_at_1000
value: 43.936
- type: mrr_at_3
value: 40.825
- type: mrr_at_5
value: 42.028
- type: ndcg_at_1
value: 33.300999999999995
- type: ndcg_at_10
value: 43.229
- type: ndcg_at_100
value: 48.992000000000004
- type: ndcg_at_1000
value: 51.02100000000001
- type: ndcg_at_3
value: 38.794000000000004
- type: ndcg_at_5
value: 40.65
- type: precision_at_1
value: 33.300999999999995
- type: precision_at_10
value: 7.777000000000001
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 18.351
- type: precision_at_5
value: 12.762
- type: recall_at_1
value: 27.511999999999997
- type: recall_at_10
value: 54.788000000000004
- type: recall_at_100
value: 79.105
- type: recall_at_1000
value: 92.49199999999999
- type: recall_at_3
value: 41.924
- type: recall_at_5
value: 47.026
- type: map_at_1
value: 24.117
- type: map_at_10
value: 33.32
- type: map_at_100
value: 34.677
- type: map_at_1000
value: 34.78
- type: map_at_3
value: 30.233999999999998
- type: map_at_5
value: 31.668000000000003
- type: mrr_at_1
value: 29.566
- type: mrr_at_10
value: 38.244
- type: mrr_at_100
value: 39.245000000000005
- type: mrr_at_1000
value: 39.296
- type: mrr_at_3
value: 35.864000000000004
- type: mrr_at_5
value: 36.919999999999995
- type: ndcg_at_1
value: 29.566
- type: ndcg_at_10
value: 39.127
- type: ndcg_at_100
value: 44.989000000000004
- type: ndcg_at_1000
value: 47.189
- type: ndcg_at_3
value: 34.039
- type: ndcg_at_5
value: 35.744
- type: precision_at_1
value: 29.566
- type: precision_at_10
value: 7.385999999999999
- type: precision_at_100
value: 1.204
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.286
- type: precision_at_5
value: 11.484
- type: recall_at_1
value: 24.117
- type: recall_at_10
value: 51.559999999999995
- type: recall_at_100
value: 77.104
- type: recall_at_1000
value: 91.79899999999999
- type: recall_at_3
value: 36.82
- type: recall_at_5
value: 41.453
- type: map_at_1
value: 25.17625
- type: map_at_10
value: 34.063916666666664
- type: map_at_100
value: 35.255500000000005
- type: map_at_1000
value: 35.37275
- type: map_at_3
value: 31.351666666666667
- type: map_at_5
value: 32.80608333333333
- type: mrr_at_1
value: 29.59783333333333
- type: mrr_at_10
value: 38.0925
- type: mrr_at_100
value: 38.957249999999995
- type: mrr_at_1000
value: 39.01608333333333
- type: mrr_at_3
value: 35.77625
- type: mrr_at_5
value: 37.04991666666667
- type: ndcg_at_1
value: 29.59783333333333
- type: ndcg_at_10
value: 39.343666666666664
- type: ndcg_at_100
value: 44.488249999999994
- type: ndcg_at_1000
value: 46.83358333333334
- type: ndcg_at_3
value: 34.69708333333333
- type: ndcg_at_5
value: 36.75075
- type: precision_at_1
value: 29.59783333333333
- type: precision_at_10
value: 6.884083333333332
- type: precision_at_100
value: 1.114
- type: precision_at_1000
value: 0.15108333333333332
- type: precision_at_3
value: 15.965250000000003
- type: precision_at_5
value: 11.246500000000001
- type: recall_at_1
value: 25.17625
- type: recall_at_10
value: 51.015999999999984
- type: recall_at_100
value: 73.60174999999998
- type: recall_at_1000
value: 89.849
- type: recall_at_3
value: 37.88399999999999
- type: recall_at_5
value: 43.24541666666666
- type: map_at_1
value: 24.537
- type: map_at_10
value: 31.081999999999997
- type: map_at_100
value: 32.042
- type: map_at_1000
value: 32.141
- type: map_at_3
value: 29.137
- type: map_at_5
value: 30.079
- type: mrr_at_1
value: 27.454
- type: mrr_at_10
value: 33.694
- type: mrr_at_100
value: 34.579
- type: mrr_at_1000
value: 34.649
- type: mrr_at_3
value: 32.004
- type: mrr_at_5
value: 32.794000000000004
- type: ndcg_at_1
value: 27.454
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.641
- type: ndcg_at_1000
value: 42.105
- type: ndcg_at_3
value: 31.276
- type: ndcg_at_5
value: 32.65
- type: precision_at_1
value: 27.454
- type: precision_at_10
value: 5.337
- type: precision_at_100
value: 0.8250000000000001
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 13.241
- type: precision_at_5
value: 8.895999999999999
- type: recall_at_1
value: 24.537
- type: recall_at_10
value: 44.324999999999996
- type: recall_at_100
value: 65.949
- type: recall_at_1000
value: 84.017
- type: recall_at_3
value: 33.857
- type: recall_at_5
value: 37.316
- type: map_at_1
value: 17.122
- type: map_at_10
value: 24.32
- type: map_at_100
value: 25.338
- type: map_at_1000
value: 25.462
- type: map_at_3
value: 22.064
- type: map_at_5
value: 23.322000000000003
- type: mrr_at_1
value: 20.647
- type: mrr_at_10
value: 27.858
- type: mrr_at_100
value: 28.743999999999996
- type: mrr_at_1000
value: 28.819
- type: mrr_at_3
value: 25.769
- type: mrr_at_5
value: 26.964
- type: ndcg_at_1
value: 20.647
- type: ndcg_at_10
value: 28.849999999999998
- type: ndcg_at_100
value: 33.849000000000004
- type: ndcg_at_1000
value: 36.802
- type: ndcg_at_3
value: 24.799
- type: ndcg_at_5
value: 26.682
- type: precision_at_1
value: 20.647
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.906
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 11.769
- type: precision_at_5
value: 8.486
- type: recall_at_1
value: 17.122
- type: recall_at_10
value: 38.999
- type: recall_at_100
value: 61.467000000000006
- type: recall_at_1000
value: 82.716
- type: recall_at_3
value: 27.601
- type: recall_at_5
value: 32.471
- type: map_at_1
value: 24.396
- type: map_at_10
value: 33.415
- type: map_at_100
value: 34.521
- type: map_at_1000
value: 34.631
- type: map_at_3
value: 30.703999999999997
- type: map_at_5
value: 32.166
- type: mrr_at_1
value: 28.825
- type: mrr_at_10
value: 37.397000000000006
- type: mrr_at_100
value: 38.286
- type: mrr_at_1000
value: 38.346000000000004
- type: mrr_at_3
value: 35.028
- type: mrr_at_5
value: 36.32
- type: ndcg_at_1
value: 28.825
- type: ndcg_at_10
value: 38.656
- type: ndcg_at_100
value: 43.856
- type: ndcg_at_1000
value: 46.31
- type: ndcg_at_3
value: 33.793
- type: ndcg_at_5
value: 35.909
- type: precision_at_1
value: 28.825
- type: precision_at_10
value: 6.567
- type: precision_at_100
value: 1.0330000000000001
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 15.516
- type: precision_at_5
value: 10.914
- type: recall_at_1
value: 24.396
- type: recall_at_10
value: 50.747
- type: recall_at_100
value: 73.477
- type: recall_at_1000
value: 90.801
- type: recall_at_3
value: 37.1
- type: recall_at_5
value: 42.589
- type: map_at_1
value: 25.072
- type: map_at_10
value: 34.307
- type: map_at_100
value: 35.725
- type: map_at_1000
value: 35.943999999999996
- type: map_at_3
value: 30.906
- type: map_at_5
value: 32.818000000000005
- type: mrr_at_1
value: 29.644
- type: mrr_at_10
value: 38.673
- type: mrr_at_100
value: 39.459
- type: mrr_at_1000
value: 39.527
- type: mrr_at_3
value: 35.771
- type: mrr_at_5
value: 37.332
- type: ndcg_at_1
value: 29.644
- type: ndcg_at_10
value: 40.548
- type: ndcg_at_100
value: 45.678999999999995
- type: ndcg_at_1000
value: 48.488
- type: ndcg_at_3
value: 34.887
- type: ndcg_at_5
value: 37.543
- type: precision_at_1
value: 29.644
- type: precision_at_10
value: 7.688000000000001
- type: precision_at_100
value: 1.482
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 16.206
- type: precision_at_5
value: 12.016
- type: recall_at_1
value: 25.072
- type: recall_at_10
value: 53.478
- type: recall_at_100
value: 76.07300000000001
- type: recall_at_1000
value: 93.884
- type: recall_at_3
value: 37.583
- type: recall_at_5
value: 44.464
- type: map_at_1
value: 20.712
- type: map_at_10
value: 27.467999999999996
- type: map_at_100
value: 28.502
- type: map_at_1000
value: 28.610000000000003
- type: map_at_3
value: 24.887999999999998
- type: map_at_5
value: 26.273999999999997
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 29.553
- type: mrr_at_100
value: 30.485
- type: mrr_at_1000
value: 30.56
- type: mrr_at_3
value: 27.078999999999997
- type: mrr_at_5
value: 28.401
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 32.023
- type: ndcg_at_100
value: 37.158
- type: ndcg_at_1000
value: 39.823
- type: ndcg_at_3
value: 26.951999999999998
- type: ndcg_at_5
value: 29.281000000000002
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.213
- type: precision_at_100
value: 0.832
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.244
- type: recall_at_1
value: 20.712
- type: recall_at_10
value: 44.057
- type: recall_at_100
value: 67.944
- type: recall_at_1000
value: 87.925
- type: recall_at_3
value: 30.305
- type: recall_at_5
value: 36.071999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.181999999999999
- type: map_at_10
value: 16.66
- type: map_at_100
value: 18.273
- type: map_at_1000
value: 18.45
- type: map_at_3
value: 14.141
- type: map_at_5
value: 15.455
- type: mrr_at_1
value: 22.15
- type: mrr_at_10
value: 32.062000000000005
- type: mrr_at_100
value: 33.116
- type: mrr_at_1000
value: 33.168
- type: mrr_at_3
value: 28.827
- type: mrr_at_5
value: 30.892999999999997
- type: ndcg_at_1
value: 22.15
- type: ndcg_at_10
value: 23.532
- type: ndcg_at_100
value: 30.358
- type: ndcg_at_1000
value: 33.783
- type: ndcg_at_3
value: 19.222
- type: ndcg_at_5
value: 20.919999999999998
- type: precision_at_1
value: 22.15
- type: precision_at_10
value: 7.185999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 13.941
- type: precision_at_5
value: 10.906
- type: recall_at_1
value: 10.181999999999999
- type: recall_at_10
value: 28.104000000000003
- type: recall_at_100
value: 51.998999999999995
- type: recall_at_1000
value: 71.311
- type: recall_at_3
value: 17.698
- type: recall_at_5
value: 22.262999999999998
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.669
- type: map_at_10
value: 15.552
- type: map_at_100
value: 21.865000000000002
- type: map_at_1000
value: 23.268
- type: map_at_3
value: 11.309
- type: map_at_5
value: 13.084000000000001
- type: mrr_at_1
value: 55.50000000000001
- type: mrr_at_10
value: 66.46600000000001
- type: mrr_at_100
value: 66.944
- type: mrr_at_1000
value: 66.956
- type: mrr_at_3
value: 64.542
- type: mrr_at_5
value: 65.717
- type: ndcg_at_1
value: 44.75
- type: ndcg_at_10
value: 35.049
- type: ndcg_at_100
value: 39.073
- type: ndcg_at_1000
value: 46.208
- type: ndcg_at_3
value: 39.525
- type: ndcg_at_5
value: 37.156
- type: precision_at_1
value: 55.50000000000001
- type: precision_at_10
value: 27.800000000000004
- type: precision_at_100
value: 9.013
- type: precision_at_1000
value: 1.8800000000000001
- type: precision_at_3
value: 42.667
- type: precision_at_5
value: 36.0
- type: recall_at_1
value: 6.669
- type: recall_at_10
value: 21.811
- type: recall_at_100
value: 45.112
- type: recall_at_1000
value: 67.806
- type: recall_at_3
value: 13.373
- type: recall_at_5
value: 16.615
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.769999999999996
- type: f1
value: 42.91448356376592
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.013
- type: map_at_10
value: 66.239
- type: map_at_100
value: 66.62599999999999
- type: map_at_1000
value: 66.644
- type: map_at_3
value: 63.965
- type: map_at_5
value: 65.45400000000001
- type: mrr_at_1
value: 58.221000000000004
- type: mrr_at_10
value: 70.43700000000001
- type: mrr_at_100
value: 70.744
- type: mrr_at_1000
value: 70.75099999999999
- type: mrr_at_3
value: 68.284
- type: mrr_at_5
value: 69.721
- type: ndcg_at_1
value: 58.221000000000004
- type: ndcg_at_10
value: 72.327
- type: ndcg_at_100
value: 73.953
- type: ndcg_at_1000
value: 74.312
- type: ndcg_at_3
value: 68.062
- type: ndcg_at_5
value: 70.56400000000001
- type: precision_at_1
value: 58.221000000000004
- type: precision_at_10
value: 9.521
- type: precision_at_100
value: 1.045
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 27.348
- type: precision_at_5
value: 17.794999999999998
- type: recall_at_1
value: 54.013
- type: recall_at_10
value: 86.957
- type: recall_at_100
value: 93.911
- type: recall_at_1000
value: 96.38
- type: recall_at_3
value: 75.555
- type: recall_at_5
value: 81.671
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.254
- type: map_at_10
value: 33.723
- type: map_at_100
value: 35.574
- type: map_at_1000
value: 35.730000000000004
- type: map_at_3
value: 29.473
- type: map_at_5
value: 31.543
- type: mrr_at_1
value: 41.358
- type: mrr_at_10
value: 49.498
- type: mrr_at_100
value: 50.275999999999996
- type: mrr_at_1000
value: 50.308
- type: mrr_at_3
value: 47.016000000000005
- type: mrr_at_5
value: 48.336
- type: ndcg_at_1
value: 41.358
- type: ndcg_at_10
value: 41.579
- type: ndcg_at_100
value: 48.455
- type: ndcg_at_1000
value: 51.165000000000006
- type: ndcg_at_3
value: 37.681
- type: ndcg_at_5
value: 38.49
- type: precision_at_1
value: 41.358
- type: precision_at_10
value: 11.543000000000001
- type: precision_at_100
value: 1.87
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.743000000000002
- type: precision_at_5
value: 17.994
- type: recall_at_1
value: 21.254
- type: recall_at_10
value: 48.698
- type: recall_at_100
value: 74.588
- type: recall_at_1000
value: 91.00200000000001
- type: recall_at_3
value: 33.939
- type: recall_at_5
value: 39.367000000000004
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.922
- type: map_at_10
value: 52.32599999999999
- type: map_at_100
value: 53.18000000000001
- type: map_at_1000
value: 53.245
- type: map_at_3
value: 49.294
- type: map_at_5
value: 51.202999999999996
- type: mrr_at_1
value: 71.843
- type: mrr_at_10
value: 78.24600000000001
- type: mrr_at_100
value: 78.515
- type: mrr_at_1000
value: 78.527
- type: mrr_at_3
value: 77.17500000000001
- type: mrr_at_5
value: 77.852
- type: ndcg_at_1
value: 71.843
- type: ndcg_at_10
value: 61.379
- type: ndcg_at_100
value: 64.535
- type: ndcg_at_1000
value: 65.888
- type: ndcg_at_3
value: 56.958
- type: ndcg_at_5
value: 59.434
- type: precision_at_1
value: 71.843
- type: precision_at_10
value: 12.686
- type: precision_at_100
value: 1.517
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 35.778
- type: precision_at_5
value: 23.422
- type: recall_at_1
value: 35.922
- type: recall_at_10
value: 63.43
- type: recall_at_100
value: 75.868
- type: recall_at_1000
value: 84.88900000000001
- type: recall_at_3
value: 53.666000000000004
- type: recall_at_5
value: 58.555
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 79.4408
- type: ap
value: 73.52820871620366
- type: f1
value: 79.36240238685001
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.826999999999998
- type: map_at_10
value: 34.04
- type: map_at_100
value: 35.226
- type: map_at_1000
value: 35.275
- type: map_at_3
value: 30.165999999999997
- type: map_at_5
value: 32.318000000000005
- type: mrr_at_1
value: 22.464000000000002
- type: mrr_at_10
value: 34.631
- type: mrr_at_100
value: 35.752
- type: mrr_at_1000
value: 35.795
- type: mrr_at_3
value: 30.798
- type: mrr_at_5
value: 32.946999999999996
- type: ndcg_at_1
value: 22.464000000000002
- type: ndcg_at_10
value: 40.919
- type: ndcg_at_100
value: 46.632
- type: ndcg_at_1000
value: 47.833
- type: ndcg_at_3
value: 32.992
- type: ndcg_at_5
value: 36.834
- type: precision_at_1
value: 22.464000000000002
- type: precision_at_10
value: 6.494
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.021
- type: precision_at_5
value: 10.347000000000001
- type: recall_at_1
value: 21.826999999999998
- type: recall_at_10
value: 62.132
- type: recall_at_100
value: 88.55199999999999
- type: recall_at_1000
value: 97.707
- type: recall_at_3
value: 40.541
- type: recall_at_5
value: 49.739
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.68399452804377
- type: f1
value: 95.25490609832268
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 83.15321477428182
- type: f1
value: 60.35476439087966
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.92669804976462
- type: f1
value: 69.22815107207565
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4855413584398
- type: f1
value: 72.92107516103387
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.412679360205544
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.09211869875204
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.540919056982545
- type: mrr
value: 31.529904607063536
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.745
- type: map_at_10
value: 12.013
- type: map_at_100
value: 15.040000000000001
- type: map_at_1000
value: 16.427
- type: map_at_3
value: 8.841000000000001
- type: map_at_5
value: 10.289
- type: mrr_at_1
value: 45.201
- type: mrr_at_10
value: 53.483999999999995
- type: mrr_at_100
value: 54.20700000000001
- type: mrr_at_1000
value: 54.252
- type: mrr_at_3
value: 51.29
- type: mrr_at_5
value: 52.73
- type: ndcg_at_1
value: 43.808
- type: ndcg_at_10
value: 32.445
- type: ndcg_at_100
value: 30.031000000000002
- type: ndcg_at_1000
value: 39.007
- type: ndcg_at_3
value: 37.204
- type: ndcg_at_5
value: 35.07
- type: precision_at_1
value: 45.201
- type: precision_at_10
value: 23.684
- type: precision_at_100
value: 7.600999999999999
- type: precision_at_1000
value: 2.043
- type: precision_at_3
value: 33.953
- type: precision_at_5
value: 29.412
- type: recall_at_1
value: 5.745
- type: recall_at_10
value: 16.168
- type: recall_at_100
value: 30.875999999999998
- type: recall_at_1000
value: 62.686
- type: recall_at_3
value: 9.75
- type: recall_at_5
value: 12.413
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.828
- type: map_at_10
value: 53.239000000000004
- type: map_at_100
value: 54.035999999999994
- type: map_at_1000
value: 54.067
- type: map_at_3
value: 49.289
- type: map_at_5
value: 51.784
- type: mrr_at_1
value: 42.497
- type: mrr_at_10
value: 55.916999999999994
- type: mrr_at_100
value: 56.495
- type: mrr_at_1000
value: 56.516999999999996
- type: mrr_at_3
value: 52.800000000000004
- type: mrr_at_5
value: 54.722
- type: ndcg_at_1
value: 42.468
- type: ndcg_at_10
value: 60.437
- type: ndcg_at_100
value: 63.731
- type: ndcg_at_1000
value: 64.41799999999999
- type: ndcg_at_3
value: 53.230999999999995
- type: ndcg_at_5
value: 57.26
- type: precision_at_1
value: 42.468
- type: precision_at_10
value: 9.47
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.724999999999998
- type: precision_at_5
value: 16.593
- type: recall_at_1
value: 37.828
- type: recall_at_10
value: 79.538
- type: recall_at_100
value: 93.646
- type: recall_at_1000
value: 98.72999999999999
- type: recall_at_3
value: 61.134
- type: recall_at_5
value: 70.377
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.548
- type: map_at_10
value: 84.466
- type: map_at_100
value: 85.10600000000001
- type: map_at_1000
value: 85.123
- type: map_at_3
value: 81.57600000000001
- type: map_at_5
value: 83.399
- type: mrr_at_1
value: 81.24
- type: mrr_at_10
value: 87.457
- type: mrr_at_100
value: 87.574
- type: mrr_at_1000
value: 87.575
- type: mrr_at_3
value: 86.507
- type: mrr_at_5
value: 87.205
- type: ndcg_at_1
value: 81.25
- type: ndcg_at_10
value: 88.203
- type: ndcg_at_100
value: 89.457
- type: ndcg_at_1000
value: 89.563
- type: ndcg_at_3
value: 85.465
- type: ndcg_at_5
value: 87.007
- type: precision_at_1
value: 81.25
- type: precision_at_10
value: 13.373
- type: precision_at_100
value: 1.5270000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.417
- type: precision_at_5
value: 24.556
- type: recall_at_1
value: 70.548
- type: recall_at_10
value: 95.208
- type: recall_at_100
value: 99.514
- type: recall_at_1000
value: 99.988
- type: recall_at_3
value: 87.214
- type: recall_at_5
value: 91.696
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 53.04822095496839
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.30778476474675
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.692
- type: map_at_10
value: 11.766
- type: map_at_100
value: 13.904
- type: map_at_1000
value: 14.216999999999999
- type: map_at_3
value: 8.245
- type: map_at_5
value: 9.92
- type: mrr_at_1
value: 23.0
- type: mrr_at_10
value: 33.78
- type: mrr_at_100
value: 34.922
- type: mrr_at_1000
value: 34.973
- type: mrr_at_3
value: 30.2
- type: mrr_at_5
value: 32.565
- type: ndcg_at_1
value: 23.0
- type: ndcg_at_10
value: 19.863
- type: ndcg_at_100
value: 28.141
- type: ndcg_at_1000
value: 33.549
- type: ndcg_at_3
value: 18.434
- type: ndcg_at_5
value: 16.384
- type: precision_at_1
value: 23.0
- type: precision_at_10
value: 10.39
- type: precision_at_100
value: 2.235
- type: precision_at_1000
value: 0.35300000000000004
- type: precision_at_3
value: 17.133000000000003
- type: precision_at_5
value: 14.44
- type: recall_at_1
value: 4.692
- type: recall_at_10
value: 21.025
- type: recall_at_100
value: 45.324999999999996
- type: recall_at_1000
value: 71.675
- type: recall_at_3
value: 10.440000000000001
- type: recall_at_5
value: 14.64
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.96178184892842
- type: cos_sim_spearman
value: 79.6487740813199
- type: euclidean_pearson
value: 82.06661161625023
- type: euclidean_spearman
value: 79.64876769031183
- type: manhattan_pearson
value: 82.07061164575131
- type: manhattan_spearman
value: 79.65197039464537
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.15305604100027
- type: cos_sim_spearman
value: 74.27447427941591
- type: euclidean_pearson
value: 80.52737337565307
- type: euclidean_spearman
value: 74.27416077132192
- type: manhattan_pearson
value: 80.53728571140387
- type: manhattan_spearman
value: 74.28853605753457
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.44386080639279
- type: cos_sim_spearman
value: 84.17947648159536
- type: euclidean_pearson
value: 83.34145388129387
- type: euclidean_spearman
value: 84.17947648159536
- type: manhattan_pearson
value: 83.30699061927966
- type: manhattan_spearman
value: 84.18125737380451
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.57392220985612
- type: cos_sim_spearman
value: 78.80745014464101
- type: euclidean_pearson
value: 80.01660371487199
- type: euclidean_spearman
value: 78.80741240102256
- type: manhattan_pearson
value: 79.96810779507953
- type: manhattan_spearman
value: 78.75600400119448
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.85421063026625
- type: cos_sim_spearman
value: 87.55320285299192
- type: euclidean_pearson
value: 86.69750143323517
- type: euclidean_spearman
value: 87.55320284326378
- type: manhattan_pearson
value: 86.63379169960379
- type: manhattan_spearman
value: 87.4815029877984
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.31314130411842
- type: cos_sim_spearman
value: 85.3489588181433
- type: euclidean_pearson
value: 84.13240933463535
- type: euclidean_spearman
value: 85.34902871403281
- type: manhattan_pearson
value: 84.01183086503559
- type: manhattan_spearman
value: 85.19316703166102
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.09979781689536
- type: cos_sim_spearman
value: 88.87813323759015
- type: euclidean_pearson
value: 88.65413031123792
- type: euclidean_spearman
value: 88.87813323759015
- type: manhattan_pearson
value: 88.61818758256024
- type: manhattan_spearman
value: 88.81044100494604
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30693258111531
- type: cos_sim_spearman
value: 62.195516523251946
- type: euclidean_pearson
value: 62.951283701049476
- type: euclidean_spearman
value: 62.195516523251946
- type: manhattan_pearson
value: 63.068322281439535
- type: manhattan_spearman
value: 62.10621171028406
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.27092833763909
- type: cos_sim_spearman
value: 84.84429717949759
- type: euclidean_pearson
value: 84.8516966060792
- type: euclidean_spearman
value: 84.84429717949759
- type: manhattan_pearson
value: 84.82203139242881
- type: manhattan_spearman
value: 84.8358503952945
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.10290863981409
- type: mrr
value: 95.31168450286097
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.161
- type: map_at_10
value: 62.138000000000005
- type: map_at_100
value: 62.769
- type: map_at_1000
value: 62.812
- type: map_at_3
value: 59.111000000000004
- type: map_at_5
value: 60.995999999999995
- type: mrr_at_1
value: 55.333
- type: mrr_at_10
value: 63.504000000000005
- type: mrr_at_100
value: 64.036
- type: mrr_at_1000
value: 64.08
- type: mrr_at_3
value: 61.278
- type: mrr_at_5
value: 62.778
- type: ndcg_at_1
value: 55.333
- type: ndcg_at_10
value: 66.678
- type: ndcg_at_100
value: 69.415
- type: ndcg_at_1000
value: 70.453
- type: ndcg_at_3
value: 61.755
- type: ndcg_at_5
value: 64.546
- type: precision_at_1
value: 55.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 24.221999999999998
- type: precision_at_5
value: 16.333000000000002
- type: recall_at_1
value: 52.161
- type: recall_at_10
value: 79.156
- type: recall_at_100
value: 91.333
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 66.43299999999999
- type: recall_at_5
value: 73.272
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81287128712871
- type: cos_sim_ap
value: 95.30034785910676
- type: cos_sim_f1
value: 90.28629856850716
- type: cos_sim_precision
value: 92.36401673640168
- type: cos_sim_recall
value: 88.3
- type: dot_accuracy
value: 99.81287128712871
- type: dot_ap
value: 95.30034785910676
- type: dot_f1
value: 90.28629856850716
- type: dot_precision
value: 92.36401673640168
- type: dot_recall
value: 88.3
- type: euclidean_accuracy
value: 99.81287128712871
- type: euclidean_ap
value: 95.30034785910676
- type: euclidean_f1
value: 90.28629856850716
- type: euclidean_precision
value: 92.36401673640168
- type: euclidean_recall
value: 88.3
- type: manhattan_accuracy
value: 99.80990099009901
- type: manhattan_ap
value: 95.26880751950654
- type: manhattan_f1
value: 90.22177419354838
- type: manhattan_precision
value: 90.95528455284553
- type: manhattan_recall
value: 89.5
- type: max_accuracy
value: 99.81287128712871
- type: max_ap
value: 95.30034785910676
- type: max_f1
value: 90.28629856850716
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.518662504351184
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.96168178378587
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.04862593471896
- type: mrr
value: 52.97238402936932
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.092545236479946
- type: cos_sim_spearman
value: 31.599851000175498
- type: dot_pearson
value: 30.092542723901676
- type: dot_spearman
value: 31.599851000175498
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.189
- type: map_at_10
value: 1.662
- type: map_at_100
value: 9.384
- type: map_at_1000
value: 22.669
- type: map_at_3
value: 0.5559999999999999
- type: map_at_5
value: 0.9039999999999999
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 81.01899999999999
- type: mrr_at_100
value: 81.01899999999999
- type: mrr_at_1000
value: 81.01899999999999
- type: mrr_at_3
value: 79.333
- type: mrr_at_5
value: 80.733
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 65.913
- type: ndcg_at_100
value: 51.895
- type: ndcg_at_1000
value: 46.967
- type: ndcg_at_3
value: 65.49199999999999
- type: ndcg_at_5
value: 66.69699999999999
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 71.6
- type: precision_at_100
value: 53.66
- type: precision_at_1000
value: 21.124000000000002
- type: precision_at_3
value: 72.667
- type: precision_at_5
value: 74.0
- type: recall_at_1
value: 0.189
- type: recall_at_10
value: 1.913
- type: recall_at_100
value: 12.601999999999999
- type: recall_at_1000
value: 44.296
- type: recall_at_3
value: 0.605
- type: recall_at_5
value: 1.018
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.701
- type: map_at_10
value: 10.445
- type: map_at_100
value: 17.324
- type: map_at_1000
value: 19.161
- type: map_at_3
value: 5.497
- type: map_at_5
value: 7.278
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 45.534
- type: mrr_at_100
value: 45.792
- type: mrr_at_1000
value: 45.806999999999995
- type: mrr_at_3
value: 37.755
- type: mrr_at_5
value: 43.469
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 26.235000000000003
- type: ndcg_at_100
value: 39.17
- type: ndcg_at_1000
value: 51.038
- type: ndcg_at_3
value: 23.625
- type: ndcg_at_5
value: 24.338
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 24.285999999999998
- type: precision_at_100
value: 8.224
- type: precision_at_1000
value: 1.6179999999999999
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 24.898
- type: recall_at_1
value: 2.701
- type: recall_at_10
value: 17.997
- type: recall_at_100
value: 51.766999999999996
- type: recall_at_1000
value: 87.863
- type: recall_at_3
value: 6.295000000000001
- type: recall_at_5
value: 9.993
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 73.3474
- type: ap
value: 15.393431414459924
- type: f1
value: 56.466681887882416
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.062818336163
- type: f1
value: 62.11230840463252
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 42.464892820845115
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.15962329379508
- type: cos_sim_ap
value: 74.73674057919256
- type: cos_sim_f1
value: 68.81245642574947
- type: cos_sim_precision
value: 61.48255813953488
- type: cos_sim_recall
value: 78.12664907651715
- type: dot_accuracy
value: 86.15962329379508
- type: dot_ap
value: 74.7367634988281
- type: dot_f1
value: 68.81245642574947
- type: dot_precision
value: 61.48255813953488
- type: dot_recall
value: 78.12664907651715
- type: euclidean_accuracy
value: 86.15962329379508
- type: euclidean_ap
value: 74.7367761466634
- type: euclidean_f1
value: 68.81245642574947
- type: euclidean_precision
value: 61.48255813953488
- type: euclidean_recall
value: 78.12664907651715
- type: manhattan_accuracy
value: 86.21326816474935
- type: manhattan_ap
value: 74.64416473733951
- type: manhattan_f1
value: 68.80924855491331
- type: manhattan_precision
value: 61.23456790123457
- type: manhattan_recall
value: 78.52242744063325
- type: max_accuracy
value: 86.21326816474935
- type: max_ap
value: 74.7367761466634
- type: max_f1
value: 68.81245642574947
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.97620988085536
- type: cos_sim_ap
value: 86.08680845745758
- type: cos_sim_f1
value: 78.02793637114438
- type: cos_sim_precision
value: 73.11082699683736
- type: cos_sim_recall
value: 83.65414228518632
- type: dot_accuracy
value: 88.97620988085536
- type: dot_ap
value: 86.08681149437946
- type: dot_f1
value: 78.02793637114438
- type: dot_precision
value: 73.11082699683736
- type: dot_recall
value: 83.65414228518632
- type: euclidean_accuracy
value: 88.97620988085536
- type: euclidean_ap
value: 86.08681215460771
- type: euclidean_f1
value: 78.02793637114438
- type: euclidean_precision
value: 73.11082699683736
- type: euclidean_recall
value: 83.65414228518632
- type: manhattan_accuracy
value: 88.88888888888889
- type: manhattan_ap
value: 86.02916327562438
- type: manhattan_f1
value: 78.02063045516843
- type: manhattan_precision
value: 73.38851947346994
- type: manhattan_recall
value: 83.2768709578072
- type: max_accuracy
value: 88.97620988085536
- type: max_ap
value: 86.08681215460771
- type: max_f1
value: 78.02793637114438
---
# djuna/jina-embeddings-v2-base-en-Q5_K_M-GGUF
This model was converted to GGUF format from [`jinaai/jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo djuna/jina-embeddings-v2-base-en-Q5_K_M-GGUF --hf-file jina-embeddings-v2-base-en-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo djuna/jina-embeddings-v2-base-en-Q5_K_M-GGUF --hf-file jina-embeddings-v2-base-en-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo djuna/jina-embeddings-v2-base-en-Q5_K_M-GGUF --hf-file jina-embeddings-v2-base-en-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo djuna/jina-embeddings-v2-base-en-Q5_K_M-GGUF --hf-file jina-embeddings-v2-base-en-q5_k_m.gguf -c 2048
```
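Since the underlying model is an embedding model rather than a chat model, the text-generation commands above mainly serve as a smoke test. A minimal sketch for computing embeddings is shown below; it assumes a llama.cpp build that ships the `llama-embedding` binary, and the exact binary name and flags may vary across llama.cpp versions.
```bash
# Fetch the quantized file from this repo (filename as listed above)
huggingface-cli download djuna/jina-embeddings-v2-base-en-Q5_K_M-GGUF \
  jina-embeddings-v2-base-en-q5_k_m.gguf --local-dir .

# Embed a single sentence (binary name and flags assumed; check your llama.cpp version)
./llama-embedding -m jina-embeddings-v2-base-en-q5_k_m.gguf \
  -p "A quick example sentence to embed."
```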
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
mav23/pythia-6.9b-GGUF | mav23 | null | [
"gguf",
"pytorch",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/pile",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-11-10T07:31:57 | 2024-11-10T08:22:00 | 65 | 0 | ---
datasets:
- EleutherAI/pile
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-6.9B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-6.9B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-6.9B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself a
product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-6.9B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose
or powering commercial chatbots. This means Pythia-6.9B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-6.9B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-6.9B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-6.9B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# `revision` selects the training checkpoint (branch); `step3000` is an early checkpoint
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# The tokenizer is identical across checkpoints of a given model
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825 GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-6.9B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
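As a quick sanity check, the stated token counts follow directly from the step count and batch size:
```bash
# 143,000 steps x 2,097,152 tokens per step = tokens seen during training
echo $((143000 * 2097152))   # 299892736000

# checkpoints every 1,000 steps = tokens between checkpoints
echo $((1000 * 2097152))     # 2097152000
```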
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
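To reproduce a subset of these numbers, an invocation of the harness looks roughly like the sketch below. It assumes a recent `lm-eval` release; task names and CLI flags can differ between harness versions.
```bash
pip install lm-eval

# Evaluate Pythia-6.9B on a few of the tasks plotted below (task names assumed for the installed version)
lm_eval --model hf \
  --model_args pretrained=EleutherAI/pythia-6.9b \
  --tasks lambada_openai,piqa,winogrande,arc_easy,sciq \
  --device cuda:0 \
  --batch_size 8
```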
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were now
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
croissantllm/CroissantLLMBase-GGUF | croissantllm | text-generation | [
"gguf",
"legal",
"code",
"text-generation-inference",
"art",
"text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"dataset:croissantllm/croissant_dataset",
"arxiv:2402.00786",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2024-02-08T09:55:52 | 2024-04-29T12:13:23 | 64 | 4 | ---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
- croissantllm/croissant_dataset
language:
- fr
- en
license: mit
pipeline_tag: text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base GGUF (190k steps, Final version)
This model is part of the CroissantLLM initiative and corresponds to the checkpoint after 190k steps (2.99T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
https://arxiv.org/abs/2402.00786
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bibtex
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Usage
This model is a base model: it has not been fine-tuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/CroissantLLMBase"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.\nHe is heading to the market. -> Il va au marché.\nWe are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.3)
print(tokenizer.decode(tokens[0]))
# remove bos token
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
``` | [
"TRANSLATION"
] | [
"CRAFT"
] |
samchain/econo-sentence-v2 | samchain | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:45044",
"loss:CoSENTLoss",
"economics",
"finance",
"en",
"dataset:samchain/econo-pairs-v2",
"arxiv:1908.10084",
"base_model:samchain/EconoBert",
"base_model:finetune:samchain/EconoBert",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-19T09:43:20 | 2025-02-19T10:21:34 | 64 | 2 | ---
base_model: samchain/EconoBert
datasets:
- samchain/econo-pairs-v2
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:45044
- loss:CoSENTLoss
- economics
- finance
widget:
- source_sentence: a consumer protection point of view, including through remuneration
arrangements. failure to manage conduct risks can expose a financial institution
to a variety of other risks which, if not managed properly, can threaten its solvency
and sustainability. the regulatory regime for market conduct therefore provides
a framework for the identification and management of conduct risk as a complementary
framework to prudential regulation. this is part of the motivation for rigorous
coordination and cooperation arrangements between the pa and the fsca envisaged
by the financial sector regulation bill aimed at ensuring that all risks are holistically
managed within an overarching financial stability policy framework. strengthening
conduct in wholesale otc markets another initiative has been the development of
a code of conduct for south african wholesale over - the - counter ( otc ) financial
markets in a collaborative effort between regulators and key market participants.
i will return to this aspect later. against the background of various investigations
undertaken in many foreign jurisdictions in relation to foreign exchange market
manipulation, the sarb and the fsb launched a review bis central bankers ’ speeches
of the foreign exchange trading operations of south african authorised dealers
in october 2014. as the rand is a globally traded currency, the aim of the review
was to establish whether there may have been any spillover into our markets in
relation to any misconduct or malpractice. it is important to note that, unlike
in other jurisdictions, the south african review was not informed by whistle -
blowing or any allegations – or indeed concrete evidence – of any misconduct.
we had no evidence of widespread malpractice in the south african foreign exchange
market but felt that, given the broad - based nature of investigations in other
jurisdictions ( which also involved trading in emerging market currencies ), it
would be prudent to obtain comfort that our foreign exchange trading practices
were in line with best practice. it was therefore a proactive step on the part
of south african regulators. the foreign exchange review committee established
for this purpose – chaired by former senior deputy governor james cross, who is
with us today – released its report in october 2015. the committee reported that
it had found no evidence of manipulation or serious misconduct in the domestic
foreign exchange market during the period covered by the review, but that there
was scope for improvement in relation to governance and conduct. the committee
also recommended that legislation be enhanced to give market conduct regulators
wider powers to strengthen enforcement. south african regulators are in conversation
with each other on how best to give effect to the implementation of the recommendations.
there was also the recommendation
sentences:
- 'luigi federico signorini : g20 sustainable finance working group private sector
roundtable welcome address by mr luigi federico signorini, senior deputy governor
of the bank of italy, at the g20 sustainable finance working group private sector
roundtable, online event, 17 may 2021. * * * welcome, and a good day to you all.
i am happy to open the private sector roundtable, an event promoted by the g20
presidency and by the chinese and american co - chairs of the sustainable finance
working group. the roundtable will focus on the role of finance in helping fight
climate change and promoting lowcarbon transition. the g20 finance ministers and
central bank governors recently recognised the need to ‘ shape the current economic
recovery by investing in innovative technologies and promoting just transitions
toward more sustainable economies and societies ’. low - carbon transition is
urgent and must be accelerated : the later we act, the greater the costs. it requires
an unprecedented and unremitting effort. while quantitative estimates vary, the
investments needed for transition are certainly huge ; they need to be sustained
for a long time. governments have a central role in that they need to point the
way by adopting an appropriate policy framework. a clear and credible path for
government regulatory and fiscal action is also a prerequisite for efficient choices
on the part of private finance. indeed, while many governments will directly invest
their own money in many countries and mdbs will play their part, it is likely
that the private sector will be called upon to finance most transition investment.
there will be no transition without a general awareness of the need for it and
a willingness, even a desire, to finance it. there are in fact quite a few encouraging
signs. since last year, we have seen an explosion of ‘ net - zero commitments
’ in the private sector — though such commitments ( i am told ) are still confined
to one sixth of publicly listed companies globally. at the same time, the appetite
of ultimate investors and asset managers for ‘ green ’ investment is growing fast.
i am sure many in the audience will have a clear perception of this fact. however,
the path is still fraught with difficulties. on the market side, while sustainable
finance is increasingly popular, it suffers from a lack of clear definitions and
standards. ‘ greenwashing ’ is a danger ; good data, an agreed taxonomy, and adequate
company disclosure are necessary. global consistency is important, as fragmentation
of standards across jurisdictions is confusing for investors and costly for companies.
standards are currently being drafted'
- later, by more. so we raised interest rates through the second half of last year
- and again in june - trying, as best we could through our tactics, to minimise
any further unwanted upward pressure on sterling. but things have now clearly
moved on. the outlook for the world economy deteriorated further through the summer
under the impact of a series of new shocks. japan, the world ’ s second largest
economy, slipped further into recession. russia - which had only weeks earlier
embarked on an imf program - saw the collapse of the rouble and default on its
debt. and acute nervousness spread through many of the world ’ s financial markets.
although there has been some improvement in sentiment over the past month or two,
and although the us and european economies continue to expand, the likelihood
remains that world economic growth will be significantly slower than had been
expected earlier in the summer. slower growth of world activity is bound to prolong
the restraining external effect on growth and inflation in the uk, even though
the exchange rate has now started to weaken. at the same time there are also now
clearer signs of overall slowdown in our own economy. the evidence for this is
less obvious in the backwards - looking economic and monetary data than it is
in the forward - looking surveys, but even so the data suggest that we are beginning
to see an easing of pressure, including an easing of pressure in the labour market.
and the surveys themselves now point to a slowdown in service sector growth, including
retail distribution, as well as a sharper decline in manufacturing output. this
prospect is consistent with the reports which we receive directly from the bank
’ s network of regional agents and their 7000 - odd industrial and commercial
contacts around the country. of course we pay very careful attention to this forward
- looking evidence of developments in the economy alongside the data, and, like
others, we have revised down our forecasts for output growth and inflation. and
we have eased monetary policy quite sharply in the past two months, in the light
of that evidence. our current best guess - published in last week ’ s inflation
report is that, after the interest rate cuts, the growth of overall output next
year will be around 1 %, picking up through the millennium to around trend in
the second half of the year 2000. meanwhile, we expect underlying inflation to
remain close to the target rate of 21 / 2 % - though perhaps a little above that
rate during the course of next year. now no - one likes to see the
- 'daniel mminele : conduct and culture in the banking and financial sectors opening
address by mr daniel mminele, deputy governor of the south african reserve bank,
at the g - 30 forum on banking conduct and culture, pretoria, 18 february 2016.
* * * governor kganyago ( sarb ), governor sithole ( central bank of swaziland
), deputy governor mlambo ( reserve bank of zimbabwe ), deputy governors, groepe
and naidoo ( sarb ), second deputy governor sullivan ( central bank of seychelles
), sir david walker ( vice chair of the group of thirty steering committee ),
dr stuart mackintosh ( executive director of the g30 ), ms maria ramos ( chief
executive officer of barclays africa ), the leadership of banks and other financial
institutions, panel members, and esteemed delegates. it is a privilege and an
honour for me to welcome you, on behalf of south african reserve bank ( sarb ),
to this forum on banking conduct and culture, which we are co - hosting with the
g30 and barclays africa. the g - 30 has, over the years, played a significant
role in bringing together members of the banking, financial and regulatory community
to discuss issues of common concern and examine the choices available to market
practitioners and policymakers. given the enormous trust deficit that has built
up since the global financial crisis, the topic of conduct and culture is of great
importance and highly relevant to the global banking and financial sector. bankers
have always had a delicate relationship with the societies they serve. it would
appear that, at any point in time, it is almost a national sport across the globe
to take a swipe at bankers. mark twain famously said : “ a banker is a fellow
who lends you his umbrella when the sun is shining and wants it back the minute
it begins to rain. ” and j m keynes asked : “ how long will it take to pay city
men so entirely out of proportion to what other servants of society commonly received
for performing social services not less useful or difficult? ” we would be terribly
misguided to treat the current wave of discontent and deep mistrust as just another
wave that will eventually subside. the most recent global financial crisis, from
which almost nine years later we are still struggling to recover, shook the very
foundations of our financial system and almost caused its total meltdown. in a
nutshell, the crisis was about failures in conduct, culture, and supervisory practices.
the consequences'
- source_sentence: of our central bank distribution scheme have grown in number to
embrace the physical quality of notes in circulation and the denominational mix.
the last of these – denominational mix – poses the biggest challenge, but in the
uk i believe we now have evidence to support the business case for atm and retailer
dispense of £5s. and, finally, i am sometimes asked why this matters to the bank
of england? after all, i can assure you that in the current financial conditions
we are not short of difficult challenges. the answer is simple. we should not
forget that our job is to ensure that confidence in our currency is maintained
– and crucial to that is satisfying the public ’ s demand for our notes.
sentences:
- '##factory ". but the two clouds are moving towards us. so let us examine them.
the outlook for the world economy has deteriorated markedly in recent months as
a result of the sudden slowdown in the united states and signs of renewed stagnation
in japan. the speed of the deterioration was a surprise, but not the fact of a
slowdown. last year saw the fastest growth rate of the world economy for twelve
years. growth in the us reached an annual rate of 6 % in the second quarter of
last year, well above even optimistic estimates of sustainable growth rates. a
slowdown was not only inevitable ; it was desirable. the main surprise in the
us was the sharp and sudden break in both business and consumer confidence. us
manufacturers''optimism is now almost as low as it was in 1991 - the last time
the us economy experienced a recession. consumer confidence also fell sharply
in january, driven by marked pessimism over the short - term future. quite why
this break in confidence should have been so rapid is not easy to understand.
and its origins will largely determine the nature of the us downturn. on the one
hand, greater use of information technology to economise on inventories may have
led to shorter lags between changes in final demand and changes in output. if
so, then it is possible that the speed of the downturn will be matched by the
speed of the recovery, leading to a short - lived episode of output growth close
to zero as the result of an inventory correction. on the other hand, the slowdown
could be much more protracted if the imbalances in the us economy which have built
up in recent years start to unwind, leading to a reduction in spending as both
households and businesses seek to reduce the amount of outstanding debt on their
balance sheets. the key to the nature of the us downturn is what will happen to
productivity growth. over the past five years there has been accumulating evidence
that the application of information technology has raised productivity growth
in the us economy - the " new economy ". expectations of higher productivity growth
increased demand by more than it raised supply initially, as firms invested in
new technology and households anticipated higher future incomes. as a result,
spending grew rapidly, outstripping supply and large imbalances emerged. the current
account deficit in the us is now close to 5 % of gdp, a post - war record. it
is sustainable as long as foreigners are prepared to finance it. so far, the profitability
of'
- 83 6. 06 27. 05 39. 42 37. 59 kccs ( no. in lakh ) 243. 07 271. 12 302. 35 337.
87 82. 43 gcc ( no. in lakh ) 13. 87 16. 99 21. 08 36. 29 22. 28 bc - ict accounts
( no. in lakh ) 132. 65 316. 30 573. 01 810. 38 677. 73 ict accounts - bc - total
transactions ( no. in lakh ) 265. 15 841. 64 1410. 93 2546. 51 4799. 08 bis central
bankers ’ speeches
- core business and therefore set out to develop the expertise required to finance
the smes successfully. fairly often the discussion on sme financing is reduced
to two diametrically opposed positions. on one hand, smes are considered by banks
as representing a high risk and therefore, should be avoided or only dealt with
cautiously and at a premium price. on the other hand, banks are accused of being
inflexible and risk averse and consequently irrelevant to the sector. it is important
to understand where the truth lies in these two statements in order to advance
the cause of sme financing. firstly, it is a fact that smes present higher credit
risk than well - structured corporate entities. smes may not have proper accounting
records, may have severe governance issues which undermine accountability, have
poor access to markets, poor skill levels including financial illiteracy by promoters,
lack collateral which the lender can rely on in the event of failure, may not
even exist in an appropriate legal form and even the assessment of the viability
of a project might be difficult. lending to smes can be a lenders nightmare for
bankers. but it is also true that banks which are structured to deal with corporates
are risk averse and inflexible when they deal with smes. often when they bring
inappropriate risk assessment tools, they may focus too much on collateral rather
than project viability. they may even regard sme financing as peripheral to their
business. because of their limited knowledge of smes, they experience failure
which itself reinforces the notion that smes are risky. what we want are financing
institutions that are structured to respond to the unique characteristics of smes.
specialised sme lending institutions are more likely to handle the risk problem
presented by smes as a challenge to be overcome with appropriate products and
credit risk management strategies and not as a basis for inaction or avoiding
the sector altogether. in short, lending strategies which ensure success with
corporates do not necessarily ensure similar success with smes. appropriate sme
financing institutions must at the very least make lending to smes the core business.
the second area of reflection that i would urge this meeting to consider is that
of building appropriate financing models that have been shown to work in africa
or other developing countries so that we all benefit from the best practices available.
in south africa, for instance, franchising which allows the use of brand names
and building capacity have been an bis central bankers ’ speeches important driver
of sme financing since it reduces the perceived
- source_sentence: and non - financial sectors take hold. these concerns are heightened
by the recent upward shift in bond yields owing to the global " higher - for -
longer " narrative and the flare - up of tensions in the middle east, which have
added to the uncertainty surrounding the outlook. after a period of lower market
volatility until august, the rising prospect of higher - forlonger rates has started
to weigh on riskier asset valuations in recent months. risk sentiment in markets
remains highly sensitive to further surprises in inflation and economic growth.
higher than expected inflation or lower growth could trigger a rise in market
volatility and risk premia, increasing the likelihood of credit events materialising.
this brings me to the vulnerabilities in the non - bank financial sector. as regards
credit risk, some non - banks remain heavily exposed to interest rate - sensitive
sectors, such as highly indebted corporates and real estate. deteriorating corporate
fundamentals and the ongoing correction in real estate markets could expose non
- banks that have 2 / 4 bis - central bankers'speeches invested in these sectors
to revaluation losses and investor outflows. furthermore, low levels of liquidity
could expose investment funds to the potential risk of forced asset sales if macro
- financial outcomes deteriorate. corporate profitability in the euro area has
held up well, but higher interest rates are weighing on the debt servicing capacity
of more vulnerable firms. a weakening economy could prove challenging for firms
with high debt levels, subdued earnings and low interest coverage ratios. real
estate firms are particularly vulnerable to losses stemming from the ongoing downturn
in euro area commercial real estate markets. in an environment of tighter financing
conditions and elevated uncertainty, real estate prices have declined markedly.
the effects of higher interest rates have been compounded by structurally lower
demand for some real estate assets following the pandemic. although banks'exposure
to these markets is comparatively low, losses in this segment could act as an
amplifying factor in the event of a wider shock. euro area households, especially
those with lower incomes and in countries with mainly floatingrate mortgages,
are being increasingly squeezed by the higher interest rates. tighter financing
conditions have reduced the demand for housing, putting downward pressure on prices.
on a more positive note, robust labour markets have so far supported household
balance sheets, thereby mitigating the credit risk to banks. spreads in government
bond markets have remained contained as many governments managed to secure cheap
financing at longer maturities during the period of low interest rates.
sentences:
- market. and if we are to raise our national economic vantage point and take advantage
of the momentum coming into 2013, thrift banks need to consider lending more to
this economic segment. i say this because msmes provide great employment opportunities.
a healthier msme sector will help ensure our economic growth is broad - based
and inclusive. thrift banks, however, must not just lend more in terms of nominal
amounts. the challenge to the industry really is to ensure that such lending continuously
creates further opportunities. in this way, msmes can be assured of viability,
regardless of mandated credit programs. final thoughts friends, you want this
convention to be a discussion on broadening your horizon, our horizon. in trying
to look ahead, i took us through the more scenic route of our recent past. the
numbers tell us an encouraging story of growth and consistency of economic and
financial stability. however, the paradox of financial stability is that we may
just be at our weakest when we believe we are at our strongest position. complacency
often sets in when positive news is continuous and this is reinforced by encouraging
market parameters. if we fall into this trap of complacency, we risk losing our
focus. for the thrift banking industry, the growth trajectory that many are anticipating
suggests even better times ahead. it is therefore in our collective interest,
if we wish to truly broaden our horizon, to agree on a common vision where thrift
banks have a definitive role to play. bis central bankers ’ speeches in my remarks,
i have suggested specific action points you may consider in order to accomplish
just that, and over time broaden your horizon. i challenge the industry to 1 )
further improve credit underwriting standards for real estate and consumer loans
and raise your vantage point by looking at your processes with the lens of financial
stability, 2 ) interact with your clients and raise your vantage point by enhancing
your practices with the heart of consumer protection and financial education,
and 3 ) increase lending to msmes and raise your vantage point by lending with
the mind to create greater value. but much more can be done. as the old board
takes its place in ctb lore, your new board is now at the helm to guide the industry
forward. it is never an easy task to move forward but rest assured that the bangko
sentral ng pilipinas will be with you as we take that journey together into broader
horizons. thank you very much and good day to all of you. bis central bankers
’ speeches
- prices, which may also come under upward pressure owing to adverse weather events
and the unfolding climate crisis more broadly. most measures of underlying inflation
continue to decline. the eurostat's flash estimate for inflation excluding energy
and food points to a further decline to 4. 2 % in october, supported by improving
supply conditions, the pass - through of previous declines in energy prices, as
well as the impact of tighter monetary policy on demand and corporate pricing
power. at the same time, domestic price pressures are still strong and 1 / 4 bis
- central bankers'speeches are being increasingly driven by wage pressures and
the evolution of profit margins. while most measures of longer - term inflation
expectations stand around 2 %, some indicators remain elevated and need to be
monitored closely. the resilience of the labour market has been a bright spot
for the euro area economy, but there are signs that the labour market is beginning
to weaken. fewer new jobs are being created and, according to the latest flash
estimate, employment expectations have continued to decline in october for both
services and manufacturing. monetary policy based on our assessment of the inflation
outlook, the dynamics of underlying inflation and the strength of monetary policy
transmission, the governing council decided to keep the three key ecb interest
rates unchanged at its october meeting. the incoming information has broadly confirmed
our previous assessment of the medium - term inflation outlook. our past interest
rate increases continue to be transmitted forcefully into financial and monetary
conditions. banks'funding costs have continued to rise and are being passed on
to businesses and households. the combination of higher borrowing rates and weakening
activity led to a further sharp drop in credit demand in the third quarter of
this year. and credit standards have tightened again. we are also seeing increasing
signs of the impact of our policy decisions on the real economy. further tightening
is still in the pipeline from the current policy stance, and it is set to further
dampen demand and help push down inflation. we are determined to ensure that inflation
returns to our 2 % medium - term target in a timely manner. based on our current
assessment, we consider that the key ecb interest rates are at levels that, maintained
for a sufficiently long duration, will make a substantial contribution to this
goal. we will continue to follow a data - dependent approach to determining the
appropriate level and duration of restriction. financial stability let me now
turn to financial stability. in our upcoming financial stability review, we highlight
that the financial stability outlook remains fragile as the gradual effects of
tighter financial conditions on both the financial
- the conservation of natural resources. it was only in the fitness of things that
he delivered the inaugural address today. sir, we are overwhelmed by your insightful
address and i am sure all of us have immensely benefited listening to his valuable
views on a subject which is not only very dear to you but also so vital for the
country ’ s economic growth. 4. i also note that the seminar includes several
eminent speakers with vast experience on the subject. there are sessions on agricultural
productivity, role of research and technology in improving agricultural productivity,
linkages between productivity and farm income, incentivizing productivity enhancing
investments, relation between credit growth and productivity growth and mitigation
of risks in agriculture. in my address today, coming as do from the rbi, i would
broadly focus on major trends / issues in agricultural productivity and credit,
and the role of agricultural credit in improving agricultural productivity. i
would also touch upon some of the steps that can yield results in the short -
term. importance of agriculture 5. as you are aware, the recent indian growth
story has been service - led. services sector has completely replaced agriculture,
which was traditionally the largest contributor to india ’ s gdp. however, the
fact that agriculture has the smallest share in gdp of only about 14 per cent
today from a high of more than 50 per cent, does not belittle its importance for
bis central bankers ’ speeches the indian economy. this is because first, as we
all know, agriculture remains the largest employer having a share of around 60
per cent. secondly, it holds the key to creation of demand in other sectors and
remains by far an important indirect contributor to india ’ s gdp growth. the
agriculture sector needs to grow at least by 4 per cent for the economy to grow
at 9 per cent. thus, though having a small share, the fluctuations in agricultural
production can have large and significant impact on overall gdp growth. thirdly,
since food is an important component in basket of commodities used for measuring
consumer price indices, it is necessary that food prices are maintained at reasonable
levels to ensure food security, especially for the deprived sections of our society.
in fact, food security is emerging as an important policy concern, and the role
of agriculture in ensuring equitable access to food has added a new perspective
for policy makers. trends in agricultural productivity 6. i would like to discuss
certain trends in agricultural productivity in india. as is wellknown, the year
1968 marked the beginning of a turning point in indian agriculture. the country
- source_sentence: having a reserve currency may also be desirable for other reasons.
first, with more reserve currencies available, portfolio diversification opportunities
are enhanced. this is desirable because, all else equal, it would allow investors
to move further out on the risk / return frontier. second, more currencies are
likely to be close substitutes, which could dampen currency volatility. with many
viable reserve currencies available, no particular one would necessarily have
to bear the bulk of any adjustment. the dollar ’ s dominant reserve currency status
has sometimes been referred to as the united states ’ “ exorbitant privilege,
” implying that the u. s. benefits extraordinarily from this privileged status.
i ’ d argue that the situation is much more nuanced. yes, this status does allow
the u. s. to benefit from seigniorage. more than half of all u. s. currency outstanding
is held abroad. but, there are also costs of being the dominant reserve currency.
for example, this can lead to shifts in the valuation of the dollar that are due
primarily to developments abroad that affect risk appetites and international
capital flows. in such cases, the dollar ’ s valuation can be pushed to levels
inconsistent with u. s. economic fundamentals. for the united states, i believe
that the most important goal must be to keep our own house in order. if we do
this, then i expect that the u. s. dollar will earn the right to remain the most
important reserve currency in the world. the united states has a number of advantages
in sustaining the dominant reserve currency status of the u. s. dollar. first,
there is a first - mover advantage. as the leading reserve currency in the world,
there is no strong incentive for countries to move to other currencies as long
as the dollar continues to have the attributes i discussed earlier. the history
of reserve currency usage is characterized by considerable inertia. the u. s.
dollar emerged as the leading reserve currency quite a bit after it became the
world ’ s largest economy. typically, the loss of dominant reserve currency status
requires either substantial economic decline or political instability that motivates
foreign counterparties to shift to a new reserve currency. second, the u. s. has
the deepest and most liquid capital markets in the world. this is important in
making u. s. treasuries and agency mortgage - backed securities attractive holdings
as part of countries ’ foreign exchange reserve portfolio
sentences:
- as those on student visas, are assumed to begin arriving early next year, subject
to appropriate quarantine restrictions. but tourists and other short - term visitors
will not be able to return until later. in the baseline and upside scenarios,
we assume that the borders reopen in mid 2021. in the downside scenario, where
the global spread of the virus does not subside as quickly, we assume the borders
are closed for all of 2021. it is always possible to construct other scenarios.
the near term would be stronger if there were a major medical breakthrough on
treatments soon. an effective vaccine would take a bit longer to be distributed,
so it would mainly affect outcomes next year and the year after. but it could
also result in a stronger recovery than we have assumed even in the upside scenario
presented here. a worse outcome than our downside could be conceivable if the
virus cannot be contained and further waves of infection occur around the world
for some years yet. there are also plenty of other risks that can be contemplated
; we discuss some of these in the statement. geopolitical tensions were an issue
even before the coronavirus outbreak, and could escalate further. the pandemic
has also in some places exacerbated domestic political tensions. it is hard to
know how these tensions will play out over the next couple of years. if they escalate,
it is possible that some countries'recoveries will be derailed. domestically,
there are also a number of uncertainties that go beyond the direct effects of
the virus and associated activity restrictions. for example, we have assumed that
households and businesses are quite cautious in their use of the fiscal and other
cash flow support they have been receiving. it is possible that people spend more
out of that support than we are assuming. it is even possible that some people
do more to make up for the consumption opportunities that were not available during
periods of lockdown and other activity restrictions. on the downside, the longer
the economy remains weak, the more the recovery will be impeded by scarring effects
on workers, the destruction of business supply networks and other lingering damage.
what has changed in the past three months the scenarios presented in the past
few months. statement incorporate several lessons from the experience of the first,
the economic contractions induced by health - related restrictions on activity
in the june quarter were very large. however, they were not quite as severe as
initially expected. the peak - to - trough declines in output and hours worked
- '. norway was hit by oil price shocks in the following years and economic reforms
were implemented in several areas. after a period it became clear that the central
government budget would be in surplus and that transfers would be made to the
fund. norway ’ s experience from its first 30 years as an oil - producing nation
led to the introduction of the fiscal rule, which has been a key element of norwegian
economic policy for the past decade. report no. 29 to the storting of 2001 laid
down guidelines for the phasing - in of petroleum revenues into the norwegian
economy, establishing two main principles : economic policy must contribute to
stable economic developments and be sustainable over time. by linking petroleum
revenue spending to the expected real return on the fund – and not to current
petroleum revenues – the fiscal rule provided for a gradual and sustainable phasingin
of the revenues. if we can restrict spending to the return, the fund will never
shrink. norway will also be less vulnerable to fluctuations in current petroleum
revenues. ( chart 8 : effect on the size of gpfg of change in oil price and return
) in the first ten years after the establishment of the fiscal rule, the pace
of the fund ’ s growth was determined by oil prices. prospects in 2001 indicated
that a 25 percent higher oil price over a 10 - year period would increase the
size of the fund by almost nok 800 billion. uncertainty with regard to the return
on the fund was far less important for growth. this situation will change as the
fund grows and oil and gas production declines. from 2020 to 2030, the oil price
will play a less prominent role for the size of the fund, while the return on
the fund will be all the more important. in the initial years, the actual return,
adjusted for inflation and costs, was close to 4 percent. in recent years, with
the financial crisis and the sovereign debt crisis, the real return has been lower,
averaging about 2½ percent since 1998. source : lie, e. and c. venneslan : over
evne. finansdepartementet 1965 – 1992 [ beyond our power. ministry of finance
1965 – 1992 ]. pax forlag, 2010. see report no. 25 to the storting ( 1973 - 74
) : “ petroleumsvirksomhetens plass i det norske samfunn ” [ the role of petroleum
activity in norwegian society ]. bis central bankers ’ speeches we should be careful
about taking a rear -'
- ', increasingly relies on price signals generated by trading activity that takes
place daily in these markets. the reliance on secondary market trading for price
discovery constitutes the fundamental difference between funds from securities
markets and loans from banks. let me be a bit more specific. in securities markets,
investment decisions are driven by prices that arise from a trading process that
reconciles differential information from a diverse group of investors. in bank
loans, investment decisions are based on the bank ’ s private information about
specific borrowers. while a bank makes its own investment decisions, securities
markets rely on the consensus of a multitude of investors. when securities markets
work well, they provide efficient ways of aggregating information and allocating
risks among a wide range of investors. in order to function well, however, these
markets require a trading infrastructure. this infrastructure may consist of an
exchange, a network of brokers and dealers, and a clearing system. these markets
also rely on a cadre of relatively well - informed investors, who confidently
judge asset prices and take positions on the – 2 – strength of their judgments.
if the trading infrastructure fails or investors lose confidence, trading will
grind to a halt. the global fixed - income markets are unlike equity markets.
in equity markets, everyone knows something about the trading infrastructure,
which is centralized in exchanges. thus, there is no question as to the focal
point of trading information. but the importance of fixed - income markets, which
are multiple - dealer, over - the - counter markets, is sometimes hard to appreciate
because they are so decentralized. in the united states, the bond market is where
companies have been raising most of their funds in recent years. during the last
ten years, for example, u. s. nonfinancial corporations borrowed a net amount
of $ 785 billion in the form of bonds, three times the net amount they borrowed
from banks. over this same period, these companies as a group spent $ 600 billion
more to retire stock – through buybacks and mergers – than they raised in new
offerings. accompanying these increased levels of debt market activity has been
a continuous process of financial innovation. this innovation has served to unbundle
different kinds of risk and, thereby, to enlarge the menu of risks that investors
may choose to bear. for example, interest - rate swaps, futures and options help
reconfigure various interest - rate risks. total return swaps and creditspread
options are tools for reallocating the payment risks primarily of emerging market'
- source_sentence: education shocks arising from the pandemic could signify the largest
reversal in human development on record, equivalent to erasing all the progress
in human development of the past six years4. the risk of reversing progress in
achieving financial inclusion is accentuating why financial inclusion matters
now more than ever. optimising islamic finance for inclusive economic recovery
policy and regulatory responses across the globe, and likewise in malaysia, have
been focused on safeguarding economic resilience, managing risks to financial
stability and minimising repercussions to society. this is done in tandem with
the different stages of the pandemic – namely, containment, stabilisation and
recovery. at the onset of the crisis, governments along with financial regulators
have deployed sizeable stimulus packages and various assistance programmes in
order to contain the crisis and stabilise the economy. this includes addressing
demand and supply disruptions, maintaining cash flows and keeping workers employed.
in malaysia, the total stimulus package amounted to usd 73. 55 billion ( rm3056
billion ) with an additional fiscal injection by the government totalling 1 /
4 bis central bankers'speeches rm45 billion. as at september 2020, a total of
2. 63 million workers and 321, 000 employers had benefitted from the wage subsidy
programme, involving an expenditure of rm10. 4 billion. to provide further stimulus
to the economy, the bank has reduced the overnight policy rate ( opr ) by a cumulative
125 basis points ( from 3. 00 % to 1. 75 % ) this year, alongside reduction in
the statutory reserve requirement by 100 basis points ( from 3. 00 % to 2. 00
% ). the reduction in the opr is intended to provide additional policy stimulus
to accelerate the pace of economic recovery. the financial industry, including
islamic financial institutions, also lent support to their borrowers and customers.
in the first half of 2020, a total of rm120 billion7 was disbursed in lending
/ financing to smes, with more accounts being approved8 in aggregate in 2020 compared
to the same period in previous years. islamic financial institutions and related
associations have been actively educating and reaching out to affected borrowers
about the financial assistance programmes available in response to the pandemic.
the takaful and insurance industry also facilitated affected certificate holders
by offering temporary deferment of contribution and premium to promote continuity
of takaful protection coverage. more than 1. 1 million9 certificate and policyholders
have benefited from this relief measure. while the
sentences:
- education at an early age. a credit union is similar to a commercial bank, it
can provide the same products and services like a commercial bank, but what is
different is that the credit union is owned by members who are the shareholders
and at the same time are customers to the credit union. credit unions can provide
different saving accounts to suit the needs of their members. they can create
saving products such as junior saving accounts for the children. credit unions
provide lending products for their members. and credit unions can provide insurance
coverage for their members as well. one of the reasons people explain why they
don ’ t want to open an account with a commercial bank is that because banks charge
fees to keep their accounts. while paying fees is something that we all cannot
avoid because provision of banking services costs money, in a credit union the
interests, fees and charges that are paid by members is distributed back to the
members as net interest income. i say this because i am also member of our staff
credit union called bokolo credit union limited day one when i join the central
bank of solomon islands and know the benefit credit unions can give members. as
the registrar for credit unions, i take note of the progress in the growth of
the assets of both the soltuna employees credit union limited and the tuna trust
credit union limited. if the board and management of these credit unions continue
to manage these credit unions professionally, they will both be the vehicle to
encourage our children and youth here in noro and munda community to learn how
to save and earn and become good financial and economic citizens of our country.
finally, let me take this opportunity to thank the many people who made this global
money week celebration possible. to the board of directors of both tuna trust
credit union limited and soltuna employees credit union limited. i salute you
for accepting our request to work together to host this global money week. i would
like to thank selwyn talasasa, one of the long serving credit union promoters
who has been a credit union person since day one of his career. he has been very
helpful in organizing our global money week. thank you selwyn. i also thank the
global money week committee for their assistance in organising this global money
week. and thanks to the management of both national fisheries development limited
and soltuna limited for your support in hosting this global money week. thank
you nfd and soltuna, you made us all proud as good cooperate citizens. to our
noro town clerk for your support.
- 'in tandem, the policy rate of this central bank would have to be changed frequently
and forcefully in the same direction to offset the shock. but what would happen
in the other economy, if it – unlike the first – were more prone to supply shocks?
experience suggests that supply shocks yield sharp transitory increases in inflation,
possibly followed by smaller, more permanent “ secondround ” effects, though the
longer - run impact on inflation is obviously significantly determined by the
response of monetary policy. given the transitory nature of the initial inflation
bursts, the simple hypothetical rule – which incorporates the reaction to expected
inflation – would advise the central bank to “ look through ” the immediate disturbance
and change policy only to the extent needed to offset the anticipated more permanent
effects of the shock on inflation in subsequent quarters. its policy rate, again,
would be observed to be less variable. what is important to note is that the same
rule – equally active strategies – would support two different patterns of observed
policy behaviour in different economic environments. the third case is perhaps
the most interesting of all. here exogenous shocks are identical, but economic
structures differ. different transmission mechanisms therefore propagate the same
shocks with lags that vary between the two economies. the first economy has more
rigid adjustment mechanisms : price - setters and wage - negotiators are more
sluggish than those in the other economy in processing economic news – including
changes in the stance of monetary policy – and bringing them to bear on their
decisions. what is the source of those rigidities in the first economy? there
can be many reasons for rigidity. perhaps labour practices and contractual institutions
– dating from the early post - war decades when the economy was heavily regulated
– induce distortions in large a similar interpretation of “ interest rate inertia
” – the tendency of central banks to adjust rates in the same direction and in
small steps – can be found in g. rudebusch, “ term structure evidence on interest
rate smoothing and monetary policy inertia ”, journal of monetary economics, vol.
49 ( pp. 1161 - 1187 ), 2002. segments of the labour market. this stands in the
way of an efficient matching of skills and productive capabilities. perhaps tight
regulatory restraint on business and statutory inhibitions discourage innovation
and impede a faster response to new shocks and new opportunities. whatever the
source of rigidity, the observational result is that prices and wages in the first
economy reflect changes in fundamentals with considerable lags. how should'
- 'fund ( uncdf ) are also actively developing financial inclusion knowledge, and
supporting countries in implementing financial inclusion strategies globally.
over the years, we have since witnessed the rapid deployment of innovative financial
inclusion solutions that have changed the lives of many. since the introduction
of the mpesa mobile banking service in kenya, the number of adults with access
to financial services has increased from 42 % in 2011 to 75 % today. agent banking
in malaysia has provided consumers with an innovative and alternative channel
to access financial services, resulting in the percentage of sub - districts served
by financial services access point to increase from 46 % in 2011 to 96 % in 2014.
this has enabled 99 % of malaysians, particularly those in rural areas, to conveniently
access and benefit from financial services. the microfinance “ global financial
development report 2014 ”, world bank, 2014. discussion note “ redistribution,
inequality, and growth ”, imf, 2014. bis central bankers ’ speeches institution
grameen bank in bangladesh has extended credit through its 2, 500 branches to
almost 8 million people, with 60 % of them lifted from poverty. the loan recovery
rate is higher than 98 percent. the experience in bangladesh, in particular, dispels
the myth that the poor is not bankable. of proportionate regulation and the global
standards the proportionate application of global standards for financial regulation
is a critical factor in enabling innovative financial inclusion solutions, ensuring
its delivery in a safe and sound manner. my remarks today will touch on the following
topics : recent developments in global standards and financial inclusion ; focus
areas to advance proportionality in practice ; and the importance of the global
symposium to galvanise action. recent developments in global standards and financial
inclusion global standards developed after the global financial crisis were designed
to address the financial stability and integrity issues that came to the fore
in developed countries. many of these issues involved the activities of global
systemically important financial institutions. the effects of these standards
on financial inclusion, such as the impact on smaller financial institutions in
developing countries most likely were not taken into consideration. fortunately,
standard - setting bodies have now recognised the principle of proportionality,
emphasising on the balance between objectives of financial stability, integrity
and inclusion. however, the real challenge lies in implementing proportionality
in practice. if principles are not implemented in a proportionate manner, there
could be unintended consequences for financial inclusion. for example, despite
the financial action task force ’ s guidance on the implementation of a risk -
based approach to anti money laundering /'
model-index:
- name: SentenceTransformer based on samchain/EconoBert
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: econosentencev2
type: econosentencev2
metrics:
- type: pearson_cosine
value: 0.8556254615515724
name: Pearson Cosine
- type: spearman_cosine
value: 0.8619885397301873
name: Spearman Cosine
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: econosentence prior df
type: econosentence-prior_df
metrics:
- type: pearson_cosine
value: 0.8373175950726144
name: Pearson Cosine
- type: spearman_cosine
value: 0.8294930387036759
name: Spearman Cosine
---
# SentenceTransformer based on samchain/EconoBert
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [samchain/EconoBert](https://huggingface.co/samchain/EconoBert). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
This model is the second version of [samchain/econo-sentence-v1](https://huggingface.co/samchain/econo-sentence-v1). Both versions share the same base model but were trained on different datasets.
The model has been evaluated on both the newest dataset and the preceding version:
- `econo-pairs-v2` is the newest dataset. Check its card [here](https://huggingface.co/samchain/econo-pairs-v2)
- `econo-pairs-v1` is the previous version. Check its card [here](https://huggingface.co/samchain/econo-pairs)
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [samchain/EconoBert](https://huggingface.co/samchain/EconoBert) <!-- at revision 1554ddcdc25e1886cc43e05b9c77a2b8c4888da6 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
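For reference, the pooling layer above averages token embeddings (mean pooling) rather than using the CLS token. Below is a minimal sketch of the equivalent computation with the plain `transformers` API; it assumes the checkpoint also loads as a standard BERT encoder via `AutoModel`, and the input sentence is purely illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModel
# Assumption: the SentenceTransformer checkpoint also loads as a plain BERT encoder
tokenizer = AutoTokenizer.from_pretrained("samchain/econosentence-v2")
model = AutoModel.from_pretrained("samchain/econosentence-v2")
def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
# Average the token embeddings, ignoring padded positions
mask = attention_mask.unsqueeze(-1).float()
return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
inputs = tokenizer(
["the central bank kept its policy rate unchanged"], # illustrative sentence
padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
output = model(**inputs)
embedding = mean_pool(output.last_hidden_state, inputs["attention_mask"])
print(embedding.shape) # expected: torch.Size([1, 768])
```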
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("samchain/econosentence-v2")
# Run inference
sentences = [
"education shocks arising from the pandemic could signify the largest reversal in human development on record, equivalent to erasing all the progress in human development of the past six years4. the risk of reversing progress in achieving financial inclusion is accentuating why financial inclusion matters now more than ever. optimising islamic finance for inclusive economic recovery policy and regulatory responses across the globe, and likewise in malaysia, have been focused on safeguarding economic resilience, managing risks to financial stability and minimising repercussions to society. this is done in tandem with the different stages of the pandemic – namely, containment, stabilisation and recovery. at the onset of the crisis, governments along with financial regulators have deployed sizeable stimulus packages and various assistance programmes in order to contain the crisis and stabilise the economy. this includes addressing demand and supply disruptions, maintaining cash flows and keeping workers employed. in malaysia, the total stimulus package amounted to usd 73. 55 billion ( rm3056 billion ) with an additional fiscal injection by the government totalling 1 / 4 bis central bankers'speeches rm45 billion. as at september 2020, a total of 2. 63 million workers and 321, 000 employers had benefitted from the wage subsidy programme, involving an expenditure of rm10. 4 billion. to provide further stimulus to the economy, the bank has reduced the overnight policy rate ( opr ) by a cumulative 125 basis points ( from 3. 00 % to 1. 75 % ) this year, alongside reduction in the statutory reserve requirement by 100 basis points ( from 3. 00 % to 2. 00 % ). the reduction in the opr is intended to provide additional policy stimulus to accelerate the pace of economic recovery. the financial industry, including islamic financial institutions, also lent support to their borrowers and customers. in the first half of 2020, a total of rm120 billion7 was disbursed in lending / financing to smes, with more accounts being approved8 in aggregate in 2020 compared to the same period in previous years. islamic financial institutions and related associations have been actively educating and reaching out to affected borrowers about the financial assistance programmes available in response to the pandemic. the takaful and insurance industry also facilitated affected certificate holders by offering temporary deferment of contribution and premium to promote continuity of takaful protection coverage. more than 1. 1 million9 certificate and policyholders have benefited from this relief measure. while the",
'education at an early age. a credit union is similar to a commercial bank, it can provide the same products and services like a commercial bank, but what is different is that the credit union is owned by members who are the shareholders and at the same time are customers to the credit union. credit unions can provide different saving accounts to suit the needs of their members. they can create saving products such as junior saving accounts for the children. credit unions provide lending products for their members. and credit unions can provide insurance coverage for their members as well. one of the reasons people explain why they don ’ t want to open an account with a commercial bank is that because banks charge fees to keep their accounts. while paying fees is something that we all cannot avoid because provision of banking services costs money, in a credit union the interests, fees and charges that are paid by members is distributed back to the members as net interest income. i say this because i am also member of our staff credit union called bokolo credit union limited day one when i join the central bank of solomon islands and know the benefit credit unions can give members. as the registrar for credit unions, i take note of the progress in the growth of the assets of both the soltuna employees credit union limited and the tuna trust credit union limited. if the board and management of these credit unions continue to manage these credit unions professionally, they will both be the vehicle to encourage our children and youth here in noro and munda community to learn how to save and earn and become good financial and economic citizens of our country. finally, let me take this opportunity to thank the many people who made this global money week celebration possible. to the board of directors of both tuna trust credit union limited and soltuna employees credit union limited. i salute you for accepting our request to work together to host this global money week. i would like to thank selwyn talasasa, one of the long serving credit union promoters who has been a credit union person since day one of his career. he has been very helpful in organizing our global money week. thank you selwyn. i also thank the global money week committee for their assistance in organising this global money week. and thanks to the management of both national fisheries development limited and soltuna limited for your support in hosting this global money week. thank you nfd and soltuna, you made us all proud as good cooperate citizens. to our noro town clerk for your support.',
'fund ( uncdf ) are also actively developing financial inclusion knowledge, and supporting countries in implementing financial inclusion strategies globally. over the years, we have since witnessed the rapid deployment of innovative financial inclusion solutions that have changed the lives of many. since the introduction of the mpesa mobile banking service in kenya, the number of adults with access to financial services has increased from 42 % in 2011 to 75 % today. agent banking in malaysia has provided consumers with an innovative and alternative channel to access financial services, resulting in the percentage of sub - districts served by financial services access point to increase from 46 % in 2011 to 96 % in 2014. this has enabled 99 % of malaysians, particularly those in rural areas, to conveniently access and benefit from financial services. the microfinance “ global financial development report 2014 ”, world bank, 2014. discussion note “ redistribution, inequality, and growth ”, imf, 2014. bis central bankers ’ speeches institution grameen bank in bangladesh has extended credit through its 2, 500 branches to almost 8 million people, with 60 % of them lifted from poverty. the loan recovery rate is higher than 98 percent. the experience in bangladesh, in particular, dispels the myth that the poor is not bankable. of proportionate regulation and the global standards the proportionate application of global standards for financial regulation is a critical factor in enabling innovative financial inclusion solutions, ensuring its delivery in a safe and sound manner. my remarks today will touch on the following topics : recent developments in global standards and financial inclusion ; focus areas to advance proportionality in practice ; and the importance of the global symposium to galvanise action. recent developments in global standards and financial inclusion global standards developed after the global financial crisis were designed to address the financial stability and integrity issues that came to the fore in developed countries. many of these issues involved the activities of global systemically important financial institutions. the effects of these standards on financial inclusion, such as the impact on smaller financial institutions in developing countries most likely were not taken into consideration. fortunately, standard - setting bodies have now recognised the principle of proportionality, emphasising on the balance between objectives of financial stability, integrity and inclusion. however, the real challenge lies in implementing proportionality in practice. if principles are not implemented in a proportionate manner, there could be unintended consequences for financial inclusion. for example, despite the financial action task force ’ s guidance on the implementation of a risk - based approach to anti money laundering /',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Datasets: `econo-pairs-v2` and `econo-pairs-v1`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
Econo-sentence-v2 evaluation results:
| Metric | econo-pairs-v2 | econo-pairs-v1 |
|:--------------------|:----------------|:-----------------------|
| pearson_cosine | 0.8556 | 0.8373 |
| **spearman_cosine** | **0.862** | **0.8295** |
Econo-sentence-v1 evaluation results:
| Metric | econo-pairs-v2 | econo-pairs-v1 |
|:--------------------|:----------------|:-----------------------|
| pearson_cosine | 0.7642 | 0.8999 |
| **spearman_cosine** | **0.758** | **0.8358** |
The v2 and v1 models differ in their training data: v2 was trained on pairs that include an intermediate 0.5 label, while v1 was trained only on binary 0 and 1 labels.
The results show that v2 performs roughly on par with its predecessor on the earlier dataset while improving performance by about 10 points on the newer one.
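As a rough guide, the scores above could be reproduced along the following lines. This is a sketch only: it assumes the pair dataset is published with a `test` split exposing the `text1`, `text2`, and `label` columns described in the Training Details section.
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
model = SentenceTransformer("samchain/econosentence-v2")
# Assumed split name; column layout (text1 / text2 / label) follows the Training Details section
pairs = load_dataset("samchain/econo-pairs-v2", split="test")
evaluator = EmbeddingSimilarityEvaluator(
sentences1=pairs["text1"],
sentences2=pairs["text2"],
scores=pairs["label"],
name="econo-pairs-v2",
)
print(evaluator(model)) # includes Pearson / Spearman cosine correlations
```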
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Train set of econo-pairs-v2
* Size: 45,044 training samples
* Columns: <code>text1</code>, <code>text2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | text1 | text2 | label |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 454.6 tokens</li><li>max: 505 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 458.88 tokens</li><li>max: 505 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.49</li><li>max: 1.0</li></ul> |
* Samples:
| text1 | text2 | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>in credit intermediation is not disrupted. i hope the analysis and observations i have shared today can assist with a process of learning, evolving, and improving that will prevent future disruptions. thank you. bis central bankers ’ speeches bis central bankers ’ speeches bis central bankers ’ speeches</code> | <code>richard byles : update and outlook for the jamaican economy monetary policy press statement by mr richard byles, governor of the bank of jamaica, at the quarterly monetary policy report press conference, kingston, 21 february 2024. * * * introduction good morning and welcome to our first quarterly monetary policy report press conference for 2024. the year has started with inflation being above the bank's inflation target. statin reported last week that headline inflation at january 2024 was 7. 4 per cent, which is higher than the outturns for the previous three months, and above the bank's target of 4 to 6 per cent. core inflation, however, which excludes food and fuel prices from the consumer price index, was 5. 9 per cent, which is lower than the 7. 1 per cent recorded in january 2023. the inflation outturn at january is higher than the bank had projected in november 2023. as we communicated then, in the wake of the announcement by the minister of finance and the public service of th...</code> | <code>0.0</code> |
| <code>european central bank : press conference - introductory statement introductory statement by mr jean - claude trichet, president of the european central bank, frankfurt am main, 4 may 2006. * * * ladies and gentlemen, let me welcome you to our press conference and report on the outcome of today ’ s meeting of the ecb ’ s governing council. the meeting was also attended by commissioner almunia. on the basis of our regular economic and monetary analyses, we have decided to leave the key ecb interest rates unchanged. overall, the information which has become available since our last meeting broadly confirms our earlier assessment of the outlook for price developments and economic activity in the euro area, and that monetary and credit growth remains very dynamic. against this background, the governing council will exercise strong vigilance in order to ensure that risks to price stability over the medium term do not materialise. such vigilance is particularly warranted in a context of ample...</code> | <code>vitor constancio : strengthening european economic governance – surveillance of fiscal and macroeconomic imbalances speech by mr vitor constancio, vice - president of the european central bank, at the brussels economic forum, brussels, 18 may 2011. * * * thank you very much for the invitation to participate in this panel today. the session is intended to focus on “ surveillance of fiscal and macroeconomic imbalances ”, which arguably is the most important strand of the economic governance package. but one should consider that this package – beyond the 6 legislative acts – also contains three other strands : the euro plus pact, the creation of the european stability mechanism and the setting - up of the european system of financial supervision. a fundamental strengthening of economic governance in the euro area requires simultaneous progress in all three areas. the first strand is necessary to prevent and correct imbalances ; the second to ensure the conditions for future growth and com...</code> | <code>0.5</code> |
| <code>. innovation is already altering the power source of motor vehicles, and much research is directed at reducing gasoline requirements. at present, gasoline consumption in the united states alone accounts for 11 percent of world oil production. moreover, new technologies to preserve existing conventional oil reserves and to stabilize oil prices will emerge in the years ahead. we will begin the transition to the next major sources of energy perhaps before midcentury as production from conventional oil reservoirs, according to central tendency scenarios of the energy information administration, is projected to peak. in fact, the development and application of new sources of energy, especially nonconventional oil, is already in train. nonetheless, it will take time. we, and the rest of the world, doubtless will have to live with the uncertainties of the oil markets for some time to come.</code> | <code>oil industry to build inventories, demand from investors who have accumulated large net long positions in distant oil futures and options is expanding once again. such speculative positions are claims against future oil holdings of oil firms. currently, strained capacity has limited the ability of oil producers to quickly satisfy this markedly increased demand for inventory. adding to the difficulties is the rising consumption of oil, especially in china and india, both of which are expanding economically in ways that are relatively energy intensive. even the recent notable pickup in opec output, by exhausting most of its remaining excess capacity, has only modestly satisfied overall demand. output from producers outside opec has also increased materially, but investment in new producing wells has lagged, limiting growth of production in the near term. crude oil prices are also being distorted by shortages of capacity to upgrade the higher sulphur content and heavier grades of crude oi...</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
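As an illustrative sketch (not the exact training script), a dataset with the same three columns and a `CoSENTLoss` configured with the parameters above could be set up as follows; the base checkpoint and the toy sentences are assumptions for the example.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("your-base-checkpoint")  # placeholder base model

# Toy rows mirroring the text1 / text2 / label layout of econo-pairs-v2.
train_dataset = Dataset.from_dict({
    "text1": ["speech excerpt on inflation targeting", "speech excerpt on bank supervision"],
    "text2": ["speech excerpt on price stability", "speech excerpt on tourism statistics"],
    "label": [1.0, 0.0],
})

# scale=20.0 and pairwise cosine similarity mirror the parameters listed above.
train_loss = losses.CoSENTLoss(
    model=model,
    scale=20.0,
    similarity_fct=util.pairwise_cos_sim,
)
```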
### Evaluation Dataset
#### Test set of econo-pairs-v2
* Size: 11,261 evaluation samples
* Columns: <code>text1</code>, <code>text2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | text1 | text2 | label |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 451.65 tokens</li><li>max: 505 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 457.79 tokens</li><li>max: 505 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| text1 | text2 | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>the drying up of the unsecured interbank market a price worth paying for lower interest rate volatility and tighter interest rate control in the money markets? at present, it is difficult to envisage how the unsecured interbank money market could be revived in full in the near future, reaching the pre - global financial crisis levels. this is due both to regulatory reasons and to fragmentation. post - crisis, banks appear reluctant to lend each other in the unsecured interbank market, pricing such lending at levels that are sufficient to compensate for counterparty credit risk. at the same time, short - term ( below 6 months ) interbank funding does not contribute towards satisfying the nsfr requirements. as a result, the unsecured segment of the interbank market is unattractive for prospective borrowers and is not expected to regain importance. furthermore, heterogeneity within and across the banking systems of euro area countries impedes the efficient redistribution of reserves acros...</code> | <code>##obalisation, artificial intelligence, as well as the green transition and climate change adaptation policies. 1 / 7 bis - central bankers'speeches given the uncertain overall impact of these effects and the significant unknowns and ambiguity surrounding the computation of natural interest rates, their prospective level and evolution are not easily determined. turning now to the lessons learnt, experience gained during the past crises has been extremely valuable. first, monetary policy needs to remain flexible so that it can accommodate potential future shocks to price stability, of any nature and direction. second, financial stability considerations need to be taken into account in the decision making of monetary policymakers, in their pursuit of price stability. the past decade provided us with new insights into what monetary policy can do. we are now better prepared to implement the appropriate monetary policy mix, with standard and non - standard tools, if we were to face new chal...</code> | <code>1.0</code> |
| <code>therefore worries me that greece might go backwards in terms of its reform agenda. suffice to say, it is not enough to correct misalignments solely at the national level. we also need structural reforms at the european level. and here, a lot has been done since the crisis broke out. first, the rules of the stability and growth pact were tightened and a fiscal compact was adopted in order to restore confidence in public finances. second, a procedure for identifying macroeconomic imbalances at an early stage was established. and third, a crisis mechanism was set up to serve as a “ firewall ”, safeguarding the stability of the financial system in the euro area. in addition to these measures, the euro area took a major step toward deeper financial integration. on 4 november 2014, the first pillar of a european banking union was put in place. on that day, the ecb assumed responsibility for supervising the 123 largest banks in the euro area. together, the banks concerned account for more tha...</code> | <code>andreas dombret : the euro area – where do we stand? speech by dr andreas dombret, member of the executive board of the deutsche bundesbank, at the german - turkish chamber of trade and commerce, istanbul, 10 february 2015. * 1. * * introduction ladies and gentlemen thank you for inviting me to speak here at the german - turkish chamber of industry and commerce in istanbul. isaac newton once observed that “ we build too many walls and not enough bridges ”. istanbul has for centuries been a bridge between two continents – asia and europe. it seems to me that istanbul is therefore an excellent place to hold this year ’ s g20 meetings. after all, the purpose of these meetings is to build bridges between nations by fostering dialogue and the mutual exchange of information. and this objective is of course also pursued by the german - turkish chamber of trade and commerce. moreover, mutual exchange is essential in a globalised world where countries are no longer isolated islands but part of ...</code> | <code>1.0</code> |
| <code>for vital issues regarding the agenda. i desire you all success with the materialization and accomplishment of your present and future projects and i conclude by wishing all the foreign guests at this conference a good stay in romania. i am now giving the floor to mr kurt puchinger, coordinator of the eu strategy for the danube region and chair of the first session, to start the conference. thank you. bis central bankers ’ speeches</code> | <code>national bank of romania, as they capture appropriately the opinions of the real sector on some key issues, such as : ( i ) the most pressing problems that firms are facing in their activity ; ( ii ) the investment needs and priorities ; ( iii ) challenges raised by climate change and energy efficiency. the survey shows that firms in romania have become less optimistic regarding investment conditions for the year ahead. while this is a worrying trend, we are rather positive that romania will be able to overcome the main problem identified, the energy costs, in the near future. i would add here that the government has implemented several measures in this respect, such as electricity and natural gas price capping schemes, the compensation of the price of motor fuel / liter and caps on the firewood price. also, we are aware that the war in ukraine and the related sanctions will continue to generate considerable uncertainties and risks to the outlook for economic activity, through possibly...</code> | <code>0.5</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
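As a sketch, these non-default values map onto `SentenceTransformerTrainingArguments` roughly as shown below; the output directory is a placeholder.

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs/econo-sentence-v2",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```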
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
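Putting the pieces together, a rough end-to-end sketch (assuming the loss, training arguments, evaluator, and datasets from the snippets above; nothing here is the verbatim training script) would look like this:

```python
from sentence_transformers import SentenceTransformerTrainer

# Assumes: `model` and `train_loss` from the CoSENTLoss sketch, `args` from the
# hyperparameter sketch, `evaluator` from the Evaluation section, and
# `train_dataset` / `eval_dataset` with text1, text2 and label columns.
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=train_loss,
    evaluator=evaluator,
)
trainer.train()
model.save_pretrained("outputs/econo-sentence-v2/final")  # placeholder path
```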
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | econosentencev2_spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|:-------------------------------:|
| 0.0888 | 250 | 4.2245 | - | - |
| 0.1776 | 500 | 3.6201 | 3.5284 | 0.7812 |
| 0.2663 | 750 | 3.5093 | - | - |
| 0.3551 | 1000 | 3.4858 | 3.4938 | 0.8177 |
| 0.4439 | 1250 | 3.5671 | - | - |
| 0.5327 | 1500 | 3.3165 | 3.2432 | 0.8302 |
| 0.6214 | 1750 | 3.2781 | - | - |
| 0.7102 | 2000 | 3.2901 | 3.1754 | 0.8355 |
| 0.7990 | 2250 | 3.2053 | - | - |
| 0.8878 | 2500 | 3.1673 | 3.1109 | 0.8422 |
| 0.9766 | 2750 | 3.0844 | - | - |
| 1.0653 | 3000 | 2.7478 | 3.1557 | 0.8483 |
| 1.1541 | 3250 | 2.7042 | - | - |
| 1.2429 | 3500 | 2.6291 | 3.1707 | 0.8541 |
| 1.3317 | 3750 | 2.6193 | - | - |
| 1.4205 | 4000 | 2.6128 | 3.1136 | 0.8546 |
| 1.5092 | 4250 | 2.5762 | - | - |
| 1.5980 | 4500 | 2.5992 | 3.0960 | 0.8579 |
| 1.6868 | 4750 | 2.4983 | - | - |
| 1.7756 | 5000 | 2.3061 | 3.1615 | 0.8608 |
| 1.8643 | 5250 | 2.3459 | - | - |
| 1.9531 | 5500 | 2.418 | 3.1436 | 0.8617 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.1.0+cu118
- Accelerate: 1.4.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citation
Samuel Chaineau
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | [
"BEAR"
] |
binqiangliu/EmbeddingModlebgelargeENv1.5 | binqiangliu | feature-extraction | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-11-19T07:15:12 | 2023-11-19T07:25:05 | 63 | 0 | ---
language:
- en
license: mit
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-large-en-v1.5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.8507462686567
- type: ap
value: 38.566457320228245
- type: f1
value: 69.69386648043475
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.416675
- type: ap
value: 89.1928861155922
- type: f1
value: 92.39477019574215
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.175999999999995
- type: f1
value: 47.80712792870253
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.184999999999995
- type: map_at_10
value: 55.654
- type: map_at_100
value: 56.25
- type: map_at_1000
value: 56.255
- type: map_at_3
value: 51.742999999999995
- type: map_at_5
value: 54.129000000000005
- type: mrr_at_1
value: 40.967
- type: mrr_at_10
value: 55.96
- type: mrr_at_100
value: 56.54900000000001
- type: mrr_at_1000
value: 56.554
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.44
- type: ndcg_at_1
value: 40.184999999999995
- type: ndcg_at_10
value: 63.542
- type: ndcg_at_100
value: 65.96499999999999
- type: ndcg_at_1000
value: 66.08699999999999
- type: ndcg_at_3
value: 55.582
- type: ndcg_at_5
value: 59.855000000000004
- type: precision_at_1
value: 40.184999999999995
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.405
- type: recall_at_1
value: 40.184999999999995
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 98.72
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 77.027
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.567077926750066
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.19453389182364
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.46555939623092
- type: mrr
value: 77.82361605768807
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.9554128814735
- type: cos_sim_spearman
value: 84.65373612172036
- type: euclidean_pearson
value: 83.2905059954138
- type: euclidean_spearman
value: 84.52240782811128
- type: manhattan_pearson
value: 82.99533802997436
- type: manhattan_spearman
value: 84.20673798475734
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.78896103896103
- type: f1
value: 87.77189310964883
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.714538337650495
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.90108349284447
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.795
- type: map_at_10
value: 43.669000000000004
- type: map_at_100
value: 45.151
- type: map_at_1000
value: 45.278
- type: map_at_3
value: 40.006
- type: map_at_5
value: 42.059999999999995
- type: mrr_at_1
value: 39.771
- type: mrr_at_10
value: 49.826
- type: mrr_at_100
value: 50.504000000000005
- type: mrr_at_1000
value: 50.549
- type: mrr_at_3
value: 47.115
- type: mrr_at_5
value: 48.832
- type: ndcg_at_1
value: 39.771
- type: ndcg_at_10
value: 50.217999999999996
- type: ndcg_at_100
value: 55.454
- type: ndcg_at_1000
value: 57.37
- type: ndcg_at_3
value: 44.885000000000005
- type: ndcg_at_5
value: 47.419
- type: precision_at_1
value: 39.771
- type: precision_at_10
value: 9.642000000000001
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.268
- type: precision_at_5
value: 15.536
- type: recall_at_1
value: 32.795
- type: recall_at_10
value: 62.580999999999996
- type: recall_at_100
value: 84.438
- type: recall_at_1000
value: 96.492
- type: recall_at_3
value: 47.071000000000005
- type: recall_at_5
value: 54.079
- type: map_at_1
value: 32.671
- type: map_at_10
value: 43.334
- type: map_at_100
value: 44.566
- type: map_at_1000
value: 44.702999999999996
- type: map_at_3
value: 40.343
- type: map_at_5
value: 41.983
- type: mrr_at_1
value: 40.764
- type: mrr_at_10
value: 49.382
- type: mrr_at_100
value: 49.988
- type: mrr_at_1000
value: 50.03300000000001
- type: mrr_at_3
value: 47.293
- type: mrr_at_5
value: 48.51
- type: ndcg_at_1
value: 40.764
- type: ndcg_at_10
value: 49.039
- type: ndcg_at_100
value: 53.259
- type: ndcg_at_1000
value: 55.253
- type: ndcg_at_3
value: 45.091
- type: ndcg_at_5
value: 46.839999999999996
- type: precision_at_1
value: 40.764
- type: precision_at_10
value: 9.191
- type: precision_at_100
value: 1.476
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 21.72
- type: precision_at_5
value: 15.299
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 58.816
- type: recall_at_100
value: 76.654
- type: recall_at_1000
value: 89.05999999999999
- type: recall_at_3
value: 46.743
- type: recall_at_5
value: 51.783
- type: map_at_1
value: 40.328
- type: map_at_10
value: 53.32599999999999
- type: map_at_100
value: 54.37499999999999
- type: map_at_1000
value: 54.429
- type: map_at_3
value: 49.902
- type: map_at_5
value: 52.002
- type: mrr_at_1
value: 46.332
- type: mrr_at_10
value: 56.858
- type: mrr_at_100
value: 57.522
- type: mrr_at_1000
value: 57.54899999999999
- type: mrr_at_3
value: 54.472
- type: mrr_at_5
value: 55.996
- type: ndcg_at_1
value: 46.332
- type: ndcg_at_10
value: 59.313
- type: ndcg_at_100
value: 63.266999999999996
- type: ndcg_at_1000
value: 64.36
- type: ndcg_at_3
value: 53.815000000000005
- type: ndcg_at_5
value: 56.814
- type: precision_at_1
value: 46.332
- type: precision_at_10
value: 9.53
- type: precision_at_100
value: 1.238
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.054000000000002
- type: precision_at_5
value: 16.589000000000002
- type: recall_at_1
value: 40.328
- type: recall_at_10
value: 73.421
- type: recall_at_100
value: 90.059
- type: recall_at_1000
value: 97.81
- type: recall_at_3
value: 59.009
- type: recall_at_5
value: 66.352
- type: map_at_1
value: 27.424
- type: map_at_10
value: 36.332
- type: map_at_100
value: 37.347
- type: map_at_1000
value: 37.422
- type: map_at_3
value: 33.743
- type: map_at_5
value: 35.176
- type: mrr_at_1
value: 29.153000000000002
- type: mrr_at_10
value: 38.233
- type: mrr_at_100
value: 39.109
- type: mrr_at_1000
value: 39.164
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.169000000000004
- type: ndcg_at_1
value: 29.153000000000002
- type: ndcg_at_10
value: 41.439
- type: ndcg_at_100
value: 46.42
- type: ndcg_at_1000
value: 48.242000000000004
- type: ndcg_at_3
value: 36.362
- type: ndcg_at_5
value: 38.743
- type: precision_at_1
value: 29.153000000000002
- type: precision_at_10
value: 6.315999999999999
- type: precision_at_100
value: 0.927
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 15.443000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.424
- type: recall_at_10
value: 55.364000000000004
- type: recall_at_100
value: 78.211
- type: recall_at_1000
value: 91.74600000000001
- type: recall_at_3
value: 41.379
- type: recall_at_5
value: 47.14
- type: map_at_1
value: 19.601
- type: map_at_10
value: 27.826
- type: map_at_100
value: 29.017
- type: map_at_1000
value: 29.137
- type: map_at_3
value: 25.125999999999998
- type: map_at_5
value: 26.765
- type: mrr_at_1
value: 24.005000000000003
- type: mrr_at_10
value: 32.716
- type: mrr_at_100
value: 33.631
- type: mrr_at_1000
value: 33.694
- type: mrr_at_3
value: 29.934
- type: mrr_at_5
value: 31.630999999999997
- type: ndcg_at_1
value: 24.005000000000003
- type: ndcg_at_10
value: 33.158
- type: ndcg_at_100
value: 38.739000000000004
- type: ndcg_at_1000
value: 41.495
- type: ndcg_at_3
value: 28.185
- type: ndcg_at_5
value: 30.796
- type: precision_at_1
value: 24.005000000000003
- type: precision_at_10
value: 5.908
- type: precision_at_100
value: 1.005
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 13.391
- type: precision_at_5
value: 9.876
- type: recall_at_1
value: 19.601
- type: recall_at_10
value: 44.746
- type: recall_at_100
value: 68.82300000000001
- type: recall_at_1000
value: 88.215
- type: recall_at_3
value: 31.239
- type: recall_at_5
value: 37.695
- type: map_at_1
value: 30.130000000000003
- type: map_at_10
value: 40.96
- type: map_at_100
value: 42.282
- type: map_at_1000
value: 42.392
- type: map_at_3
value: 37.889
- type: map_at_5
value: 39.661
- type: mrr_at_1
value: 36.958999999999996
- type: mrr_at_10
value: 46.835
- type: mrr_at_100
value: 47.644
- type: mrr_at_1000
value: 47.688
- type: mrr_at_3
value: 44.562000000000005
- type: mrr_at_5
value: 45.938
- type: ndcg_at_1
value: 36.958999999999996
- type: ndcg_at_10
value: 47.06
- type: ndcg_at_100
value: 52.345
- type: ndcg_at_1000
value: 54.35
- type: ndcg_at_3
value: 42.301
- type: ndcg_at_5
value: 44.635999999999996
- type: precision_at_1
value: 36.958999999999996
- type: precision_at_10
value: 8.479000000000001
- type: precision_at_100
value: 1.284
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 20.244
- type: precision_at_5
value: 14.224999999999998
- type: recall_at_1
value: 30.130000000000003
- type: recall_at_10
value: 59.27
- type: recall_at_100
value: 81.195
- type: recall_at_1000
value: 94.21199999999999
- type: recall_at_3
value: 45.885
- type: recall_at_5
value: 52.016
- type: map_at_1
value: 26.169999999999998
- type: map_at_10
value: 36.451
- type: map_at_100
value: 37.791000000000004
- type: map_at_1000
value: 37.897
- type: map_at_3
value: 33.109
- type: map_at_5
value: 34.937000000000005
- type: mrr_at_1
value: 32.877
- type: mrr_at_10
value: 42.368
- type: mrr_at_100
value: 43.201
- type: mrr_at_1000
value: 43.259
- type: mrr_at_3
value: 39.763999999999996
- type: mrr_at_5
value: 41.260000000000005
- type: ndcg_at_1
value: 32.877
- type: ndcg_at_10
value: 42.659000000000006
- type: ndcg_at_100
value: 48.161
- type: ndcg_at_1000
value: 50.345
- type: ndcg_at_3
value: 37.302
- type: ndcg_at_5
value: 39.722
- type: precision_at_1
value: 32.877
- type: precision_at_10
value: 7.9
- type: precision_at_100
value: 1.236
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 17.846
- type: precision_at_5
value: 12.9
- type: recall_at_1
value: 26.169999999999998
- type: recall_at_10
value: 55.35
- type: recall_at_100
value: 78.755
- type: recall_at_1000
value: 93.518
- type: recall_at_3
value: 40.176
- type: recall_at_5
value: 46.589000000000006
- type: map_at_1
value: 27.15516666666667
- type: map_at_10
value: 36.65741666666667
- type: map_at_100
value: 37.84991666666666
- type: map_at_1000
value: 37.96316666666667
- type: map_at_3
value: 33.74974999999999
- type: map_at_5
value: 35.3765
- type: mrr_at_1
value: 32.08233333333334
- type: mrr_at_10
value: 41.033833333333334
- type: mrr_at_100
value: 41.84524999999999
- type: mrr_at_1000
value: 41.89983333333333
- type: mrr_at_3
value: 38.62008333333333
- type: mrr_at_5
value: 40.03441666666666
- type: ndcg_at_1
value: 32.08233333333334
- type: ndcg_at_10
value: 42.229
- type: ndcg_at_100
value: 47.26716666666667
- type: ndcg_at_1000
value: 49.43466666666667
- type: ndcg_at_3
value: 37.36408333333333
- type: ndcg_at_5
value: 39.6715
- type: precision_at_1
value: 32.08233333333334
- type: precision_at_10
value: 7.382583333333334
- type: precision_at_100
value: 1.16625
- type: precision_at_1000
value: 0.15408333333333332
- type: precision_at_3
value: 17.218
- type: precision_at_5
value: 12.21875
- type: recall_at_1
value: 27.15516666666667
- type: recall_at_10
value: 54.36683333333333
- type: recall_at_100
value: 76.37183333333333
- type: recall_at_1000
value: 91.26183333333333
- type: recall_at_3
value: 40.769916666666674
- type: recall_at_5
value: 46.702333333333335
- type: map_at_1
value: 25.749
- type: map_at_10
value: 33.001999999999995
- type: map_at_100
value: 33.891
- type: map_at_1000
value: 33.993
- type: map_at_3
value: 30.703999999999997
- type: map_at_5
value: 31.959
- type: mrr_at_1
value: 28.834
- type: mrr_at_10
value: 35.955
- type: mrr_at_100
value: 36.709
- type: mrr_at_1000
value: 36.779
- type: mrr_at_3
value: 33.947
- type: mrr_at_5
value: 35.089
- type: ndcg_at_1
value: 28.834
- type: ndcg_at_10
value: 37.329
- type: ndcg_at_100
value: 41.79
- type: ndcg_at_1000
value: 44.169000000000004
- type: ndcg_at_3
value: 33.184999999999995
- type: ndcg_at_5
value: 35.107
- type: precision_at_1
value: 28.834
- type: precision_at_10
value: 5.7669999999999995
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.213000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 25.749
- type: recall_at_10
value: 47.791
- type: recall_at_100
value: 68.255
- type: recall_at_1000
value: 85.749
- type: recall_at_3
value: 36.199
- type: recall_at_5
value: 41.071999999999996
- type: map_at_1
value: 17.777
- type: map_at_10
value: 25.201
- type: map_at_100
value: 26.423999999999996
- type: map_at_1000
value: 26.544
- type: map_at_3
value: 22.869
- type: map_at_5
value: 24.023
- type: mrr_at_1
value: 21.473
- type: mrr_at_10
value: 29.12
- type: mrr_at_100
value: 30.144
- type: mrr_at_1000
value: 30.215999999999998
- type: mrr_at_3
value: 26.933
- type: mrr_at_5
value: 28.051
- type: ndcg_at_1
value: 21.473
- type: ndcg_at_10
value: 30.003
- type: ndcg_at_100
value: 35.766
- type: ndcg_at_1000
value: 38.501000000000005
- type: ndcg_at_3
value: 25.773000000000003
- type: ndcg_at_5
value: 27.462999999999997
- type: precision_at_1
value: 21.473
- type: precision_at_10
value: 5.482
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.205
- type: precision_at_5
value: 8.692
- type: recall_at_1
value: 17.777
- type: recall_at_10
value: 40.582
- type: recall_at_100
value: 66.305
- type: recall_at_1000
value: 85.636
- type: recall_at_3
value: 28.687
- type: recall_at_5
value: 33.089
- type: map_at_1
value: 26.677
- type: map_at_10
value: 36.309000000000005
- type: map_at_100
value: 37.403999999999996
- type: map_at_1000
value: 37.496
- type: map_at_3
value: 33.382
- type: map_at_5
value: 34.98
- type: mrr_at_1
value: 31.343
- type: mrr_at_10
value: 40.549
- type: mrr_at_100
value: 41.342
- type: mrr_at_1000
value: 41.397
- type: mrr_at_3
value: 38.029
- type: mrr_at_5
value: 39.451
- type: ndcg_at_1
value: 31.343
- type: ndcg_at_10
value: 42.1
- type: ndcg_at_100
value: 47.089999999999996
- type: ndcg_at_1000
value: 49.222
- type: ndcg_at_3
value: 36.836999999999996
- type: ndcg_at_5
value: 39.21
- type: precision_at_1
value: 31.343
- type: precision_at_10
value: 7.164
- type: precision_at_100
value: 1.0959999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.915
- type: precision_at_5
value: 11.940000000000001
- type: recall_at_1
value: 26.677
- type: recall_at_10
value: 55.54599999999999
- type: recall_at_100
value: 77.094
- type: recall_at_1000
value: 92.01
- type: recall_at_3
value: 41.191
- type: recall_at_5
value: 47.006
- type: map_at_1
value: 24.501
- type: map_at_10
value: 33.102
- type: map_at_100
value: 34.676
- type: map_at_1000
value: 34.888000000000005
- type: map_at_3
value: 29.944
- type: map_at_5
value: 31.613999999999997
- type: mrr_at_1
value: 29.447000000000003
- type: mrr_at_10
value: 37.996
- type: mrr_at_100
value: 38.946
- type: mrr_at_1000
value: 38.995000000000005
- type: mrr_at_3
value: 35.079
- type: mrr_at_5
value: 36.69
- type: ndcg_at_1
value: 29.447000000000003
- type: ndcg_at_10
value: 39.232
- type: ndcg_at_100
value: 45.247
- type: ndcg_at_1000
value: 47.613
- type: ndcg_at_3
value: 33.922999999999995
- type: ndcg_at_5
value: 36.284
- type: precision_at_1
value: 29.447000000000003
- type: precision_at_10
value: 7.648000000000001
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 16.008
- type: precision_at_5
value: 11.779
- type: recall_at_1
value: 24.501
- type: recall_at_10
value: 51.18899999999999
- type: recall_at_100
value: 78.437
- type: recall_at_1000
value: 92.842
- type: recall_at_3
value: 35.808
- type: recall_at_5
value: 42.197
- type: map_at_1
value: 22.039
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.275
- type: map_at_1000
value: 31.379
- type: map_at_3
value: 27.98
- type: map_at_5
value: 29.358
- type: mrr_at_1
value: 24.03
- type: mrr_at_10
value: 32.568000000000005
- type: mrr_at_100
value: 33.403
- type: mrr_at_1000
value: 33.475
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 31.796000000000003
- type: ndcg_at_1
value: 24.03
- type: ndcg_at_10
value: 35.198
- type: ndcg_at_100
value: 39.668
- type: ndcg_at_1000
value: 42.296
- type: ndcg_at_3
value: 30.709999999999997
- type: ndcg_at_5
value: 33.024
- type: precision_at_1
value: 24.03
- type: precision_at_10
value: 5.564
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 13.309000000000001
- type: precision_at_5
value: 9.39
- type: recall_at_1
value: 22.039
- type: recall_at_10
value: 47.746
- type: recall_at_100
value: 68.23599999999999
- type: recall_at_1000
value: 87.852
- type: recall_at_3
value: 35.852000000000004
- type: recall_at_5
value: 41.410000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.692999999999998
- type: map_at_10
value: 26.903
- type: map_at_100
value: 28.987000000000002
- type: map_at_1000
value: 29.176999999999996
- type: map_at_3
value: 22.137
- type: map_at_5
value: 24.758
- type: mrr_at_1
value: 35.57
- type: mrr_at_10
value: 47.821999999999996
- type: mrr_at_100
value: 48.608000000000004
- type: mrr_at_1000
value: 48.638999999999996
- type: mrr_at_3
value: 44.452000000000005
- type: mrr_at_5
value: 46.546
- type: ndcg_at_1
value: 35.57
- type: ndcg_at_10
value: 36.567
- type: ndcg_at_100
value: 44.085
- type: ndcg_at_1000
value: 47.24
- type: ndcg_at_3
value: 29.964000000000002
- type: ndcg_at_5
value: 32.511
- type: precision_at_1
value: 35.57
- type: precision_at_10
value: 11.485
- type: precision_at_100
value: 1.9619999999999997
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 22.237000000000002
- type: precision_at_5
value: 17.471999999999998
- type: recall_at_1
value: 15.692999999999998
- type: recall_at_10
value: 43.056
- type: recall_at_100
value: 68.628
- type: recall_at_1000
value: 86.075
- type: recall_at_3
value: 26.918999999999997
- type: recall_at_5
value: 34.14
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.53
- type: map_at_10
value: 20.951
- type: map_at_100
value: 30.136000000000003
- type: map_at_1000
value: 31.801000000000002
- type: map_at_3
value: 15.021
- type: map_at_5
value: 17.471999999999998
- type: mrr_at_1
value: 71.0
- type: mrr_at_10
value: 79.176
- type: mrr_at_100
value: 79.418
- type: mrr_at_1000
value: 79.426
- type: mrr_at_3
value: 78.125
- type: mrr_at_5
value: 78.61200000000001
- type: ndcg_at_1
value: 58.5
- type: ndcg_at_10
value: 44.106
- type: ndcg_at_100
value: 49.268
- type: ndcg_at_1000
value: 56.711999999999996
- type: ndcg_at_3
value: 48.934
- type: ndcg_at_5
value: 45.826
- type: precision_at_1
value: 71.0
- type: precision_at_10
value: 35.0
- type: precision_at_100
value: 11.360000000000001
- type: precision_at_1000
value: 2.046
- type: precision_at_3
value: 52.833
- type: precision_at_5
value: 44.15
- type: recall_at_1
value: 9.53
- type: recall_at_10
value: 26.811
- type: recall_at_100
value: 55.916999999999994
- type: recall_at_1000
value: 79.973
- type: recall_at_3
value: 16.413
- type: recall_at_5
value: 19.980999999999998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.519999999999996
- type: f1
value: 46.36601294761231
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.413
- type: map_at_10
value: 83.414
- type: map_at_100
value: 83.621
- type: map_at_1000
value: 83.635
- type: map_at_3
value: 82.337
- type: map_at_5
value: 83.039
- type: mrr_at_1
value: 80.19800000000001
- type: mrr_at_10
value: 87.715
- type: mrr_at_100
value: 87.778
- type: mrr_at_1000
value: 87.779
- type: mrr_at_3
value: 87.106
- type: mrr_at_5
value: 87.555
- type: ndcg_at_1
value: 80.19800000000001
- type: ndcg_at_10
value: 87.182
- type: ndcg_at_100
value: 87.90299999999999
- type: ndcg_at_1000
value: 88.143
- type: ndcg_at_3
value: 85.60600000000001
- type: ndcg_at_5
value: 86.541
- type: precision_at_1
value: 80.19800000000001
- type: precision_at_10
value: 10.531
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.933
- type: precision_at_5
value: 20.429
- type: recall_at_1
value: 74.413
- type: recall_at_10
value: 94.363
- type: recall_at_100
value: 97.165
- type: recall_at_1000
value: 98.668
- type: recall_at_3
value: 90.108
- type: recall_at_5
value: 92.52
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.701
- type: map_at_10
value: 37.122
- type: map_at_100
value: 39.178000000000004
- type: map_at_1000
value: 39.326
- type: map_at_3
value: 32.971000000000004
- type: map_at_5
value: 35.332
- type: mrr_at_1
value: 44.753
- type: mrr_at_10
value: 53.452
- type: mrr_at_100
value: 54.198
- type: mrr_at_1000
value: 54.225
- type: mrr_at_3
value: 50.952
- type: mrr_at_5
value: 52.464
- type: ndcg_at_1
value: 44.753
- type: ndcg_at_10
value: 45.021
- type: ndcg_at_100
value: 52.028
- type: ndcg_at_1000
value: 54.596000000000004
- type: ndcg_at_3
value: 41.622
- type: ndcg_at_5
value: 42.736000000000004
- type: precision_at_1
value: 44.753
- type: precision_at_10
value: 12.284
- type: precision_at_100
value: 1.955
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 27.828999999999997
- type: precision_at_5
value: 20.061999999999998
- type: recall_at_1
value: 22.701
- type: recall_at_10
value: 51.432
- type: recall_at_100
value: 77.009
- type: recall_at_1000
value: 92.511
- type: recall_at_3
value: 37.919000000000004
- type: recall_at_5
value: 44.131
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.189
- type: map_at_10
value: 66.24600000000001
- type: map_at_100
value: 67.098
- type: map_at_1000
value: 67.149
- type: map_at_3
value: 62.684
- type: map_at_5
value: 64.974
- type: mrr_at_1
value: 80.378
- type: mrr_at_10
value: 86.127
- type: mrr_at_100
value: 86.29299999999999
- type: mrr_at_1000
value: 86.297
- type: mrr_at_3
value: 85.31400000000001
- type: mrr_at_5
value: 85.858
- type: ndcg_at_1
value: 80.378
- type: ndcg_at_10
value: 74.101
- type: ndcg_at_100
value: 76.993
- type: ndcg_at_1000
value: 77.948
- type: ndcg_at_3
value: 69.232
- type: ndcg_at_5
value: 72.04599999999999
- type: precision_at_1
value: 80.378
- type: precision_at_10
value: 15.595999999999998
- type: precision_at_100
value: 1.7840000000000003
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 44.884
- type: precision_at_5
value: 29.145
- type: recall_at_1
value: 40.189
- type: recall_at_10
value: 77.981
- type: recall_at_100
value: 89.21
- type: recall_at_1000
value: 95.48299999999999
- type: recall_at_3
value: 67.326
- type: recall_at_5
value: 72.863
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 92.84599999999999
- type: ap
value: 89.4710787567357
- type: f1
value: 92.83752676932258
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.132
- type: map_at_10
value: 35.543
- type: map_at_100
value: 36.702
- type: map_at_1000
value: 36.748999999999995
- type: map_at_3
value: 31.737
- type: map_at_5
value: 33.927
- type: mrr_at_1
value: 23.782
- type: mrr_at_10
value: 36.204
- type: mrr_at_100
value: 37.29
- type: mrr_at_1000
value: 37.330999999999996
- type: mrr_at_3
value: 32.458999999999996
- type: mrr_at_5
value: 34.631
- type: ndcg_at_1
value: 23.782
- type: ndcg_at_10
value: 42.492999999999995
- type: ndcg_at_100
value: 47.985
- type: ndcg_at_1000
value: 49.141
- type: ndcg_at_3
value: 34.748000000000005
- type: ndcg_at_5
value: 38.651
- type: precision_at_1
value: 23.782
- type: precision_at_10
value: 6.665
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.776
- type: precision_at_5
value: 10.84
- type: recall_at_1
value: 23.132
- type: recall_at_10
value: 63.794
- type: recall_at_100
value: 89.027
- type: recall_at_1000
value: 97.807
- type: recall_at_3
value: 42.765
- type: recall_at_5
value: 52.11
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.59188326493388
- type: f1
value: 94.3842594786827
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.49384404924761
- type: f1
value: 59.7580539534629
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.56220578345663
- type: f1
value: 75.27228165561478
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.53463349024884
- type: f1
value: 80.4893958236536
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.56100273484962
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.470380028839607
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.06102792457849
- type: mrr
value: 33.30709199672238
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.776999999999999
- type: map_at_10
value: 14.924000000000001
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.538999999999998
- type: map_at_3
value: 10.982
- type: map_at_5
value: 12.679000000000002
- type: mrr_at_1
value: 47.988
- type: mrr_at_10
value: 57.232000000000006
- type: mrr_at_100
value: 57.818999999999996
- type: mrr_at_1000
value: 57.847
- type: mrr_at_3
value: 54.901999999999994
- type: mrr_at_5
value: 56.481
- type: ndcg_at_1
value: 46.594
- type: ndcg_at_10
value: 38.129000000000005
- type: ndcg_at_100
value: 35.54
- type: ndcg_at_1000
value: 44.172
- type: ndcg_at_3
value: 43.025999999999996
- type: ndcg_at_5
value: 41.052
- type: precision_at_1
value: 47.988
- type: precision_at_10
value: 28.111000000000004
- type: precision_at_100
value: 8.929
- type: precision_at_1000
value: 2.185
- type: precision_at_3
value: 40.144000000000005
- type: precision_at_5
value: 35.232
- type: recall_at_1
value: 6.776999999999999
- type: recall_at_10
value: 19.289
- type: recall_at_100
value: 36.359
- type: recall_at_1000
value: 67.54
- type: recall_at_3
value: 11.869
- type: recall_at_5
value: 14.999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.108000000000004
- type: map_at_10
value: 47.126000000000005
- type: map_at_100
value: 48.171
- type: map_at_1000
value: 48.199
- type: map_at_3
value: 42.734
- type: map_at_5
value: 45.362
- type: mrr_at_1
value: 34.936
- type: mrr_at_10
value: 49.571
- type: mrr_at_100
value: 50.345
- type: mrr_at_1000
value: 50.363
- type: mrr_at_3
value: 45.959
- type: mrr_at_5
value: 48.165
- type: ndcg_at_1
value: 34.936
- type: ndcg_at_10
value: 55.028999999999996
- type: ndcg_at_100
value: 59.244
- type: ndcg_at_1000
value: 59.861
- type: ndcg_at_3
value: 46.872
- type: ndcg_at_5
value: 51.217999999999996
- type: precision_at_1
value: 34.936
- type: precision_at_10
value: 9.099
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 21.456
- type: precision_at_5
value: 15.411
- type: recall_at_1
value: 31.108000000000004
- type: recall_at_10
value: 76.53999999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.947
- type: recall_at_3
value: 55.572
- type: recall_at_5
value: 65.525
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.56400000000001
- type: map_at_10
value: 85.482
- type: map_at_100
value: 86.114
- type: map_at_1000
value: 86.13
- type: map_at_3
value: 82.607
- type: map_at_5
value: 84.405
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.304
- type: mrr_at_100
value: 88.399
- type: mrr_at_1000
value: 88.399
- type: mrr_at_3
value: 87.37
- type: mrr_at_5
value: 88.024
- type: ndcg_at_1
value: 82.45
- type: ndcg_at_10
value: 89.06500000000001
- type: ndcg_at_100
value: 90.232
- type: ndcg_at_1000
value: 90.305
- type: ndcg_at_3
value: 86.375
- type: ndcg_at_5
value: 87.85300000000001
- type: precision_at_1
value: 82.45
- type: precision_at_10
value: 13.486999999999998
- type: precision_at_100
value: 1.534
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.813
- type: precision_at_5
value: 24.773999999999997
- type: recall_at_1
value: 71.56400000000001
- type: recall_at_10
value: 95.812
- type: recall_at_100
value: 99.7
- type: recall_at_1000
value: 99.979
- type: recall_at_3
value: 87.966
- type: recall_at_5
value: 92.268
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 57.241876648614145
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.66212576446223
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.308
- type: map_at_10
value: 13.803
- type: map_at_100
value: 16.176
- type: map_at_1000
value: 16.561
- type: map_at_3
value: 9.761000000000001
- type: map_at_5
value: 11.802
- type: mrr_at_1
value: 26.200000000000003
- type: mrr_at_10
value: 37.621
- type: mrr_at_100
value: 38.767
- type: mrr_at_1000
value: 38.815
- type: mrr_at_3
value: 34.117
- type: mrr_at_5
value: 36.107
- type: ndcg_at_1
value: 26.200000000000003
- type: ndcg_at_10
value: 22.64
- type: ndcg_at_100
value: 31.567
- type: ndcg_at_1000
value: 37.623
- type: ndcg_at_3
value: 21.435000000000002
- type: ndcg_at_5
value: 18.87
- type: precision_at_1
value: 26.200000000000003
- type: precision_at_10
value: 11.74
- type: precision_at_100
value: 2.465
- type: precision_at_1000
value: 0.391
- type: precision_at_3
value: 20.033
- type: precision_at_5
value: 16.64
- type: recall_at_1
value: 5.308
- type: recall_at_10
value: 23.794999999999998
- type: recall_at_100
value: 50.015
- type: recall_at_1000
value: 79.283
- type: recall_at_3
value: 12.178
- type: recall_at_5
value: 16.882
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.93231134675553
- type: cos_sim_spearman
value: 81.68319292603205
- type: euclidean_pearson
value: 81.8396814380367
- type: euclidean_spearman
value: 81.24641903349945
- type: manhattan_pearson
value: 81.84698799204274
- type: manhattan_spearman
value: 81.24269997904105
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.73241671587446
- type: cos_sim_spearman
value: 79.05091082971826
- type: euclidean_pearson
value: 83.91146869578044
- type: euclidean_spearman
value: 79.87978465370936
- type: manhattan_pearson
value: 83.90888338917678
- type: manhattan_spearman
value: 79.87482848584241
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.14970731146177
- type: cos_sim_spearman
value: 86.37363490084627
- type: euclidean_pearson
value: 83.02154218530433
- type: euclidean_spearman
value: 83.80258761957367
- type: manhattan_pearson
value: 83.01664495119347
- type: manhattan_spearman
value: 83.77567458007952
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.40474139886784
- type: cos_sim_spearman
value: 82.77768789165984
- type: euclidean_pearson
value: 80.7065877443695
- type: euclidean_spearman
value: 81.375940662505
- type: manhattan_pearson
value: 80.6507552270278
- type: manhattan_spearman
value: 81.32782179098741
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.08585968722274
- type: cos_sim_spearman
value: 88.03110031451399
- type: euclidean_pearson
value: 85.74012019602384
- type: euclidean_spearman
value: 86.13592849438209
- type: manhattan_pearson
value: 85.74404842369206
- type: manhattan_spearman
value: 86.14492318960154
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.95069052788875
- type: cos_sim_spearman
value: 86.4867991595147
- type: euclidean_pearson
value: 84.31013325754635
- type: euclidean_spearman
value: 85.01529258006482
- type: manhattan_pearson
value: 84.26995570085374
- type: manhattan_spearman
value: 84.96982104986162
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.54617647971897
- type: cos_sim_spearman
value: 87.49834181751034
- type: euclidean_pearson
value: 86.01015322577122
- type: euclidean_spearman
value: 84.63362652063199
- type: manhattan_pearson
value: 86.13807574475706
- type: manhattan_spearman
value: 84.7772370721132
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.20047755786615
- type: cos_sim_spearman
value: 67.05324077987636
- type: euclidean_pearson
value: 66.91930642976601
- type: euclidean_spearman
value: 65.21491856099105
- type: manhattan_pearson
value: 66.78756851976624
- type: manhattan_spearman
value: 65.12356257740728
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.19852871539686
- type: cos_sim_spearman
value: 87.5161895296395
- type: euclidean_pearson
value: 84.59848645207485
- type: euclidean_spearman
value: 85.26427328757919
- type: manhattan_pearson
value: 84.59747366996524
- type: manhattan_spearman
value: 85.24045855146915
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.63320317811032
- type: mrr
value: 96.26242947321379
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.928000000000004
- type: map_at_10
value: 70.112
- type: map_at_100
value: 70.59299999999999
- type: map_at_1000
value: 70.623
- type: map_at_3
value: 66.846
- type: map_at_5
value: 68.447
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 71.212
- type: mrr_at_100
value: 71.616
- type: mrr_at_1000
value: 71.64500000000001
- type: mrr_at_3
value: 68.77799999999999
- type: mrr_at_5
value: 70.094
- type: ndcg_at_1
value: 64.0
- type: ndcg_at_10
value: 74.607
- type: ndcg_at_100
value: 76.416
- type: ndcg_at_1000
value: 77.102
- type: ndcg_at_3
value: 69.126
- type: ndcg_at_5
value: 71.41300000000001
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 9.933
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.556
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 60.928000000000004
- type: recall_at_10
value: 87.322
- type: recall_at_100
value: 94.833
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 72.628
- type: recall_at_5
value: 78.428
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.86237623762376
- type: cos_sim_ap
value: 96.72586477206649
- type: cos_sim_f1
value: 93.01858362631845
- type: cos_sim_precision
value: 93.4409687184662
- type: cos_sim_recall
value: 92.60000000000001
- type: dot_accuracy
value: 99.78019801980199
- type: dot_ap
value: 93.72748205246228
- type: dot_f1
value: 89.04109589041096
- type: dot_precision
value: 87.16475095785441
- type: dot_recall
value: 91.0
- type: euclidean_accuracy
value: 99.85445544554456
- type: euclidean_ap
value: 96.6661459876145
- type: euclidean_f1
value: 92.58337481333997
- type: euclidean_precision
value: 92.17046580773042
- type: euclidean_recall
value: 93.0
- type: manhattan_accuracy
value: 99.85445544554456
- type: manhattan_ap
value: 96.6883549244056
- type: manhattan_f1
value: 92.57598405580468
- type: manhattan_precision
value: 92.25422045680239
- type: manhattan_recall
value: 92.9
- type: max_accuracy
value: 99.86237623762376
- type: max_ap
value: 96.72586477206649
- type: max_f1
value: 93.01858362631845
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.39930057069995
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.96398659903402
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.946944700355395
- type: mrr
value: 56.97151398438164
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.541657650692905
- type: cos_sim_spearman
value: 31.605804192286303
- type: dot_pearson
value: 28.26905996736398
- type: dot_spearman
value: 27.864801765851187
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22599999999999998
- type: map_at_10
value: 1.8870000000000002
- type: map_at_100
value: 9.78
- type: map_at_1000
value: 22.514
- type: map_at_3
value: 0.6669999999999999
- type: map_at_5
value: 1.077
- type: mrr_at_1
value: 82.0
- type: mrr_at_10
value: 89.86699999999999
- type: mrr_at_100
value: 89.86699999999999
- type: mrr_at_1000
value: 89.86699999999999
- type: mrr_at_3
value: 89.667
- type: mrr_at_5
value: 89.667
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 74.818
- type: ndcg_at_100
value: 53.715999999999994
- type: ndcg_at_1000
value: 47.082
- type: ndcg_at_3
value: 82.134
- type: ndcg_at_5
value: 79.81899999999999
- type: precision_at_1
value: 82.0
- type: precision_at_10
value: 78.0
- type: precision_at_100
value: 54.48
- type: precision_at_1000
value: 20.518
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 85.2
- type: recall_at_1
value: 0.22599999999999998
- type: recall_at_10
value: 2.072
- type: recall_at_100
value: 13.013
- type: recall_at_1000
value: 43.462
- type: recall_at_3
value: 0.695
- type: recall_at_5
value: 1.139
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.328
- type: map_at_10
value: 9.795
- type: map_at_100
value: 15.801000000000002
- type: map_at_1000
value: 17.23
- type: map_at_3
value: 4.734
- type: map_at_5
value: 6.644
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 46.902
- type: mrr_at_100
value: 47.495
- type: mrr_at_1000
value: 47.495
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 44.218
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 24.806
- type: ndcg_at_100
value: 36.419000000000004
- type: ndcg_at_1000
value: 47.272999999999996
- type: ndcg_at_3
value: 25.666
- type: ndcg_at_5
value: 25.448999999999998
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 23.061
- type: precision_at_100
value: 7.714
- type: precision_at_1000
value: 1.484
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.328
- type: recall_at_10
value: 16.524
- type: recall_at_100
value: 47.179
- type: recall_at_1000
value: 81.22200000000001
- type: recall_at_3
value: 5.745
- type: recall_at_5
value: 9.339
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.9142
- type: ap
value: 14.335574772555415
- type: f1
value: 54.62839595194111
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.94340690435768
- type: f1
value: 60.286487936731916
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.26597708987974
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.48882398521786
- type: cos_sim_ap
value: 79.04326607602204
- type: cos_sim_f1
value: 71.64566826860633
- type: cos_sim_precision
value: 70.55512918905092
- type: cos_sim_recall
value: 72.77044854881267
- type: dot_accuracy
value: 84.19264469213805
- type: dot_ap
value: 67.96360043562528
- type: dot_f1
value: 64.06418393006827
- type: dot_precision
value: 58.64941898706424
- type: dot_recall
value: 70.58047493403694
- type: euclidean_accuracy
value: 87.45902127913214
- type: euclidean_ap
value: 78.9742237648272
- type: euclidean_f1
value: 71.5553235908142
- type: euclidean_precision
value: 70.77955601445535
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.41729749061214
- type: manhattan_ap
value: 78.90073137580596
- type: manhattan_f1
value: 71.3942611553533
- type: manhattan_precision
value: 68.52705653967483
- type: manhattan_recall
value: 74.51187335092348
- type: max_accuracy
value: 87.48882398521786
- type: max_ap
value: 79.04326607602204
- type: max_f1
value: 71.64566826860633
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.68125897465751
- type: cos_sim_ap
value: 85.6003454431979
- type: cos_sim_f1
value: 77.6957163958641
- type: cos_sim_precision
value: 73.0110366307807
- type: cos_sim_recall
value: 83.02279026793964
- type: dot_accuracy
value: 87.7672992587418
- type: dot_ap
value: 82.4971301112899
- type: dot_f1
value: 75.90528233151184
- type: dot_precision
value: 72.0370626469368
- type: dot_recall
value: 80.21250384970742
- type: euclidean_accuracy
value: 88.4503434625684
- type: euclidean_ap
value: 84.91949884748384
- type: euclidean_f1
value: 76.92365018444684
- type: euclidean_precision
value: 74.53245721712759
- type: euclidean_recall
value: 79.47336002463813
- type: manhattan_accuracy
value: 88.47556952691427
- type: manhattan_ap
value: 84.8963689101517
- type: manhattan_f1
value: 76.85901249256395
- type: manhattan_precision
value: 74.31693989071039
- type: manhattan_recall
value: 79.58115183246073
- type: max_accuracy
value: 88.68125897465751
- type: max_ap
value: 85.6003454431979
- type: max_f1
value: 77.6957163958641
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
</p>
</h4>
For more details, please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, or semantic search.
It can also be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker models**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
- **Updated embedding models**: release `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for relevant passages for a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Different from the embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by other simpler models.
For example, use a bge embedding model to retrieve the top 100 relevant documents, and then use a bge reranker to re-rank those 100 documents to get the final top-3 results.
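As a minimal sketch of this retrieve-then-rerank pipeline (the corpus, query, candidate count, and model choices below are illustrative placeholders, not a prescribed setup):

```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

# Hypothetical corpus and query, used only for illustration.
corpus = [
    "The giant panda is a bear species endemic to China.",
    "BGE models map text to dense vectors for retrieval.",
    "Football is the world's most popular sport.",
]
query = "What is a panda?"

# Step 1: retrieve candidates with a bge embedding model (bi-encoder).
embedder = FlagModel(
    'BAAI/bge-large-en-v1.5',
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
    use_fp16=True,
)
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
retrieval_scores = (q_emb @ p_emb.T)[0]
top_k = retrieval_scores.argsort()[::-1][:2]   # keep 2 candidates here; e.g. 100 in practice

# Step 2: re-rank the candidates with the cross-encoder reranker.
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in top_k])
best = top_k[int(np.argmax(rerank_scores))]
print("Best passage:", corpus[best])
```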
All models have been uploaded to the Huggingface Hub; you can find them at https://huggingface.co/BAAI.
If you cannot access the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning using a temperature of 0.01,
the similarity distribution of the current BGE models lies roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
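For example, a minimal sketch of such threshold-based filtering (the sentences and the 0.8 threshold are only assumptions; pick the threshold from the similarity distribution on your own data):

```python
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
pairs = [("样例数据-1", "样例数据-2"), ("样例数据-1", "样例文档-1")]

threshold = 0.8  # assumed value, tune it on your data
for a, b in pairs:
    emb_a, emb_b = model.encode([a, b])
    score = float(emb_a @ emb_b)   # embeddings are normalized, so this is cosine similarity
    print(a, "|", b, "->", round(score, 3), "similar" if score > threshold else "not similar")
```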
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved the retrieval ability when no instruction is used.
Omitting the instruction causes only a slight degradation in retrieval performance compared with using it.
So, for convenience, you can generate embeddings without instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, the documents/passages do not need the instruction.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For the s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which automatically adds the instruction to each query
# The corpus in a retrieval task can still be encoded with encode() or encode_corpus(), since passages don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
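For instance, a minimal sketch of restricting encoding to a single GPU (the device index `"0"` is just an example; the environment variable must be set before the model is created):

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # use only GPU 0; set to "" to force CPU

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
embeddings = model.encode(["样例数据-1", "样例数据-2"])
```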
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### Usage for Reranker
Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with a cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
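Because the raw score is an unbounded logit, you can optionally squash it into (0, 1) with a sigmoid when a normalized score is more convenient; this is just a post-processing convention, not part of the model:

```python
import math

def normalize(score: float) -> float:
    """Map an unbounded reranker logit into (0, 1) with a sigmoid."""
    return 1 / (1 + math.exp(-score))

# 'scores' is the list returned by compute_score in the snippet above
print([round(normalize(s), 4) for s in scores])
```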
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embedding, which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and then train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text, so the pre-trained model cannot be used for similarity calculation directly; it needs to be fine-tuned.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
The cross-encoder performs full attention over the input pair,
which is more accurate than the embedding model (i.e., bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can easily fine-tune it following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation.
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
| [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BEAR",
"BIOSSES",
"SCIFACT"
] |
knowledgator/gliner-bi-base-v1.0 | knowledgator | token-classification | [
"gliner",
"pytorch",
"NER",
"GLiNER",
"information extraction",
"encoder",
"entity recognition",
"token-classification",
"multilingual",
"dataset:urchade/pile-mistral-v0.1",
"dataset:numind/NuNER",
"dataset:knowledgator/GLINER-multi-task-synthetic-data",
"license:apache-2.0",
"region:us"
] | 2024-08-20T12:43:27 | 2024-08-25T11:39:56 | 63 | 4 | ---
datasets:
- urchade/pile-mistral-v0.1
- numind/NuNER
- knowledgator/GLINER-multi-task-synthetic-data
language:
- multilingual
library_name: gliner
license: apache-2.0
pipeline_tag: token-classification
tags:
- NER
- GLiNER
- information extraction
- encoder
- entity recognition
---
# About
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs), which, despite their flexibility, are costly and too large for resource-constrained scenarios.
This particular version utilizes a bi-encoder architecture, where the textual encoder is [DeBERTa v3 base](microsoft/deberta-v3-base) and the entity label encoder is the sentence transformer [BGE-small-en](https://huggingface.co/BAAI/bge-small-en-v1.5).
Such an architecture brings several advantages over the uni-encoder GLiNER:
* An unlimited number of entities can be recognized at a single time;
* Faster inference if entity embeddings are preprocessed;
* Better generalization to unseen entities;
However, it has some drawbacks, such as a lack of inter-label interactions, which makes it hard for the model to disambiguate semantically similar but contextually different entities.
### Installation & Usage
Install or update the gliner package:
```bash
pip install gliner -U
```
Once you've downloaded the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("knowledgator/gliner-bi-base-v1.0")
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels, threshold=0.3)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
If you have a large number of entities and want to pre-embed them, please refer to the following code snippet:
```python
labels = ["your entities"]
texts = ["your texts"]
entity_embeddings = model.encode_labels(labels, batch_size = 8)
outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)
```
### Benchmarks
Below you can see the table with benchmarking results on various named entity recognition datasets:
| Dataset | Score |
|---------|-------|
| ACE 2004 | 27.3% |
| ACE 2005 | 30.6% |
| AnatEM | 31.8% |
| Broad Tweet Corpus | 66.4% |
| CoNLL 2003 | 61.4% |
| FabNER | 21.8% |
| FindVehicle | 36.4% |
| GENIA_NER | 52.6% |
| HarveyNER | 11.4% |
| MultiNERD | 60.9% |
| Ontonotes | 26.5% |
| PolyglotNER | 43.8% |
| TweetNER7 | 38.0% |
| WikiANN en | 53.7% |
| WikiNeural | 71.5% |
| bc2gm | 59.4% |
| bc4chemd | 50.4% |
| bc5cdr | 68.4% |
| ncbi | 65.6% |
| **Average** | **46.2%** |
|||
| CrossNER_AI | 48.4% |
| CrossNER_literature | 61.9% |
| CrossNER_music | 68.8% |
| CrossNER_politics | 73.2% |
| CrossNER_science | 63.2% |
| mit-movie | 39.6% |
| mit-restaurant | 39.0% |
| **Average (zero-shot benchmark)** | **56.3%** |
### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG). | [
"NAMED_ENTITY_RECOGNITION"
] | [
"ANATEM",
"BC5CDR"
] |
RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2308.03281",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-09-27T05:27:07 | 2024-09-27T06:00:48 | 63 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DeepFeatTextEmbeddingLarge - GGUF
- Model creator: https://huggingface.co/DecisionOptimizationSystemProduction/
- Original model: https://huggingface.co/DecisionOptimizationSystemProduction/DeepFeatTextEmbeddingLarge/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DeepFeatTextEmbeddingLarge.Q2_K.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q2_K.gguf) | Q2_K | 0.7GB |
| [DeepFeatTextEmbeddingLarge.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.IQ3_XS.gguf) | IQ3_XS | 0.77GB |
| [DeepFeatTextEmbeddingLarge.IQ3_S.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.IQ3_S.gguf) | IQ3_S | 0.8GB |
| [DeepFeatTextEmbeddingLarge.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q3_K_S.gguf) | Q3_K_S | 0.8GB |
| [DeepFeatTextEmbeddingLarge.IQ3_M.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.IQ3_M.gguf) | IQ3_M | 0.82GB |
| [DeepFeatTextEmbeddingLarge.Q3_K.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q3_K.gguf) | Q3_K | 0.86GB |
| [DeepFeatTextEmbeddingLarge.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q3_K_M.gguf) | Q3_K_M | 0.86GB |
| [DeepFeatTextEmbeddingLarge.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q3_K_L.gguf) | Q3_K_L | 0.91GB |
| [DeepFeatTextEmbeddingLarge.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.IQ4_XS.gguf) | IQ4_XS | 0.96GB |
| [DeepFeatTextEmbeddingLarge.Q4_0.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q4_0.gguf) | Q4_0 | 0.99GB |
| [DeepFeatTextEmbeddingLarge.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.IQ4_NL.gguf) | IQ4_NL | 1.0GB |
| [DeepFeatTextEmbeddingLarge.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q4_K_S.gguf) | Q4_K_S | 1.0GB |
| [DeepFeatTextEmbeddingLarge.Q4_K.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q4_K.gguf) | Q4_K | 1.04GB |
| [DeepFeatTextEmbeddingLarge.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q4_K_M.gguf) | Q4_K_M | 1.04GB |
| [DeepFeatTextEmbeddingLarge.Q4_1.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q4_1.gguf) | Q4_1 | 1.08GB |
| [DeepFeatTextEmbeddingLarge.Q5_0.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q5_0.gguf) | Q5_0 | 1.17GB |
| [DeepFeatTextEmbeddingLarge.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q5_K_S.gguf) | Q5_K_S | 1.17GB |
| [DeepFeatTextEmbeddingLarge.Q5_K.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q5_K.gguf) | Q5_K | 1.2GB |
| [DeepFeatTextEmbeddingLarge.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q5_K_M.gguf) | Q5_K_M | 1.2GB |
| [DeepFeatTextEmbeddingLarge.Q5_1.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q5_1.gguf) | Q5_1 | 1.26GB |
| [DeepFeatTextEmbeddingLarge.Q6_K.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q6_K.gguf) | Q6_K | 1.36GB |
| [DeepFeatTextEmbeddingLarge.Q8_0.gguf](https://huggingface.co/RichardErkhov/DecisionOptimizationSystemProduction_-_DeepFeatTextEmbeddingLarge-gguf/blob/main/DeepFeatTextEmbeddingLarge.Q8_0.gguf) | Q8_0 | 1.76GB |
Original model description:
---
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
license: apache-2.0
model-index:
- name: gte-qwen2-7B-instruct
results:
- dataset:
config: en
name: MTEB AmazonCounterfactualClassification (en)
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
split: test
type: mteb/amazon_counterfactual
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
task:
type: Classification
- dataset:
config: default
name: MTEB AmazonPolarityClassification
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
split: test
type: mteb/amazon_polarity
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
task:
type: Classification
- dataset:
config: en
name: MTEB AmazonReviewsClassification (en)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
split: test
type: mteb/arguana
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
task:
type: Retrieval
- dataset:
config: default
name: MTEB ArxivClusteringP2P
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
split: test
type: mteb/arxiv-clustering-p2p
metrics:
- type: v_measure
value: 50.511868162026175
task:
type: Clustering
- dataset:
config: default
name: MTEB ArxivClusteringS2S
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
split: test
type: mteb/arxiv-clustering-s2s
metrics:
- type: v_measure
value: 45.007803189284004
task:
type: Clustering
- dataset:
config: default
name: MTEB AskUbuntuDupQuestions
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
split: test
type: mteb/askubuntudupquestions-reranking
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
task:
type: Reranking
- dataset:
config: default
name: MTEB BIOSSES
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
split: test
type: mteb/biosses-sts
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
task:
type: STS
- dataset:
config: default
name: MTEB Banking77Classification
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
split: test
type: mteb/banking77
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
task:
type: Classification
- dataset:
config: default
name: MTEB BiorxivClusteringP2P
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
split: test
type: mteb/biorxiv-clustering-p2p
metrics:
- type: v_measure
value: 43.20754608934859
task:
type: Clustering
- dataset:
config: default
name: MTEB BiorxivClusteringS2S
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
split: test
type: mteb/biorxiv-clustering-s2s
metrics:
- type: v_measure
value: 38.818037697335505
task:
type: Clustering
- dataset:
config: default
name: MTEB CQADupstackAndroidRetrieval
revision: f46a197baaae43b4f621051089b82a364682dfeb
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackEnglishRetrieval
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGamingRetrieval
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackGisRetrieval
revision: 5003b3064772da1887988e05400cf3806fe491f2
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackMathematicaRetrieval
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackPhysicsRetrieval
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackProgrammersRetrieval
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackStatsRetrieval
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackTexRetrieval
revision: 46989137a86843e03a6195de44b09deda022eec7
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackUnixRetrieval
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWebmastersRetrieval
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
task:
type: Retrieval
- dataset:
config: default
name: MTEB CQADupstackWordpressRetrieval
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
split: test
type: BeIR/cqadupstack
metrics:
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
task:
type: Retrieval
- dataset:
config: default
name: MTEB ClimateFEVER
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
split: test
type: mteb/climate-fever
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
task:
type: Retrieval
- dataset:
config: default
name: MTEB DBPedia
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
split: test
type: mteb/dbpedia
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
task:
type: Retrieval
- dataset:
config: default
name: MTEB EmotionClassification
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
split: test
type: mteb/emotion
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
task:
type: Classification
- dataset:
config: default
name: MTEB FEVER
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
split: test
type: mteb/fever
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA2018
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
split: test
type: mteb/fiqa
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA
revision: ab518f4d6fcca38d87c25209f94beba119d02014
split: test
type: mteb/hotpotqa
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
task:
type: Retrieval
- dataset:
config: default
name: MTEB ImdbClassification
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
split: test
type: mteb/imdb
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
task:
type: Classification
- dataset:
config: default
name: MTEB MSMARCO
revision: c5a29a104738b98a9e76336939199e264163d4a0
split: dev
type: mteb/msmarco
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
task:
type: Retrieval
- dataset:
config: en
name: MTEB MTOPDomainClassification (en)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
task:
type: Classification
- dataset:
config: en
name: MTEB MTOPIntentClassification (en)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveIntentClassification (en)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
task:
type: Classification
- dataset:
config: en
name: MTEB MassiveScenarioClassification (en)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
task:
type: Classification
- dataset:
config: default
name: MTEB MedrxivClusteringP2P
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
split: test
type: mteb/medrxiv-clustering-p2p
metrics:
- type: v_measure
value: 39.386760057101945
task:
type: Clustering
- dataset:
config: default
name: MTEB MedrxivClusteringS2S
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
split: test
type: mteb/medrxiv-clustering-s2s
metrics:
- type: v_measure
value: 37.89687154075537
task:
type: Clustering
- dataset:
config: default
name: MTEB MindSmallReranking
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
split: test
type: mteb/mind_small
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
task:
type: Reranking
- dataset:
config: default
name: MTEB NFCorpus
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
split: test
type: mteb/nfcorpus
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
split: test
type: mteb/nq
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
task:
type: Retrieval
- dataset:
config: default
name: MTEB QuoraRetrieval
revision: None
split: test
type: mteb/quora
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB RedditClustering
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
split: test
type: mteb/reddit-clustering
metrics:
- type: v_measure
value: 55.82153952668092
task:
type: Clustering
- dataset:
config: default
name: MTEB RedditClusteringP2P
revision: 282350215ef01743dc01b456c7f5241fa8937f16
split: test
type: mteb/reddit-clustering-p2p
metrics:
- type: v_measure
value: 62.094465801879295
task:
type: Clustering
- dataset:
config: default
name: MTEB SCIDOCS
revision: None
split: test
type: mteb/scidocs
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-R
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
split: test
type: mteb/sickr-sts
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
task:
type: STS
- dataset:
config: default
name: MTEB STS12
revision: a0d554a64d88156834ff5ae9920b964011b16384
split: test
type: mteb/sts12-sts
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
task:
type: STS
- dataset:
config: default
name: MTEB STS13
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
split: test
type: mteb/sts13-sts
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
task:
type: STS
- dataset:
config: default
name: MTEB STS14
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
split: test
type: mteb/sts14-sts
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
task:
type: STS
- dataset:
config: default
name: MTEB STS15
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
split: test
type: mteb/sts15-sts
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
task:
type: STS
- dataset:
config: default
name: MTEB STS16
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
split: test
type: mteb/sts16-sts
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
task:
type: STS
- dataset:
config: en-en
name: MTEB STS17 (en-en)
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
task:
type: STS
- dataset:
config: en
name: MTEB STS22 (en)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
task:
type: STS
- dataset:
config: default
name: MTEB STSBenchmark
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
split: test
type: mteb/stsbenchmark-sts
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
task:
type: STS
- dataset:
config: default
name: MTEB SciDocsRR
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
split: test
type: mteb/scidocs-reranking
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
task:
type: Reranking
- dataset:
config: default
name: MTEB SciFact
revision: 0228b52cf27578f30900b9e5271d331663a030d7
split: test
type: mteb/scifact
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
task:
type: Retrieval
- dataset:
config: default
name: MTEB SprintDuplicateQuestions
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
split: test
type: mteb/sprintduplicatequestions-pairclassification
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
task:
type: PairClassification
- dataset:
config: default
name: MTEB StackExchangeClustering
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
split: test
type: mteb/stackexchange-clustering
metrics:
- type: v_measure
value: 67.65446577183913
task:
type: Clustering
- dataset:
config: default
name: MTEB StackExchangeClusteringP2P
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
split: test
type: mteb/stackexchange-clustering-p2p
metrics:
- type: v_measure
value: 46.30749237193961
task:
type: Clustering
- dataset:
config: default
name: MTEB StackOverflowDupQuestions
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
split: test
type: mteb/stackoverflowdupquestions-reranking
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
task:
type: Reranking
- dataset:
config: default
name: MTEB SummEval
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
split: test
type: mteb/summeval
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
task:
type: Summarization
- dataset:
config: default
name: MTEB TRECCOVID
revision: None
split: test
type: mteb/trec-covid
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
task:
type: Retrieval
- dataset:
config: default
name: MTEB Touche2020
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
split: test
type: mteb/touche2020
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
task:
type: Retrieval
- dataset:
config: default
name: MTEB ToxicConversationsClassification
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
split: test
type: mteb/toxic_conversations_50k
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
task:
type: Classification
- dataset:
config: default
name: MTEB TweetSentimentExtractionClassification
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
split: test
type: mteb/tweet_sentiment_extraction
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
task:
type: Classification
- dataset:
config: default
name: MTEB TwentyNewsgroupsClustering
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
split: test
type: mteb/twentynewsgroups-clustering
metrics:
- type: v_measure
value: 49.581627240203474
task:
type: Clustering
- dataset:
config: default
name: MTEB TwitterSemEval2015
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
split: test
type: mteb/twittersemeval2015-pairclassification
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
task:
type: PairClassification
- dataset:
config: default
name: MTEB TwitterURLCorpus
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
split: test
type: mteb/twitterurlcorpus-pairclassification
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
task:
type: PairClassification
- dataset:
config: default
name: MTEB AFQMC
revision: b44c3b011063adb25877c13823db83bb193913c4
split: validation
type: C-MTEB/AFQMC
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
task:
type: STS
- dataset:
config: default
name: MTEB ATEC
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
split: test
type: C-MTEB/ATEC
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
task:
type: STS
- dataset:
config: zh
name: MTEB AmazonReviewsClassification (zh)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
task:
type: Classification
- dataset:
config: default
name: MTEB BQ
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
split: test
type: C-MTEB/BQ
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
task:
type: STS
- dataset:
config: default
name: MTEB CLSClusteringP2P
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
split: test
type: C-MTEB/CLSClusteringP2P
metrics:
- type: v_measure
value: 45.21317724305628
task:
type: Clustering
- dataset:
config: default
name: MTEB CLSClusteringS2S
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
split: test
type: C-MTEB/CLSClusteringS2S
metrics:
- type: v_measure
value: 42.49825170976724
task:
type: Clustering
- dataset:
config: default
name: MTEB CMedQAv1
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
split: test
type: C-MTEB/CMedQAv1-reranking
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
task:
type: Reranking
- dataset:
config: default
name: MTEB CMedQAv2
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
split: test
type: C-MTEB/CMedQAv2-reranking
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
task:
type: Reranking
- dataset:
config: default
name: MTEB CmedqaRetrieval
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
split: dev
type: C-MTEB/CmedqaRetrieval
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
task:
type: Retrieval
- dataset:
config: default
name: MTEB Cmnli
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
split: validation
type: C-MTEB/CMNLI
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
task:
type: PairClassification
- dataset:
config: default
name: MTEB CovidRetrieval
revision: 1271c7809071a13532e05f25fb53511ffce77117
split: dev
type: C-MTEB/CovidRetrieval
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
task:
type: Retrieval
- dataset:
config: default
name: MTEB DuRetrieval
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
split: dev
type: C-MTEB/DuRetrieval
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
task:
type: Retrieval
- dataset:
config: default
name: MTEB EcomRetrieval
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
split: dev
type: C-MTEB/EcomRetrieval
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
task:
type: Retrieval
- dataset:
config: default
name: MTEB IFlyTek
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
split: validation
type: C-MTEB/IFlyTek-classification
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
task:
type: Classification
- dataset:
config: default
name: MTEB JDReview
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
split: test
type: C-MTEB/JDReview-classification
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
task:
type: Classification
- dataset:
config: default
name: MTEB LCQMC
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
split: test
type: C-MTEB/LCQMC
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
task:
type: STS
- dataset:
config: default
name: MTEB MMarcoReranking
revision: None
split: dev
type: C-MTEB/Mmarco-reranking
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
task:
type: Reranking
- dataset:
config: default
name: MTEB MMarcoRetrieval
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
split: dev
type: C-MTEB/MMarcoRetrieval
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
task:
type: Retrieval
- dataset:
config: zh-CN
name: MTEB MassiveIntentClassification (zh-CN)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
task:
type: Classification
- dataset:
config: zh-CN
name: MTEB MassiveScenarioClassification (zh-CN)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
task:
type: Classification
- dataset:
config: default
name: MTEB MedicalRetrieval
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
split: dev
type: C-MTEB/MedicalRetrieval
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
task:
type: Retrieval
- dataset:
config: default
name: MTEB MultilingualSentiment
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
split: validation
type: C-MTEB/MultilingualSentiment-classification
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
task:
type: Classification
- dataset:
config: default
name: MTEB Ocnli
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
split: validation
type: C-MTEB/OCNLI
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
task:
type: PairClassification
- dataset:
config: default
name: MTEB OnlineShopping
revision: e610f2ebd179a8fda30ae534c3878750a96db120
split: test
type: C-MTEB/OnlineShopping-classification
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
task:
type: Classification
- dataset:
config: default
name: MTEB PAWSX
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
split: test
type: C-MTEB/PAWSX
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
task:
type: STS
- dataset:
config: default
name: MTEB QBQTC
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
split: test
type: C-MTEB/QBQTC
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
task:
type: STS
- dataset:
config: zh
name: MTEB STS22 (zh)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
task:
type: STS
- dataset:
config: default
name: MTEB STSB
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
split: test
type: C-MTEB/STSB
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
task:
type: STS
- dataset:
config: default
name: MTEB T2Reranking
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
split: dev
type: C-MTEB/T2Reranking
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
task:
type: Reranking
- dataset:
config: default
name: MTEB T2Retrieval
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
split: dev
type: C-MTEB/T2Retrieval
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
task:
type: Retrieval
- dataset:
config: default
name: MTEB TNews
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
split: validation
type: C-MTEB/TNews-classification
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
task:
type: Classification
- dataset:
config: default
name: MTEB ThuNewsClusteringP2P
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
split: test
type: C-MTEB/ThuNewsClusteringP2P
metrics:
- type: v_measure
value: 68.23769904483508
task:
type: Clustering
- dataset:
config: default
name: MTEB ThuNewsClusteringS2S
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
split: test
type: C-MTEB/ThuNewsClusteringS2S
metrics:
- type: v_measure
value: 62.50294403136556
task:
type: Clustering
- dataset:
config: default
name: MTEB VideoRetrieval
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
split: dev
type: C-MTEB/VideoRetrieval
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
task:
type: Retrieval
- dataset:
config: default
name: MTEB Waimai
revision: 339287def212450dcaa9df8c22bf93e9980c7023
split: test
type: C-MTEB/waimai-classification
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
task:
type: Classification
- dataset:
config: default
name: MTEB 8TagsClustering
revision: None
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 44.594104491193555
task:
type: Clustering
- dataset:
config: default
name: MTEB AllegroReviews
revision: None
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
task:
type: Classification
- dataset:
config: default
name: MTEB ArguAna-PL
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
split: test
type: clarin-knext/arguana-pl
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
task:
type: Retrieval
- dataset:
config: default
name: MTEB CBD
revision: None
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: None
split: test
type: PL-MTEB/cdsce-pairclassification
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: None
split: test
type: PL-MTEB/cdscr-sts
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
task:
type: STS
- dataset:
config: default
name: MTEB DBPedia-PL
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
split: test
type: clarin-knext/dbpedia-pl
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
task:
type: Retrieval
- dataset:
config: default
name: MTEB FiQA-PL
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
split: test
type: clarin-knext/fiqa-pl
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
task:
type: Retrieval
- dataset:
config: default
name: MTEB HotpotQA-PL
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
split: test
type: clarin-knext/hotpotqa-pl
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
task:
type: Retrieval
- dataset:
config: default
name: MTEB MSMARCO-PL
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
split: test
type: clarin-knext/msmarco-pl
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
task:
type: Retrieval
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
task:
type: Classification
- dataset:
config: default
name: MTEB NFCorpus-PL
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
split: test
type: clarin-knext/nfcorpus-pl
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
task:
type: Retrieval
- dataset:
config: default
name: MTEB NQ-PL
revision: f171245712cf85dd4700b06bef18001578d0ca8d
split: test
type: clarin-knext/nq-pl
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
task:
type: Retrieval
- dataset:
config: default
name: MTEB PAC
revision: None
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
task:
type: Classification
- dataset:
config: default
name: MTEB PPC
revision: None
split: test
type: PL-MTEB/ppc-pairclassification
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
task:
type: PairClassification
- dataset:
config: default
name: MTEB PSC
revision: None
split: test
type: PL-MTEB/psc-pairclassification
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
task:
type: PairClassification
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: None
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: None
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
task:
type: Classification
- dataset:
config: default
name: MTEB Quora-PL
revision: 0be27e93455051e531182b85e85e425aba12e9d4
split: test
type: clarin-knext/quora-pl
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
task:
type: Retrieval
- dataset:
config: default
name: MTEB SCIDOCS-PL
revision: 45452b03f05560207ef19149545f168e596c9337
split: test
type: clarin-knext/scidocs-pl
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
task:
type: Retrieval
- dataset:
config: default
name: MTEB SICK-E-PL
revision: None
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: None
split: test
type: PL-MTEB/sickr-pl-sts
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
task:
type: STS
- dataset:
config: default
name: MTEB SciFact-PL
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
split: test
type: clarin-knext/scifact-pl
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
task:
type: Retrieval
- dataset:
config: default
name: MTEB TRECCOVID-PL
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
split: test
type: clarin-knext/trec-covid-pl
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
task:
type: Retrieval
- dataset:
config: default
name: MTEB AlloProfClusteringP2P
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: v_measure
value: 70.55290063940157
task:
type: Clustering
- dataset:
config: default
name: MTEB AlloProfClusteringS2S
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: v_measure
value: 55.41500719337263
task:
type: Clustering
- dataset:
config: default
name: MTEB AlloprofReranking
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
split: test
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
task:
type: Reranking
- dataset:
config: default
name: MTEB AlloprofRetrieval
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
split: test
type: lyon-nlp/alloprof
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
task:
type: Retrieval
- dataset:
config: fr
name: MTEB AmazonReviewsClassification (fr)
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
split: test
type: mteb/amazon_reviews_multi
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
task:
type: Classification
- dataset:
config: default
name: MTEB BSARDRetrieval
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
split: test
type: maastrichtlawtech/bsard
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
task:
type: Retrieval
- dataset:
config: default
name: MTEB HALClusteringS2S
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
split: test
type: lyon-nlp/clustering-hal-s2s
metrics:
- type: v_measure
value: 28.301882091023288
task:
type: Clustering
- dataset:
config: default
name: MTEB MLSUMClusteringP2P
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
split: test
type: mlsum
metrics:
- type: v_measure
value: 45.26992995191701
task:
type: Clustering
- dataset:
config: default
name: MTEB MLSUMClusteringS2S
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
split: test
type: mlsum
metrics:
- type: v_measure
value: 42.773174876871145
task:
type: Clustering
- dataset:
config: fr
name: MTEB MTOPDomainClassification (fr)
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
split: test
type: mteb/mtop_domain
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
task:
type: Classification
- dataset:
config: fr
name: MTEB MTOPIntentClassification (fr)
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
split: test
type: mteb/mtop_intent
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClassification (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
task:
type: Classification
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringP2P (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: v_measure
value: 71.04138999801822
task:
type: Clustering
- dataset:
config: fra
name: MTEB MasakhaNEWSClusteringS2S (fra)
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
split: test
type: masakhane/masakhanews
metrics:
- type: v_measure
value: 71.7056263158008
task:
type: Clustering
- dataset:
config: fr
name: MTEB MassiveIntentClassification (fr)
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
task:
type: Classification
- dataset:
config: fr
name: MTEB MassiveScenarioClassification (fr)
revision: 7d571f92784cd94a019292a1f45445077d0ef634
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
task:
type: Classification
- dataset:
config: fr
name: MTEB MintakaRetrieval (fr)
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
split: test
type: jinaai/mintakaqa
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
task:
type: Retrieval
- dataset:
config: fr
name: MTEB OpusparcusPC (fr)
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
split: test
type: GEM/opusparcus
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
task:
type: PairClassification
- dataset:
config: fr
name: MTEB PawsX (fr)
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
split: test
type: paws-x
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICKFr
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
split: test
type: Lajavaness/SICK-fr
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
task:
type: STS
- dataset:
config: fr
name: MTEB STS22 (fr)
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
task:
type: STS
- dataset:
config: fr
name: MTEB STSBenchmarkMultilingualSTS (fr)
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
split: test
type: stsb_multi_mt
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
task:
type: STS
- dataset:
config: default
name: MTEB SummEvalFr
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
split: test
type: lyon-nlp/summarization-summeval-fr-p2p
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
task:
type: Summarization
- dataset:
config: default
name: MTEB SyntecReranking
revision: b205c5084a0934ce8af14338bf03feb19499c84d
split: test
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
task:
type: Reranking
- dataset:
config: default
name: MTEB SyntecRetrieval
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
split: test
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
task:
type: Retrieval
- dataset:
config: fr
name: MTEB XPQARetrieval (fr)
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
split: test
type: jinaai/xpqa
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
task:
type: Retrieval
---
## gte-Qwen2-1.5B-instruct
**gte-Qwen2-1.5B-instruct** is the latest model in the gte (General Text Embedding) model family. It is built on the [Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) LLM and uses the same training data and strategies as the [gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) model.
The model incorporates several key advancements:
- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied solely on the query side for streamlined efficiency.
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.
## Model Information
- Model Size: 1.5B
- Embedding Dimension: 1536
- Max Input Tokens: 32k
## Requirements
```
transformers>=4.39.2
flash_attn>=2.5.6
```
## Usage
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Alternatively, you can call `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to use a custom prompt of your choice.
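For example, to supply your own retrieval instruction at encode time (a minimal sketch that reuses the `model`, `queries`, and `documents` defined above; the instruction wording is just an illustrative choice):
```python
# Any one-sentence task description can serve as the instruction (illustrative example)
task = "Given a web search query, retrieve relevant passages that answer the query"

# The instruction is prepended to the queries only; documents are encoded without a prompt
query_embeddings = model.encode(queries, prompt=f"Instruct: {task}\nQuery: ")
document_embeddings = model.encode(documents)

scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```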
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Evaluation
### MTEB & C-MTEB
You can use [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following results of **gte-Qwen2-1.5B-instruct** on MTEB (English) / C-MTEB (Chinese):
| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|:----:|:---------:|:----------:|:----------:|:----------:|
| [bge-base-en-1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - | - | - |
| [bge-large-en-1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - | - | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - | - | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - |
| [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - |
| [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - |
| [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** |
| [**gte-Qwen2-1.5B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | **67.16** | **67.65** | **66.60** | **64.04** |
### GTE Models
The gte series has consistently released two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).
| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|:-------------------------------------------------------------------------------------:|:--------:|:-----: |:---------:|:-------------------------------:|
| [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB |
| [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB |
| [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB |
| [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB |
| [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
| [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
| [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB |
| [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB |
| [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB |
| [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB |
## Cloud API Services
In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, the GTE models are also available as commercial API services on Alibaba Cloud.
- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
## Citation
If you find our paper or models helpful, please consider citing them:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
AntoineBlanot/flan-t5-xxl-classif-3way | AntoineBlanot | zero-shot-classification | [
"transformers",
"pytorch",
"t5",
"zero-shot-classification",
"en",
"dataset:multi_nli",
"dataset:snli",
"dataset:scitail",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-05-11T01:14:08 | 2023-05-18T08:44:17 | 62 | 3 | ---
datasets:
- multi_nli
- snli
- scitail
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: zero-shot-classification
model-index:
- name: AntoineBlanot/flan-t5-xxl-classif-3way
results:
- task:
type: nli
name: Natural Language Inference
dataset:
name: MultiNLI
type: multi_nli
split: validation_matched
metrics:
- type: accuracy
value: 0.9230769230769231
name: Validation (matched) accuracy
- type: f1
value: 0.9225172687920663
name: Validation (matched) f1
- task:
type: nli
name: Natural Language Inference
dataset:
name: MultiNLI
type: multi_nli
split: validation_mismatched
metrics:
- type: accuracy
value: 0.9222945484133441
name: Validation (mismatched) accuracy
- type: f1
value: 0.9216699467726924
name: Validation (mismatched) f1
- task:
type: nli
name: Natural Language Inference
dataset:
name: SNLI
type: snli
split: validation
metrics:
- type: accuracy
value: 0.9418817313554155
name: Validation accuracy
- type: f1
value: 0.9416213776111287
name: Validation f1
- task:
type: nli
name: Natural Language Inference
dataset:
name: SciTail
type: scitail
split: validation
metrics:
- type: accuracy
value: 0.9662576687116564
name: Validation accuracy
- type: f1
value: 0.6471347983817357
name: Validation f1
---
# T5ForSequenceClassification
**T5ForSequenceClassification** adapts the original [T5](https://github.com/google-research/text-to-text-transfer-transformer) architecture for sequence classification tasks.
T5 was originally built for text-to-text tasks and excels at them.
It can handle any NLP task that has been converted to a text-to-text format, including sequence classification!
You can find [here](https://huggingface.co/google/flan-t5-base?text=Premise%3A++At+my+age+you+will+probably+have+learnt+one+lesson.+Hypothesis%3A++It%27s+not+certain+how+many+lessons+you%27ll+learn+by+your+thirties.+Does+the+premise+entail+the+hypothesis%3F) how the original T5 is used for sequence classification tasks.
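As a rough illustration of that text-to-text formulation, the snippet below frames an NLI example as a generation prompt (a minimal sketch assuming the public `google/flan-t5-base` checkpoint and the standard `transformers` pipeline; the prompt wording is only an example):
```python
from transformers import pipeline

# Text-to-text model: classification is framed as generating the answer as text
generator = pipeline("text2text-generation", model="google/flan-t5-base")

prompt = (
    "Premise: At my age you will probably have learnt one lesson. "
    "Hypothesis: It's not certain how many lessons you'll learn by your thirties. "
    "Does the premise entail the hypothesis?"
)

# The model replies with a text label rather than class logits
print(generator(prompt)[0]["generated_text"])
```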
Our motivation for building **T5ForSequenceClassification** is that the full original T5 architecture is not needed for most NLU tasks. Indeed, NLU tasks generally do not require generating text, so a large decoder is unnecessary.
By removing the decoder we can *halve the original number of parameters* (and thus the computation cost) and *efficiently optimize* the network for the given task.
## Table of Contents
0. [Usage](#usage)
1. [Why use T5ForSequenceClassification?](#why-use-t5forsequenceclassification)
2. [T5ForClassification vs T5](#t5forclassification-vs-t5)
3. [Results](#results)
## Usage
**T5ForSequenceClassification** supports the task of zero-shot classification.
It can directly be used for:
- topic classification
- intent recognition
- boolean question answering
- sentiment analysis
- and any other task whose goal is to classify a text...
Since the *T5ForClassification* class is currently not supported by the transformers library, you cannot directly use this model on the Hub.
To use **T5ForSequenceClassification**, you will have to install additional packages and model weights.
You can find instructions [here](https://github.com/AntoineBlanot/zero-nlp).
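As an illustration of the general zero-shot recipe (this is not the `zero-nlp` API itself), classification is typically reduced to NLI by pairing the input text with one hypothesis per candidate label and keeping the label with the highest entailment score:

```python
labels = ["sports", "politics", "technology"]
text = "The new GPU doubles training throughput."

# One (premise, hypothesis) pair per candidate label; an NLI model scores each
# pair and the label behind the highest entailment probability is returned.
pairs = [(text, f"This text is about {label}.") for label in labels]
print(pairs)
```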
## Why use T5ForSequenceClassification?
Models based on the [BERT](https://huggingface.co/bert-large-uncased) architecture like [RoBERTa](https://huggingface.co/roberta-large) and [DeBERTa](https://huggingface.co/microsoft/deberta-v2-xxlarge) have shown very strong performance on sequence classification tasks and are still widely used today.
However, those models only scale up to ~1.5B parameters (DeBERTa xxlarge), resulting in limited knowledge compared to bigger models.
On the other hand, models based on the T5 architecture scale up to ~11B parameters (t5-xxl), and innovations with this architecture are very recent and keep improving ([mT5](https://huggingface.co/google/mt5-xxl), [Flan-T5](https://huggingface.co/google/flan-t5-xxl), [UL2](https://huggingface.co/google/ul2), [Flan-UL2](https://huggingface.co/google/flan-ul2), and probably more...).
## T5ForClassification vs T5
**T5ForClassification** Architecture:
- Encoder: same as original T5
- Decoder: only the first layer (for pooling purpose)
- Classification head: simple Linear layer on top of the decoder
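A rough PyTorch sketch of this layout (an illustration under the assumptions above, not the author's actual implementation; see the zero-nlp repository for that) could look like:

```python
import torch
import torch.nn as nn
from transformers import T5Model


class T5ForClassificationSketch(nn.Module):
    """Sketch of the layout above: full T5 encoder, a single decoder layer
    used as a pooler, and a linear classification head."""

    def __init__(self, model_name: str = "t5-small", num_labels: int = 3):
        super().__init__()
        t5 = T5Model.from_pretrained(model_name)
        self.encoder = t5.encoder
        self.decoder = t5.decoder
        # Keep only the first decoder block (pooling purpose).
        self.decoder.block = self.decoder.block[:1]
        self.classifier = nn.Linear(t5.config.d_model, num_labels)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        enc = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # A single start token (T5 reuses the pad token id, 0) queries the truncated
        # decoder; its output serves as a pooled representation of the sequence.
        start = torch.zeros((input_ids.size(0), 1), dtype=torch.long, device=input_ids.device)
        dec = self.decoder(
            input_ids=start,
            encoder_hidden_states=enc.last_hidden_state,
            encoder_attention_mask=attention_mask,
        )
        pooled = dec.last_hidden_state[:, 0]
        return self.classifier(pooled)  # interpretable class logits
```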
Benefits and Drawbacks:
- (**+**) Keeps T5 encoding strength
- (**+**) Parameter count is halved
- (**+**) Interpretable outputs (class logits)
- (**+**) No generation mistakes and faster prediction (no generation latency)
- (**-**) Loses text-to-text ability
## Results
Results on the validation data of **training tasks**:
| Dataset | Accuracy | F1 |
|:-------:|:--------:|:--:|
| MNLI (m)| 0.923 | 0.923 |
| MNLI (mm) | 0.922 | 0.922 |
| SNLI | 0.942 | 0.942 |
| SciTail | 0.966 | 0.647 |
Results on validation data of **unseen tasks** (zero-shot):
| Dataset | Accuracy | F1 |
|:-------:|:--------:|:--:|
| ?| ? | ? |
Special thanks to [philschmid](https://huggingface.co/philschmid) for making a Flan-T5-xxl [checkpoint](https://huggingface.co/philschmid/flan-t5-xxl-sharded-fp16) in fp16.
| [
"QUESTION_ANSWERING"
] | [
"SCITAIL"
] |
yixuan-chia/multilingual-e5-large-instruct-gguf | yixuan-chia | null | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2402.05672",
"arxiv:2401.00368",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | 2024-10-21T17:43:41 | 2024-10-21T17:48:51 | 62 | 0 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- sentence-transformers
- transformers
model-index:
- name: multilingual-e5-large-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.23880597014924
- type: ap
value: 39.07351965022687
- type: f1
value: 70.04836733862683
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.71306209850107
- type: ap
value: 79.01499914759529
- type: f1
value: 64.81951817560703
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.85307346326837
- type: ap
value: 22.447519885878737
- type: f1
value: 61.0162730745633
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.04925053533191
- type: ap
value: 23.44983217128922
- type: f1
value: 62.5723230907759
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.28742500000001
- type: ap
value: 94.8449918887462
- type: f1
value: 96.28680923610432
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 56.716
- type: f1
value: 55.76510398266401
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 52.99999999999999
- type: f1
value: 52.00829994765178
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.806000000000004
- type: f1
value: 48.082345914983634
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.507999999999996
- type: f1
value: 47.68752844642045
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.709999999999994
- type: f1
value: 47.05870376637181
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.662000000000006
- type: f1
value: 43.42371965372771
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.721
- type: map_at_10
value: 49.221
- type: map_at_100
value: 49.884
- type: map_at_1000
value: 49.888
- type: map_at_3
value: 44.31
- type: map_at_5
value: 47.276
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 49.5
- type: mrr_at_100
value: 50.163000000000004
- type: mrr_at_1000
value: 50.166
- type: mrr_at_3
value: 44.618
- type: mrr_at_5
value: 47.541
- type: ndcg_at_1
value: 31.721
- type: ndcg_at_10
value: 58.384
- type: ndcg_at_100
value: 61.111000000000004
- type: ndcg_at_1000
value: 61.187999999999995
- type: ndcg_at_3
value: 48.386
- type: ndcg_at_5
value: 53.708999999999996
- type: precision_at_1
value: 31.721
- type: precision_at_10
value: 8.741
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.609
- type: recall_at_1
value: 31.721
- type: recall_at_10
value: 87.411
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.044
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.40419580759799
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.48593255007969
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.889179122289995
- type: mrr
value: 77.61146286769556
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.15075203727929
- type: cos_sim_spearman
value: 86.9622224570873
- type: euclidean_pearson
value: 86.70473853624121
- type: euclidean_spearman
value: 86.9622224570873
- type: manhattan_pearson
value: 86.21089380980065
- type: manhattan_spearman
value: 86.75318154937008
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.65553235908142
- type: f1
value: 99.60681976339595
- type: precision
value: 99.58246346555325
- type: recall
value: 99.65553235908142
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26260180497468
- type: f1
value: 99.14520507740848
- type: precision
value: 99.08650671362535
- type: recall
value: 99.26260180497468
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.07412538967787
- type: f1
value: 97.86629719431936
- type: precision
value: 97.76238309664012
- type: recall
value: 98.07412538967787
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.42074776197998
- type: f1
value: 99.38564156573635
- type: precision
value: 99.36808846761454
- type: recall
value: 99.42074776197998
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.73376623376623
- type: f1
value: 85.68480707214599
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.935218072113855
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.276389017675264
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.764166666666668
- type: map_at_10
value: 37.298166666666674
- type: map_at_100
value: 38.530166666666666
- type: map_at_1000
value: 38.64416666666667
- type: map_at_3
value: 34.484833333333334
- type: map_at_5
value: 36.0385
- type: mrr_at_1
value: 32.93558333333333
- type: mrr_at_10
value: 41.589749999999995
- type: mrr_at_100
value: 42.425333333333334
- type: mrr_at_1000
value: 42.476333333333336
- type: mrr_at_3
value: 39.26825
- type: mrr_at_5
value: 40.567083333333336
- type: ndcg_at_1
value: 32.93558333333333
- type: ndcg_at_10
value: 42.706583333333334
- type: ndcg_at_100
value: 47.82483333333333
- type: ndcg_at_1000
value: 49.95733333333334
- type: ndcg_at_3
value: 38.064750000000004
- type: ndcg_at_5
value: 40.18158333333333
- type: precision_at_1
value: 32.93558333333333
- type: precision_at_10
value: 7.459833333333334
- type: precision_at_100
value: 1.1830833333333335
- type: precision_at_1000
value: 0.15608333333333332
- type: precision_at_3
value: 17.5235
- type: precision_at_5
value: 12.349833333333333
- type: recall_at_1
value: 27.764166666666668
- type: recall_at_10
value: 54.31775
- type: recall_at_100
value: 76.74350000000001
- type: recall_at_1000
value: 91.45208333333332
- type: recall_at_3
value: 41.23425
- type: recall_at_5
value: 46.73983333333334
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.969
- type: map_at_10
value: 21.584999999999997
- type: map_at_100
value: 23.3
- type: map_at_1000
value: 23.5
- type: map_at_3
value: 18.218999999999998
- type: map_at_5
value: 19.983
- type: mrr_at_1
value: 29.316
- type: mrr_at_10
value: 40.033
- type: mrr_at_100
value: 40.96
- type: mrr_at_1000
value: 41.001
- type: mrr_at_3
value: 37.123
- type: mrr_at_5
value: 38.757999999999996
- type: ndcg_at_1
value: 29.316
- type: ndcg_at_10
value: 29.858
- type: ndcg_at_100
value: 36.756
- type: ndcg_at_1000
value: 40.245999999999995
- type: ndcg_at_3
value: 24.822
- type: ndcg_at_5
value: 26.565
- type: precision_at_1
value: 29.316
- type: precision_at_10
value: 9.186
- type: precision_at_100
value: 1.6549999999999998
- type: precision_at_1000
value: 0.22999999999999998
- type: precision_at_3
value: 18.436
- type: precision_at_5
value: 13.876
- type: recall_at_1
value: 12.969
- type: recall_at_10
value: 35.142
- type: recall_at_100
value: 59.143
- type: recall_at_1000
value: 78.594
- type: recall_at_3
value: 22.604
- type: recall_at_5
value: 27.883000000000003
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.527999999999999
- type: map_at_10
value: 17.974999999999998
- type: map_at_100
value: 25.665
- type: map_at_1000
value: 27.406000000000002
- type: map_at_3
value: 13.017999999999999
- type: map_at_5
value: 15.137
- type: mrr_at_1
value: 62.5
- type: mrr_at_10
value: 71.891
- type: mrr_at_100
value: 72.294
- type: mrr_at_1000
value: 72.296
- type: mrr_at_3
value: 69.958
- type: mrr_at_5
value: 71.121
- type: ndcg_at_1
value: 50.875
- type: ndcg_at_10
value: 38.36
- type: ndcg_at_100
value: 44.235
- type: ndcg_at_1000
value: 52.154
- type: ndcg_at_3
value: 43.008
- type: ndcg_at_5
value: 40.083999999999996
- type: precision_at_1
value: 62.5
- type: precision_at_10
value: 30.0
- type: precision_at_100
value: 10.038
- type: precision_at_1000
value: 2.0869999999999997
- type: precision_at_3
value: 46.833000000000006
- type: precision_at_5
value: 38.800000000000004
- type: recall_at_1
value: 8.527999999999999
- type: recall_at_10
value: 23.828
- type: recall_at_100
value: 52.322
- type: recall_at_1000
value: 77.143
- type: recall_at_3
value: 14.136000000000001
- type: recall_at_5
value: 17.761
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.51
- type: f1
value: 47.632159862049896
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.734
- type: map_at_10
value: 72.442
- type: map_at_100
value: 72.735
- type: map_at_1000
value: 72.75
- type: map_at_3
value: 70.41199999999999
- type: map_at_5
value: 71.80499999999999
- type: mrr_at_1
value: 65.212
- type: mrr_at_10
value: 76.613
- type: mrr_at_100
value: 76.79899999999999
- type: mrr_at_1000
value: 76.801
- type: mrr_at_3
value: 74.8
- type: mrr_at_5
value: 76.12400000000001
- type: ndcg_at_1
value: 65.212
- type: ndcg_at_10
value: 77.988
- type: ndcg_at_100
value: 79.167
- type: ndcg_at_1000
value: 79.452
- type: ndcg_at_3
value: 74.362
- type: ndcg_at_5
value: 76.666
- type: precision_at_1
value: 65.212
- type: precision_at_10
value: 10.003
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 29.518
- type: precision_at_5
value: 19.016
- type: recall_at_1
value: 60.734
- type: recall_at_10
value: 90.824
- type: recall_at_100
value: 95.71600000000001
- type: recall_at_1000
value: 97.577
- type: recall_at_3
value: 81.243
- type: recall_at_5
value: 86.90299999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.845
- type: map_at_10
value: 39.281
- type: map_at_100
value: 41.422
- type: map_at_1000
value: 41.593
- type: map_at_3
value: 34.467
- type: map_at_5
value: 37.017
- type: mrr_at_1
value: 47.531
- type: mrr_at_10
value: 56.204
- type: mrr_at_100
value: 56.928999999999995
- type: mrr_at_1000
value: 56.962999999999994
- type: mrr_at_3
value: 54.115
- type: mrr_at_5
value: 55.373000000000005
- type: ndcg_at_1
value: 47.531
- type: ndcg_at_10
value: 47.711999999999996
- type: ndcg_at_100
value: 54.510999999999996
- type: ndcg_at_1000
value: 57.103
- type: ndcg_at_3
value: 44.145
- type: ndcg_at_5
value: 45.032
- type: precision_at_1
value: 47.531
- type: precision_at_10
value: 13.194
- type: precision_at_100
value: 2.045
- type: precision_at_1000
value: 0.249
- type: precision_at_3
value: 29.424
- type: precision_at_5
value: 21.451
- type: recall_at_1
value: 23.845
- type: recall_at_10
value: 54.967
- type: recall_at_100
value: 79.11399999999999
- type: recall_at_1000
value: 94.56700000000001
- type: recall_at_3
value: 40.256
- type: recall_at_5
value: 46.215
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.819
- type: map_at_10
value: 60.889
- type: map_at_100
value: 61.717999999999996
- type: map_at_1000
value: 61.778
- type: map_at_3
value: 57.254000000000005
- type: map_at_5
value: 59.541
- type: mrr_at_1
value: 75.638
- type: mrr_at_10
value: 82.173
- type: mrr_at_100
value: 82.362
- type: mrr_at_1000
value: 82.37
- type: mrr_at_3
value: 81.089
- type: mrr_at_5
value: 81.827
- type: ndcg_at_1
value: 75.638
- type: ndcg_at_10
value: 69.317
- type: ndcg_at_100
value: 72.221
- type: ndcg_at_1000
value: 73.382
- type: ndcg_at_3
value: 64.14
- type: ndcg_at_5
value: 67.07600000000001
- type: precision_at_1
value: 75.638
- type: precision_at_10
value: 14.704999999999998
- type: precision_at_100
value: 1.698
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 41.394999999999996
- type: precision_at_5
value: 27.162999999999997
- type: recall_at_1
value: 37.819
- type: recall_at_10
value: 73.52499999999999
- type: recall_at_100
value: 84.875
- type: recall_at_1000
value: 92.559
- type: recall_at_3
value: 62.092999999999996
- type: recall_at_5
value: 67.907
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.60079999999999
- type: ap
value: 92.67396345347356
- type: f1
value: 94.5988098167121
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.285
- type: map_at_10
value: 33.436
- type: map_at_100
value: 34.63
- type: map_at_1000
value: 34.681
- type: map_at_3
value: 29.412
- type: map_at_5
value: 31.715
- type: mrr_at_1
value: 21.848
- type: mrr_at_10
value: 33.979
- type: mrr_at_100
value: 35.118
- type: mrr_at_1000
value: 35.162
- type: mrr_at_3
value: 30.036
- type: mrr_at_5
value: 32.298
- type: ndcg_at_1
value: 21.862000000000002
- type: ndcg_at_10
value: 40.43
- type: ndcg_at_100
value: 46.17
- type: ndcg_at_1000
value: 47.412
- type: ndcg_at_3
value: 32.221
- type: ndcg_at_5
value: 36.332
- type: precision_at_1
value: 21.862000000000002
- type: precision_at_10
value: 6.491
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.744
- type: precision_at_5
value: 10.331999999999999
- type: recall_at_1
value: 21.285
- type: recall_at_10
value: 62.083
- type: recall_at_100
value: 88.576
- type: recall_at_1000
value: 98.006
- type: recall_at_3
value: 39.729
- type: recall_at_5
value: 49.608000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.92612859097127
- type: f1
value: 93.82370333372853
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.67681036911807
- type: f1
value: 92.14191382411472
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.26817878585723
- type: f1
value: 91.92824250337878
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.96554963983714
- type: f1
value: 90.02859329630792
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.02509860164935
- type: f1
value: 89.30665159182062
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.55515370705244
- type: f1
value: 87.94449232331907
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 82.4623803009576
- type: f1
value: 66.06738378772725
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.3716539870386
- type: f1
value: 60.37614033396853
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.34022681787857
- type: f1
value: 58.302008026952
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.72095208268087
- type: f1
value: 59.64524724009049
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.87020437432773
- type: f1
value: 57.80202694670567
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.73598553345387
- type: f1
value: 58.19628250675031
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.6630800268998
- type: f1
value: 65.00996668051691
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.7128446536651
- type: f1
value: 57.95860594874963
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.61129791526563
- type: f1
value: 59.75328290206483
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.00134498991257
- type: f1
value: 67.0230483991802
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.54604628946976
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.032952252858095
- type: f1
value: 58.715741857057104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.80901143241427
- type: f1
value: 68.33963989243877
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.47141896435777
- type: f1
value: 69.56765020308262
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.2373907195696
- type: f1
value: 69.04529836036467
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.05783456624076
- type: f1
value: 74.69430584708174
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.82111634162744
- type: f1
value: 70.77228952803762
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.25353059852051
- type: f1
value: 71.05310103416411
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.28648285137861
- type: f1
value: 69.08020473732226
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.31540013449899
- type: f1
value: 70.9426355465791
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.2151983860121
- type: f1
value: 67.52541755908858
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.58372562205784
- type: f1
value: 69.49769064229827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.9233355749832
- type: f1
value: 69.36311548259593
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.07330195023538
- type: f1
value: 64.99882022345572
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.62273032952253
- type: f1
value: 70.6394885471001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.77000672494957
- type: f1
value: 62.9368944815065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.453261600538
- type: f1
value: 70.85069934666681
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6906523201076
- type: f1
value: 72.03249740074217
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.03631472763953
- type: f1
value: 59.3165215571852
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.913920645595155
- type: f1
value: 57.367337711611285
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.42837928715535
- type: f1
value: 52.60527294970906
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33490248823135
- type: f1
value: 63.213340969404065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.58507061197041
- type: f1
value: 68.40256628040486
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.11230665770006
- type: f1
value: 66.44863577842305
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.70073974445192
- type: f1
value: 67.21291337273702
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.43913920645595
- type: f1
value: 64.09838087422806
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.80026899798251
- type: f1
value: 68.76986742962444
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.78816408876934
- type: f1
value: 62.18781873428972
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.6577000672495
- type: f1
value: 68.75171511133003
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.42501681237391
- type: f1
value: 71.18434963451544
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.64828513786146
- type: f1
value: 70.67741914007422
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.62811028917284
- type: f1
value: 71.36402039740959
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.88634835238736
- type: f1
value: 69.23701923480677
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.15938130464022
- type: f1
value: 71.87792218993388
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.96301277740416
- type: f1
value: 67.29584200202983
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.49562878278412
- type: f1
value: 66.91716685679431
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6805648957633
- type: f1
value: 72.02723592594374
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.00605245460659
- type: f1
value: 60.16716669482932
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.90988567585742
- type: f1
value: 63.99405488777784
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.62273032952253
- type: f1
value: 65.17213906909481
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.50907868190988
- type: f1
value: 69.15165697194853
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.30733019502352
- type: f1
value: 66.69024007380474
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.24277067921989
- type: f1
value: 68.80515408492947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.49831876260929
- type: f1
value: 64.83778567111116
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.28782784129119
- type: f1
value: 69.3294186700733
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.315400134499
- type: f1
value: 71.22674385243207
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.37794216543377
- type: f1
value: 68.96962492838232
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.33557498318764
- type: f1
value: 72.28949738478356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.84398117014123
- type: f1
value: 64.71026362091463
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.76462676529925
- type: f1
value: 69.8229667407667
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.02420981842636
- type: f1
value: 71.76576384895898
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.7572293207801
- type: f1
value: 72.76840765295256
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.02286482851379
- type: f1
value: 66.17237947327872
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.60928043039678
- type: f1
value: 77.27094731234773
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.68325487558843
- type: f1
value: 77.97530399082261
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.13315400134498
- type: f1
value: 75.97558584796424
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.47410894418292
- type: f1
value: 80.52244841473792
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.9670477471419
- type: f1
value: 77.37318805793146
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.09683927370544
- type: f1
value: 77.69773737430847
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.20847343644922
- type: f1
value: 75.17071738727348
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.07464694014796
- type: f1
value: 77.16136207698571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.53396099529255
- type: f1
value: 73.58296404484122
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.75319435104237
- type: f1
value: 75.24674707850833
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.0948217888366
- type: f1
value: 76.47559490205028
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.07599193006052
- type: f1
value: 70.76028043093511
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.10490921318089
- type: f1
value: 77.01215275283272
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.25756556825824
- type: f1
value: 70.20605314648762
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 77.3899269057439
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.35440484196369
- type: f1
value: 79.58964690002772
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.42299932750504
- type: f1
value: 68.07844356925413
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.15669132481507
- type: f1
value: 65.89383352608513
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.11432414256894
- type: f1
value: 57.69910594559806
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.24747814391392
- type: f1
value: 70.42455553830918
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46267652992603
- type: f1
value: 76.8854559308316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.24815063887021
- type: f1
value: 72.77805034658074
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11566913248151
- type: f1
value: 73.86147988001356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.0168123739072
- type: f1
value: 69.38515920054571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.41156691324814
- type: f1
value: 73.43474953408237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.39609952925353
- type: f1
value: 67.29731681109291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.20914593140552
- type: f1
value: 77.07066497935367
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.52387357094821
- type: f1
value: 78.5259569473291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.6913248150639
- type: f1
value: 76.91201656350455
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.1217215870881
- type: f1
value: 77.41179937912504
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.25891055817083
- type: f1
value: 75.8089244542887
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.70679219905851
- type: f1
value: 78.21459594517711
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.83523873570948
- type: f1
value: 74.86847028401978
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.71755211835911
- type: f1
value: 74.0214326485662
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.06523201075991
- type: f1
value: 79.10545620325138
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.91862811028918
- type: f1
value: 66.50386121217983
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.93140551445865
- type: f1
value: 70.755435928495
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.40753194351042
- type: f1
value: 71.61816115782923
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.1815736381977
- type: f1
value: 75.08016717887205
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.86482851378614
- type: f1
value: 72.39521180006291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46940147948891
- type: f1
value: 76.70044085362349
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195024
- type: f1
value: 71.5721825332298
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.7511768661735
- type: f1
value: 75.17918654541515
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69535978480162
- type: f1
value: 78.90019070153316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.45729657027572
- type: f1
value: 76.19578371794672
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.92715354123554
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 35.53536244162518
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.08507884504006
- type: mrr
value: 34.32436977159129
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.935
- type: map_at_10
value: 13.297
- type: map_at_100
value: 16.907
- type: map_at_1000
value: 18.391
- type: map_at_3
value: 9.626999999999999
- type: map_at_5
value: 11.190999999999999
- type: mrr_at_1
value: 46.129999999999995
- type: mrr_at_10
value: 54.346000000000004
- type: mrr_at_100
value: 55.067
- type: mrr_at_1000
value: 55.1
- type: mrr_at_3
value: 51.961
- type: mrr_at_5
value: 53.246
- type: ndcg_at_1
value: 44.118
- type: ndcg_at_10
value: 35.534
- type: ndcg_at_100
value: 32.946999999999996
- type: ndcg_at_1000
value: 41.599000000000004
- type: ndcg_at_3
value: 40.25
- type: ndcg_at_5
value: 37.978
- type: precision_at_1
value: 46.129999999999995
- type: precision_at_10
value: 26.842
- type: precision_at_100
value: 8.427
- type: precision_at_1000
value: 2.128
- type: precision_at_3
value: 37.977
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.935
- type: recall_at_10
value: 17.211000000000002
- type: recall_at_100
value: 34.33
- type: recall_at_1000
value: 65.551
- type: recall_at_3
value: 10.483
- type: recall_at_5
value: 13.078999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.231
- type: map_at_10
value: 50.202000000000005
- type: map_at_100
value: 51.154999999999994
- type: map_at_1000
value: 51.181
- type: map_at_3
value: 45.774
- type: map_at_5
value: 48.522
- type: mrr_at_1
value: 39.687
- type: mrr_at_10
value: 52.88
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.58500000000001
- type: mrr_at_3
value: 49.228
- type: mrr_at_5
value: 51.525
- type: ndcg_at_1
value: 39.687
- type: ndcg_at_10
value: 57.754000000000005
- type: ndcg_at_100
value: 61.597
- type: ndcg_at_1000
value: 62.18900000000001
- type: ndcg_at_3
value: 49.55
- type: ndcg_at_5
value: 54.11899999999999
- type: precision_at_1
value: 39.687
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 22.229
- type: precision_at_5
value: 15.939
- type: recall_at_1
value: 35.231
- type: recall_at_10
value: 78.083
- type: recall_at_100
value: 94.42099999999999
- type: recall_at_1000
value: 98.81
- type: recall_at_3
value: 57.047000000000004
- type: recall_at_5
value: 67.637
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.241
- type: map_at_10
value: 85.462
- type: map_at_100
value: 86.083
- type: map_at_1000
value: 86.09700000000001
- type: map_at_3
value: 82.49499999999999
- type: map_at_5
value: 84.392
- type: mrr_at_1
value: 82.09
- type: mrr_at_10
value: 88.301
- type: mrr_at_100
value: 88.383
- type: mrr_at_1000
value: 88.384
- type: mrr_at_3
value: 87.37
- type: mrr_at_5
value: 88.035
- type: ndcg_at_1
value: 82.12
- type: ndcg_at_10
value: 89.149
- type: ndcg_at_100
value: 90.235
- type: ndcg_at_1000
value: 90.307
- type: ndcg_at_3
value: 86.37599999999999
- type: ndcg_at_5
value: 87.964
- type: precision_at_1
value: 82.12
- type: precision_at_10
value: 13.56
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.88
- type: precision_at_5
value: 24.92
- type: recall_at_1
value: 71.241
- type: recall_at_10
value: 96.128
- type: recall_at_100
value: 99.696
- type: recall_at_1000
value: 99.994
- type: recall_at_3
value: 88.181
- type: recall_at_5
value: 92.694
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.59757799655151
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.27391998854624
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.243
- type: map_at_10
value: 10.965
- type: map_at_100
value: 12.934999999999999
- type: map_at_1000
value: 13.256
- type: map_at_3
value: 7.907
- type: map_at_5
value: 9.435
- type: mrr_at_1
value: 20.9
- type: mrr_at_10
value: 31.849
- type: mrr_at_100
value: 32.964
- type: mrr_at_1000
value: 33.024
- type: mrr_at_3
value: 28.517
- type: mrr_at_5
value: 30.381999999999998
- type: ndcg_at_1
value: 20.9
- type: ndcg_at_10
value: 18.723
- type: ndcg_at_100
value: 26.384999999999998
- type: ndcg_at_1000
value: 32.114
- type: ndcg_at_3
value: 17.753
- type: ndcg_at_5
value: 15.558
- type: precision_at_1
value: 20.9
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 2.078
- type: precision_at_1000
value: 0.345
- type: precision_at_3
value: 16.900000000000002
- type: precision_at_5
value: 13.88
- type: recall_at_1
value: 4.243
- type: recall_at_10
value: 19.885
- type: recall_at_100
value: 42.17
- type: recall_at_1000
value: 70.12
- type: recall_at_3
value: 10.288
- type: recall_at_5
value: 14.072000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.84209174935282
- type: cos_sim_spearman
value: 81.73248048438833
- type: euclidean_pearson
value: 83.02810070308149
- type: euclidean_spearman
value: 81.73248295679514
- type: manhattan_pearson
value: 82.95368060376002
- type: manhattan_spearman
value: 81.60277910998718
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 88.52628804556943
- type: cos_sim_spearman
value: 82.5713913555672
- type: euclidean_pearson
value: 85.8796774746988
- type: euclidean_spearman
value: 82.57137506803424
- type: manhattan_pearson
value: 85.79671002960058
- type: manhattan_spearman
value: 82.49445981618027
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 86.23682503505542
- type: cos_sim_spearman
value: 87.15008956711806
- type: euclidean_pearson
value: 86.79805401524959
- type: euclidean_spearman
value: 87.15008956711806
- type: manhattan_pearson
value: 86.65298502699244
- type: manhattan_spearman
value: 86.97677821948562
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.63370304677802
- type: cos_sim_spearman
value: 84.97105553540318
- type: euclidean_pearson
value: 85.28896108687721
- type: euclidean_spearman
value: 84.97105553540318
- type: manhattan_pearson
value: 85.09663190337331
- type: manhattan_spearman
value: 84.79126831644619
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 90.2614838800733
- type: cos_sim_spearman
value: 91.0509162991835
- type: euclidean_pearson
value: 90.33098317533373
- type: euclidean_spearman
value: 91.05091625871644
- type: manhattan_pearson
value: 90.26250435151107
- type: manhattan_spearman
value: 90.97999594417519
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.80480973335091
- type: cos_sim_spearman
value: 87.313695492969
- type: euclidean_pearson
value: 86.49267251576939
- type: euclidean_spearman
value: 87.313695492969
- type: manhattan_pearson
value: 86.44019901831935
- type: manhattan_spearman
value: 87.24205395460392
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.05662789380672
- type: cos_sim_spearman
value: 90.02759424426651
- type: euclidean_pearson
value: 90.4042483422981
- type: euclidean_spearman
value: 90.02759424426651
- type: manhattan_pearson
value: 90.51446975000226
- type: manhattan_spearman
value: 90.08832889933616
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.5975528273532
- type: cos_sim_spearman
value: 67.62969861411354
- type: euclidean_pearson
value: 69.224275734323
- type: euclidean_spearman
value: 67.62969861411354
- type: manhattan_pearson
value: 69.3761447059927
- type: manhattan_spearman
value: 67.90921005611467
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.11244327231684
- type: cos_sim_spearman
value: 88.37902438979035
- type: euclidean_pearson
value: 87.86054279847336
- type: euclidean_spearman
value: 88.37902438979035
- type: manhattan_pearson
value: 87.77257757320378
- type: manhattan_spearman
value: 88.25208966098123
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.87174608143563
- type: mrr
value: 96.12836872640794
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.258
- type: map_at_100
value: 67.757
- type: map_at_1000
value: 67.78800000000001
- type: map_at_3
value: 64.602
- type: map_at_5
value: 65.64
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.441
- type: mrr_at_100
value: 68.825
- type: mrr_at_1000
value: 68.853
- type: mrr_at_3
value: 66.444
- type: mrr_at_5
value: 67.26100000000001
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.852
- type: ndcg_at_100
value: 73.9
- type: ndcg_at_1000
value: 74.628
- type: ndcg_at_3
value: 67.093
- type: ndcg_at_5
value: 68.58
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.111
- type: precision_at_5
value: 16.733
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.967
- type: recall_at_100
value: 93.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.589
- type: recall_at_5
value: 75.483
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.66633663366336
- type: cos_sim_ap
value: 91.17685358899108
- type: cos_sim_f1
value: 82.16818642350559
- type: cos_sim_precision
value: 83.26488706365504
- type: cos_sim_recall
value: 81.10000000000001
- type: dot_accuracy
value: 99.66633663366336
- type: dot_ap
value: 91.17663411119032
- type: dot_f1
value: 82.16818642350559
- type: dot_precision
value: 83.26488706365504
- type: dot_recall
value: 81.10000000000001
- type: euclidean_accuracy
value: 99.66633663366336
- type: euclidean_ap
value: 91.17685189882275
- type: euclidean_f1
value: 82.16818642350559
- type: euclidean_precision
value: 83.26488706365504
- type: euclidean_recall
value: 81.10000000000001
- type: manhattan_accuracy
value: 99.66633663366336
- type: manhattan_ap
value: 91.2241619496737
- type: manhattan_f1
value: 82.20472440944883
- type: manhattan_precision
value: 86.51933701657458
- type: manhattan_recall
value: 78.3
- type: max_accuracy
value: 99.66633663366336
- type: max_ap
value: 91.2241619496737
- type: max_f1
value: 82.20472440944883
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.85101268897951
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 42.461184054706905
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.44542568873886
- type: mrr
value: 52.33656151854681
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.75982974997539
- type: cos_sim_spearman
value: 30.385405026539914
- type: dot_pearson
value: 30.75982433546523
- type: dot_spearman
value: 30.385405026539914
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22799999999999998
- type: map_at_10
value: 2.064
- type: map_at_100
value: 13.056000000000001
- type: map_at_1000
value: 31.747999999999998
- type: map_at_3
value: 0.67
- type: map_at_5
value: 1.097
- type: mrr_at_1
value: 90.0
- type: mrr_at_10
value: 94.667
- type: mrr_at_100
value: 94.667
- type: mrr_at_1000
value: 94.667
- type: mrr_at_3
value: 94.667
- type: mrr_at_5
value: 94.667
- type: ndcg_at_1
value: 86.0
- type: ndcg_at_10
value: 82.0
- type: ndcg_at_100
value: 64.307
- type: ndcg_at_1000
value: 57.023999999999994
- type: ndcg_at_3
value: 85.816
- type: ndcg_at_5
value: 84.904
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 85.8
- type: precision_at_100
value: 66.46
- type: precision_at_1000
value: 25.202
- type: precision_at_3
value: 90.0
- type: precision_at_5
value: 89.2
- type: recall_at_1
value: 0.22799999999999998
- type: recall_at_10
value: 2.235
- type: recall_at_100
value: 16.185
- type: recall_at_1000
value: 53.620999999999995
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.172
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.75
- type: precision
value: 96.45
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.54913294797689
- type: f1
value: 82.46628131021194
- type: precision
value: 81.1175337186898
- type: recall
value: 85.54913294797689
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.21951219512195
- type: f1
value: 77.33333333333334
- type: precision
value: 75.54878048780488
- type: recall
value: 81.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.26666666666665
- type: precision
value: 98.1
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.5
- type: f1
value: 99.33333333333333
- type: precision
value: 99.25
- type: recall
value: 99.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.2
- type: precision
value: 96.89999999999999
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.18333333333334
- type: precision
value: 96.88333333333333
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.61194029850746
- type: f1
value: 72.81094527363183
- type: precision
value: 70.83333333333333
- type: recall
value: 77.61194029850746
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.91666666666667
- type: precision
value: 91.08333333333334
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.29268292682927
- type: f1
value: 85.27642276422765
- type: precision
value: 84.01277584204414
- type: recall
value: 88.29268292682927
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.0
- type: precision
value: 94.46666666666668
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.681652490887
- type: f1
value: 91.90765492102065
- type: precision
value: 91.05913325232888
- type: recall
value: 93.681652490887
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.17391304347827
- type: f1
value: 89.97101449275361
- type: precision
value: 88.96811594202899
- type: recall
value: 92.17391304347827
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.43478260869566
- type: f1
value: 87.72173913043478
- type: precision
value: 86.42028985507245
- type: recall
value: 90.43478260869566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.03
- type: precision
value: 86.95
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.4
- type: f1
value: 91.45666666666666
- type: precision
value: 90.525
- type: recall
value: 93.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.9059107358263
- type: f1
value: 78.32557872364869
- type: precision
value: 76.78260286824823
- type: recall
value: 81.9059107358263
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.58333333333333
- type: precision
value: 91.73333333333332
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.10000000000001
- type: f1
value: 74.50500000000001
- type: precision
value: 72.58928571428571
- type: recall
value: 79.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.55
- type: precision
value: 95.05
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.0952380952381
- type: f1
value: 77.98458049886621
- type: precision
value: 76.1968253968254
- type: recall
value: 82.0952380952381
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.9
- type: f1
value: 84.99190476190476
- type: precision
value: 83.65
- type: recall
value: 87.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.56666666666666
- type: precision
value: 94.01666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.2
- type: precision
value: 98.0
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.38333333333334
- type: precision
value: 93.78333333333335
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.4
- type: f1
value: 84.10380952380952
- type: precision
value: 82.67
- type: recall
value: 87.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.33333333333334
- type: precision
value: 93.78333333333333
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.4
- type: f1
value: 86.82000000000001
- type: precision
value: 85.64500000000001
- type: recall
value: 89.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.1
- type: f1
value: 93.56666666666668
- type: precision
value: 92.81666666666666
- type: recall
value: 95.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.9
- type: f1
value: 98.6
- type: precision
value: 98.45
- type: recall
value: 98.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.01347708894879
- type: f1
value: 93.51752021563343
- type: precision
value: 92.82794249775381
- type: recall
value: 95.01347708894879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.00854700854701
- type: f1
value: 96.08262108262107
- type: precision
value: 95.65527065527067
- type: recall
value: 97.00854700854701
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5
- type: f1
value: 95.39999999999999
- type: precision
value: 94.88333333333333
- type: recall
value: 96.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5909090909091
- type: f1
value: 95.49242424242425
- type: precision
value: 94.9621212121212
- type: recall
value: 96.5909090909091
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.90566037735849
- type: f1
value: 81.85883997204752
- type: precision
value: 80.54507337526205
- type: recall
value: 84.90566037735849
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5
- type: f1
value: 96.75
- type: precision
value: 96.38333333333333
- type: recall
value: 97.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7704280155642
- type: f1
value: 82.99610894941635
- type: precision
value: 81.32295719844358
- type: recall
value: 86.7704280155642
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.52136752136752
- type: f1
value: 61.89662189662191
- type: precision
value: 59.68660968660969
- type: recall
value: 67.52136752136752
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 86.32
- type: precision
value: 85.015
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.0
- type: f1
value: 94.78333333333333
- type: precision
value: 94.18333333333334
- type: recall
value: 96.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.8785046728972
- type: f1
value: 80.54517133956385
- type: precision
value: 79.154984423676
- type: recall
value: 83.8785046728972
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.01333333333334
- type: precision
value: 91.28333333333333
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.1
- type: f1
value: 96.26666666666667
- type: precision
value: 95.85000000000001
- type: recall
value: 97.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.3
- type: f1
value: 80.67833333333333
- type: precision
value: 79.03928571428571
- type: recall
value: 84.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.3
- type: f1
value: 96.48333333333332
- type: precision
value: 96.08333333333331
- type: recall
value: 97.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.66666666666667
- type: precision
value: 94.16666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.36666666666667
- type: precision
value: 95.96666666666668
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.80666666666667
- type: precision
value: 92.12833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.22333333333334
- type: precision
value: 95.875
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.33333333333333
- type: f1
value: 70.78174603174602
- type: precision
value: 69.28333333333332
- type: recall
value: 74.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.6
- type: f1
value: 32.938348952090365
- type: precision
value: 31.2811038961039
- type: recall
value: 37.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.5
- type: f1
value: 89.13333333333333
- type: precision
value: 88.03333333333333
- type: recall
value: 91.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.14285714285714
- type: f1
value: 77.67857142857143
- type: precision
value: 75.59523809523809
- type: recall
value: 82.14285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.0450054884742
- type: f1
value: 63.070409283362075
- type: precision
value: 60.58992781824835
- type: recall
value: 69.0450054884742
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.1
- type: f1
value: 57.848333333333336
- type: precision
value: 55.69500000000001
- type: recall
value: 63.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.01666666666667
- type: precision
value: 94.5
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.90666666666667
- type: precision
value: 94.425
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.61333333333333
- type: precision
value: 83.27
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.4
- type: f1
value: 71.90746031746032
- type: precision
value: 70.07027777777778
- type: recall
value: 76.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.26666666666667
- type: precision
value: 96.95
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.8
- type: f1
value: 74.39555555555555
- type: precision
value: 72.59416666666667
- type: recall
value: 78.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.78999999999999
- type: precision
value: 93.125
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.75
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.25666666666666
- type: precision
value: 93.64166666666668
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.934306569343065
- type: f1
value: 51.461591936044485
- type: precision
value: 49.37434827945776
- type: recall
value: 56.934306569343065
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.200000000000003
- type: f1
value: 16.91799284049284
- type: precision
value: 15.791855158730158
- type: recall
value: 20.200000000000003
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.2
- type: f1
value: 95.3
- type: precision
value: 94.85
- type: recall
value: 96.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.3
- type: f1
value: 95.11666666666667
- type: precision
value: 94.53333333333333
- type: recall
value: 96.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.88095238095238
- type: f1
value: 87.14285714285714
- type: precision
value: 85.96230158730161
- type: recall
value: 89.88095238095238
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 24.099999999999998
- type: f1
value: 19.630969083349783
- type: precision
value: 18.275094905094907
- type: recall
value: 24.099999999999998
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.4368530020704
- type: f1
value: 79.45183870649709
- type: precision
value: 77.7432712215321
- type: recall
value: 83.4368530020704
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.53333333333333
- type: precision
value: 93.91666666666666
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.8
- type: f1
value: 98.48333333333332
- type: precision
value: 98.33333333333334
- type: recall
value: 98.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.5
- type: f1
value: 14.979285714285714
- type: precision
value: 14.23235060690943
- type: recall
value: 17.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.93939393939394
- type: f1
value: 91.991341991342
- type: precision
value: 91.05339105339105
- type: recall
value: 93.93939393939394
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.31297709923665
- type: f1
value: 86.76844783715012
- type: precision
value: 85.63613231552164
- type: recall
value: 89.31297709923665
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.12663755458514
- type: f1
value: 98.93255701115964
- type: precision
value: 98.83551673944687
- type: recall
value: 99.12663755458514
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.0
- type: f1
value: 89.77999999999999
- type: precision
value: 88.78333333333333
- type: recall
value: 92.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.89265536723164
- type: f1
value: 95.85687382297553
- type: precision
value: 95.33898305084746
- type: recall
value: 96.89265536723164
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.6
- type: f1
value: 11.820611790170615
- type: precision
value: 11.022616224355355
- type: recall
value: 14.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.93333333333334
- type: precision
value: 94.48666666666666
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.72333333333334
- type: precision
value: 83.44166666666666
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.47333333333333
- type: precision
value: 92.875
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.71666666666665
- type: precision
value: 95.28333333333335
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.8
- type: f1
value: 14.511074040901628
- type: precision
value: 13.503791000666002
- type: recall
value: 17.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.10187667560321
- type: f1
value: 92.46648793565683
- type: precision
value: 91.71134941912423
- type: recall
value: 94.10187667560321
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.11666666666666
- type: precision
value: 95.68333333333334
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.72727272727273
- type: f1
value: 66.58949745906267
- type: precision
value: 63.86693017127799
- type: recall
value: 72.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.14084507042254
- type: f1
value: 88.26291079812206
- type: precision
value: 87.32394366197182
- type: recall
value: 90.14084507042254
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.67065868263472
- type: f1
value: 58.2876627696987
- type: precision
value: 55.79255774165953
- type: recall
value: 64.67065868263472
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.41666666666667
- type: precision
value: 93.85
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.172413793103445
- type: f1
value: 49.63992493549144
- type: precision
value: 47.71405113769646
- type: recall
value: 55.172413793103445
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.4417616811983
- type: precision
value: 71.91607981220658
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.61538461538461
- type: f1
value: 80.91452991452994
- type: precision
value: 79.33760683760683
- type: recall
value: 84.61538461538461
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2
- type: f1
value: 97.6
- type: precision
value: 97.3
- type: recall
value: 98.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.5741127348643
- type: f1
value: 72.00417536534445
- type: precision
value: 70.53467872883321
- type: recall
value: 75.5741127348643
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 62.2
- type: f1
value: 55.577460317460314
- type: precision
value: 52.98583333333333
- type: recall
value: 62.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.18241042345277
- type: f1
value: 90.6468124709167
- type: precision
value: 89.95656894679696
- type: recall
value: 92.18241042345277
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.13333333333333
- type: precision
value: 94.66666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.85000000000001
- type: precision
value: 95.39999999999999
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.1259842519685
- type: f1
value: 89.76377952755905
- type: precision
value: 88.71391076115485
- type: recall
value: 92.1259842519685
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.49
- type: precision
value: 91.725
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.5623268698061
- type: f1
value: 73.27364463791058
- type: precision
value: 71.51947852086357
- type: recall
value: 77.5623268698061
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.56666666666666
- type: precision
value: 96.16666666666667
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34615384615384
- type: f1
value: 61.092032967032964
- type: precision
value: 59.27197802197802
- type: recall
value: 66.34615384615384
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.41190476190476
- type: precision
value: 92.7
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.10000000000001
- type: precision
value: 90.13333333333333
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.97333333333334
- type: precision
value: 91.14166666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.21698113207547
- type: f1
value: 90.3796046720575
- type: precision
value: 89.56367924528303
- type: recall
value: 92.21698113207547
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.6
- type: f1
value: 96.91666666666667
- type: precision
value: 96.6
- type: recall
value: 97.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.44525547445255
- type: f1
value: 96.71532846715328
- type: precision
value: 96.35036496350365
- type: recall
value: 97.44525547445255
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.34000000000002
- type: precision
value: 91.49166666666667
- type: recall
value: 94.1
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.2910000000000004
- type: map_at_10
value: 10.373000000000001
- type: map_at_100
value: 15.612
- type: map_at_1000
value: 17.06
- type: map_at_3
value: 6.119
- type: map_at_5
value: 7.917000000000001
- type: mrr_at_1
value: 44.897999999999996
- type: mrr_at_10
value: 56.054
- type: mrr_at_100
value: 56.82000000000001
- type: mrr_at_1000
value: 56.82000000000001
- type: mrr_at_3
value: 52.381
- type: mrr_at_5
value: 53.81
- type: ndcg_at_1
value: 42.857
- type: ndcg_at_10
value: 27.249000000000002
- type: ndcg_at_100
value: 36.529
- type: ndcg_at_1000
value: 48.136
- type: ndcg_at_3
value: 33.938
- type: ndcg_at_5
value: 29.951
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 22.653000000000002
- type: precision_at_100
value: 7.000000000000001
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 32.653
- type: precision_at_5
value: 27.755000000000003
- type: recall_at_1
value: 3.2910000000000004
- type: recall_at_10
value: 16.16
- type: recall_at_100
value: 43.908
- type: recall_at_1000
value: 79.823
- type: recall_at_3
value: 7.156
- type: recall_at_5
value: 10.204
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.05879999999999
- type: ap
value: 14.609748142799111
- type: f1
value: 54.878956295843096
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.61799660441426
- type: f1
value: 64.8698191961434
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.32860036611885
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.34714192048638
- type: cos_sim_ap
value: 80.26732975975634
- type: cos_sim_f1
value: 73.53415148134374
- type: cos_sim_precision
value: 69.34767360299276
- type: cos_sim_recall
value: 78.25857519788919
- type: dot_accuracy
value: 88.34714192048638
- type: dot_ap
value: 80.26733698491206
- type: dot_f1
value: 73.53415148134374
- type: dot_precision
value: 69.34767360299276
- type: dot_recall
value: 78.25857519788919
- type: euclidean_accuracy
value: 88.34714192048638
- type: euclidean_ap
value: 80.26734337771738
- type: euclidean_f1
value: 73.53415148134374
- type: euclidean_precision
value: 69.34767360299276
- type: euclidean_recall
value: 78.25857519788919
- type: manhattan_accuracy
value: 88.30541813196639
- type: manhattan_ap
value: 80.19415808104145
- type: manhattan_f1
value: 73.55143870713441
- type: manhattan_precision
value: 73.25307511122743
- type: manhattan_recall
value: 73.85224274406332
- type: max_accuracy
value: 88.34714192048638
- type: max_ap
value: 80.26734337771738
- type: max_f1
value: 73.55143870713441
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.81061047075717
- type: cos_sim_ap
value: 87.11747055081017
- type: cos_sim_f1
value: 80.04355498817256
- type: cos_sim_precision
value: 78.1165262000733
- type: cos_sim_recall
value: 82.06806282722513
- type: dot_accuracy
value: 89.81061047075717
- type: dot_ap
value: 87.11746902745236
- type: dot_f1
value: 80.04355498817256
- type: dot_precision
value: 78.1165262000733
- type: dot_recall
value: 82.06806282722513
- type: euclidean_accuracy
value: 89.81061047075717
- type: euclidean_ap
value: 87.11746919324248
- type: euclidean_f1
value: 80.04355498817256
- type: euclidean_precision
value: 78.1165262000733
- type: euclidean_recall
value: 82.06806282722513
- type: manhattan_accuracy
value: 89.79508673885202
- type: manhattan_ap
value: 87.11074390832218
- type: manhattan_f1
value: 80.13002540726349
- type: manhattan_precision
value: 77.83826945412311
- type: manhattan_recall
value: 82.56082537727133
- type: max_accuracy
value: 89.81061047075717
- type: max_ap
value: 87.11747055081017
- type: max_f1
value: 80.13002540726349
---
## Multilingual-E5-large-instruct
[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 24 layers and the embedding size is 1024.
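As a quick sanity check, the layer count and embedding size can be read directly from the model configuration (a minimal sketch; it assumes the `transformers` library and access to the Hugging Face Hub):
```python
from transformers import AutoConfig

# Load only the configuration; no model weights are downloaded.
config = AutoConfig.from_pretrained('intfloat/multilingual-e5-large-instruct')
print(config.num_hidden_layers)  # expected: 24
print(config.hidden_size)        # expected: 1024 (the embedding size)
```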
## Usage
Below are examples showing how to encode queries and passages from the MS-MARCO passage ranking dataset.
### Transformers
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large-instruct')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-large-instruct')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# => [[91.92852783203125, 67.580322265625], [70.3814468383789, 92.1330795288086]]
```
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents
model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')
embeddings = model.encode(input_texts, convert_to_tensor=True, normalize_embeddings=True)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[91.92853546142578, 67.5802993774414], [70.38143157958984, 92.13307189941406]]
```
## Supported Languages
This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
and continually trained on a mixture of multilingual datasets.
It supports the 100 languages covered by xlm-roberta,
but performance may degrade for low-resource languages.
## Training Details
**Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
**First stage**: contrastive pre-training with 1 billion weakly supervised text pairs.
**Second stage**: fine-tuning on datasets from the [E5-mistral](https://arxiv.org/abs/2401.00368) paper.
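For intuition, the snippet below is a minimal, hypothetical sketch of a temperature-scaled InfoNCE loss with in-batch negatives, the general form of contrastive objective used in the first stage; the actual training code, negative sampling, and batching in the E5 papers may differ.
```python
import torch
import torch.nn.functional as F
def info_nce_loss(query_emb: torch.Tensor,
                  doc_emb: torch.Tensor,
                  temperature: float = 0.01) -> torch.Tensor:
    # In-batch negatives: the positive document for query i sits at row i of doc_emb.
    query_emb = F.normalize(query_emb, p=2, dim=1)
    doc_emb = F.normalize(doc_emb, p=2, dim=1)
    logits = query_emb @ doc_emb.T / temperature  # (batch, batch) cosine similarities, sharpened
    labels = torch.arange(query_emb.size(0), device=query_emb.device)
    return F.cross_entropy(logits, labels)
```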
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## FAQ
**1. Do I need to add instructions to the query?**
Yes. This is how the model is trained; otherwise you will see a performance degradation.
The task definition should be a one-sentence instruction that describes the task.
This is a way to customize text embeddings for different scenarios through natural language instructions.
Please check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for instructions we used for evaluation.
On the other hand, there is no need to add instructions to the document side.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores fall mostly between 0.7 and 1.0?**
This is known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
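As a concrete illustration with hypothetical scores, the ranking recovered by `argsort` is the same whether the scores are compressed into a narrow band or spread out:
```python
import torch
# Hypothetical similarity scores in the typical 0.7-1.0 band
scores = torch.tensor([0.83, 0.91, 0.78, 0.88])
# For retrieval, only the relative order matters
ranking = torch.argsort(scores, descending=True)
print(ranking.tolist())  # [1, 3, 0, 2]
```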
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens.
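A minimal sketch, assuming the same tokenizer as in the usage examples above, of how to check whether an input will be affected; any tokens beyond 512 are silently dropped when `truncation=True`:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large-instruct')
text = "a very long document " * 400  # hypothetical long input
n_tokens = len(tokenizer(text)['input_ids'])
print(n_tokens)  # if this exceeds 512, the tail of the text is ignored at encoding time
# One simple workaround is to split long documents into <=512-token chunks
# and embed each chunk separately; the chunking strategy is up to the user.
```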
| [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
yixuan-chia/multilingual-e5-base-gguf | yixuan-chia | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2402.05672",
"arxiv:2108.08787",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | 2024-10-23T03:02:45 | 2024-10-23T03:03:57 | 62 | 0 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: multilingual-e5-base
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.97014925373135
- type: ap
value: 43.69351129103008
- type: f1
value: 73.38075030070492
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.7237687366167
- type: ap
value: 82.22089859962671
- type: f1
value: 69.95532758884401
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.65517241379312
- type: ap
value: 28.507918657094738
- type: f1
value: 66.84516013726119
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.32976445396146
- type: ap
value: 20.720481637566014
- type: f1
value: 59.78002763416003
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.63775
- type: ap
value: 87.22277903861716
- type: f1
value: 90.60378636386807
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.546
- type: f1
value: 44.05666638370923
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.828
- type: f1
value: 41.2710255644252
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.534
- type: f1
value: 39.820743174270326
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 39.684
- type: f1
value: 39.11052682815307
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.436
- type: f1
value: 37.07082931930871
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.226000000000006
- type: f1
value: 36.65372077739185
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.831000000000003
- type: map_at_10
value: 36.42
- type: map_at_100
value: 37.699
- type: map_at_1000
value: 37.724000000000004
- type: map_at_3
value: 32.207
- type: map_at_5
value: 34.312
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 36.574
- type: mrr_at_100
value: 37.854
- type: mrr_at_1000
value: 37.878
- type: mrr_at_3
value: 32.385000000000005
- type: mrr_at_5
value: 34.48
- type: ndcg_at_1
value: 22.831000000000003
- type: ndcg_at_10
value: 44.230000000000004
- type: ndcg_at_100
value: 49.974000000000004
- type: ndcg_at_1000
value: 50.522999999999996
- type: ndcg_at_3
value: 35.363
- type: ndcg_at_5
value: 39.164
- type: precision_at_1
value: 22.831000000000003
- type: precision_at_10
value: 6.935
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.841
- type: precision_at_5
value: 10.754
- type: recall_at_1
value: 22.831000000000003
- type: recall_at_10
value: 69.346
- type: recall_at_100
value: 95.235
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 44.523
- type: recall_at_5
value: 53.769999999999996
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 40.27789869854063
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.41979463347428
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.22752045109304
- type: mrr
value: 71.51112430198303
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.71147646622866
- type: cos_sim_spearman
value: 85.059167046486
- type: euclidean_pearson
value: 75.88421613600647
- type: euclidean_spearman
value: 75.12821787150585
- type: manhattan_pearson
value: 75.22005646957604
- type: manhattan_spearman
value: 74.42880434453272
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.23799582463465
- type: f1
value: 99.12665274878218
- type: precision
value: 99.07098121085595
- type: recall
value: 99.23799582463465
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.88685890380806
- type: f1
value: 97.59336708489249
- type: precision
value: 97.44662117543473
- type: recall
value: 97.88685890380806
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.47142362313821
- type: f1
value: 97.1989377670015
- type: precision
value: 97.06384944001847
- type: recall
value: 97.47142362313821
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.4728804634018
- type: f1
value: 98.2973494821836
- type: precision
value: 98.2095839915745
- type: recall
value: 98.4728804634018
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 82.74025974025975
- type: f1
value: 82.67420447730439
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.0380848063507
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 29.45956405670166
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.122
- type: map_at_10
value: 42.03
- type: map_at_100
value: 43.364000000000004
- type: map_at_1000
value: 43.474000000000004
- type: map_at_3
value: 38.804
- type: map_at_5
value: 40.585
- type: mrr_at_1
value: 39.914
- type: mrr_at_10
value: 48.227
- type: mrr_at_100
value: 49.018
- type: mrr_at_1000
value: 49.064
- type: mrr_at_3
value: 45.994
- type: mrr_at_5
value: 47.396
- type: ndcg_at_1
value: 39.914
- type: ndcg_at_10
value: 47.825
- type: ndcg_at_100
value: 52.852
- type: ndcg_at_1000
value: 54.891
- type: ndcg_at_3
value: 43.517
- type: ndcg_at_5
value: 45.493
- type: precision_at_1
value: 39.914
- type: precision_at_10
value: 8.956
- type: precision_at_100
value: 1.388
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 20.791999999999998
- type: precision_at_5
value: 14.821000000000002
- type: recall_at_1
value: 32.122
- type: recall_at_10
value: 58.294999999999995
- type: recall_at_100
value: 79.726
- type: recall_at_1000
value: 93.099
- type: recall_at_3
value: 45.017
- type: recall_at_5
value: 51.002
- type: map_at_1
value: 29.677999999999997
- type: map_at_10
value: 38.684000000000005
- type: map_at_100
value: 39.812999999999995
- type: map_at_1000
value: 39.945
- type: map_at_3
value: 35.831
- type: map_at_5
value: 37.446
- type: mrr_at_1
value: 37.771
- type: mrr_at_10
value: 44.936
- type: mrr_at_100
value: 45.583
- type: mrr_at_1000
value: 45.634
- type: mrr_at_3
value: 42.771
- type: mrr_at_5
value: 43.994
- type: ndcg_at_1
value: 37.771
- type: ndcg_at_10
value: 44.059
- type: ndcg_at_100
value: 48.192
- type: ndcg_at_1000
value: 50.375
- type: ndcg_at_3
value: 40.172000000000004
- type: ndcg_at_5
value: 41.899
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 8.286999999999999
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.406000000000002
- type: precision_at_5
value: 13.745
- type: recall_at_1
value: 29.677999999999997
- type: recall_at_10
value: 53.071
- type: recall_at_100
value: 70.812
- type: recall_at_1000
value: 84.841
- type: recall_at_3
value: 41.016000000000005
- type: recall_at_5
value: 46.22
- type: map_at_1
value: 42.675000000000004
- type: map_at_10
value: 53.93599999999999
- type: map_at_100
value: 54.806999999999995
- type: map_at_1000
value: 54.867
- type: map_at_3
value: 50.934000000000005
- type: map_at_5
value: 52.583
- type: mrr_at_1
value: 48.339
- type: mrr_at_10
value: 57.265
- type: mrr_at_100
value: 57.873
- type: mrr_at_1000
value: 57.906
- type: mrr_at_3
value: 55.193000000000005
- type: mrr_at_5
value: 56.303000000000004
- type: ndcg_at_1
value: 48.339
- type: ndcg_at_10
value: 59.19799999999999
- type: ndcg_at_100
value: 62.743
- type: ndcg_at_1000
value: 63.99399999999999
- type: ndcg_at_3
value: 54.367
- type: ndcg_at_5
value: 56.548
- type: precision_at_1
value: 48.339
- type: precision_at_10
value: 9.216000000000001
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.72
- type: precision_at_5
value: 16.025
- type: recall_at_1
value: 42.675000000000004
- type: recall_at_10
value: 71.437
- type: recall_at_100
value: 86.803
- type: recall_at_1000
value: 95.581
- type: recall_at_3
value: 58.434
- type: recall_at_5
value: 63.754
- type: map_at_1
value: 23.518
- type: map_at_10
value: 30.648999999999997
- type: map_at_100
value: 31.508999999999997
- type: map_at_1000
value: 31.604
- type: map_at_3
value: 28.247
- type: map_at_5
value: 29.65
- type: mrr_at_1
value: 25.650000000000002
- type: mrr_at_10
value: 32.771
- type: mrr_at_100
value: 33.554
- type: mrr_at_1000
value: 33.629999999999995
- type: mrr_at_3
value: 30.433
- type: mrr_at_5
value: 31.812
- type: ndcg_at_1
value: 25.650000000000002
- type: ndcg_at_10
value: 34.929
- type: ndcg_at_100
value: 39.382
- type: ndcg_at_1000
value: 41.913
- type: ndcg_at_3
value: 30.292
- type: ndcg_at_5
value: 32.629999999999995
- type: precision_at_1
value: 25.650000000000002
- type: precision_at_10
value: 5.311
- type: precision_at_100
value: 0.792
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 12.58
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 23.518
- type: recall_at_10
value: 46.19
- type: recall_at_100
value: 67.123
- type: recall_at_1000
value: 86.442
- type: recall_at_3
value: 33.678000000000004
- type: recall_at_5
value: 39.244
- type: map_at_1
value: 15.891
- type: map_at_10
value: 22.464000000000002
- type: map_at_100
value: 23.483
- type: map_at_1000
value: 23.613
- type: map_at_3
value: 20.080000000000002
- type: map_at_5
value: 21.526
- type: mrr_at_1
value: 20.025000000000002
- type: mrr_at_10
value: 26.712999999999997
- type: mrr_at_100
value: 27.650000000000002
- type: mrr_at_1000
value: 27.737000000000002
- type: mrr_at_3
value: 24.274
- type: mrr_at_5
value: 25.711000000000002
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 27.028999999999996
- type: ndcg_at_100
value: 32.064
- type: ndcg_at_1000
value: 35.188
- type: ndcg_at_3
value: 22.512999999999998
- type: ndcg_at_5
value: 24.89
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 4.776
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 10.531
- type: precision_at_5
value: 7.811
- type: recall_at_1
value: 15.891
- type: recall_at_10
value: 37.261
- type: recall_at_100
value: 59.12
- type: recall_at_1000
value: 81.356
- type: recall_at_3
value: 24.741
- type: recall_at_5
value: 30.753999999999998
- type: map_at_1
value: 27.544
- type: map_at_10
value: 36.283
- type: map_at_100
value: 37.467
- type: map_at_1000
value: 37.574000000000005
- type: map_at_3
value: 33.528999999999996
- type: map_at_5
value: 35.028999999999996
- type: mrr_at_1
value: 34.166999999999994
- type: mrr_at_10
value: 41.866
- type: mrr_at_100
value: 42.666
- type: mrr_at_1000
value: 42.716
- type: mrr_at_3
value: 39.541
- type: mrr_at_5
value: 40.768
- type: ndcg_at_1
value: 34.166999999999994
- type: ndcg_at_10
value: 41.577
- type: ndcg_at_100
value: 46.687
- type: ndcg_at_1000
value: 48.967
- type: ndcg_at_3
value: 37.177
- type: ndcg_at_5
value: 39.097
- type: precision_at_1
value: 34.166999999999994
- type: precision_at_10
value: 7.420999999999999
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 17.291999999999998
- type: precision_at_5
value: 12.166
- type: recall_at_1
value: 27.544
- type: recall_at_10
value: 51.99399999999999
- type: recall_at_100
value: 73.738
- type: recall_at_1000
value: 89.33
- type: recall_at_3
value: 39.179
- type: recall_at_5
value: 44.385999999999996
- type: map_at_1
value: 26.661
- type: map_at_10
value: 35.475
- type: map_at_100
value: 36.626999999999995
- type: map_at_1000
value: 36.741
- type: map_at_3
value: 32.818000000000005
- type: map_at_5
value: 34.397
- type: mrr_at_1
value: 32.647999999999996
- type: mrr_at_10
value: 40.784
- type: mrr_at_100
value: 41.602
- type: mrr_at_1000
value: 41.661
- type: mrr_at_3
value: 38.68
- type: mrr_at_5
value: 39.838
- type: ndcg_at_1
value: 32.647999999999996
- type: ndcg_at_10
value: 40.697
- type: ndcg_at_100
value: 45.799
- type: ndcg_at_1000
value: 48.235
- type: ndcg_at_3
value: 36.516
- type: ndcg_at_5
value: 38.515
- type: precision_at_1
value: 32.647999999999996
- type: precision_at_10
value: 7.202999999999999
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 17.314
- type: precision_at_5
value: 12.145999999999999
- type: recall_at_1
value: 26.661
- type: recall_at_10
value: 50.995000000000005
- type: recall_at_100
value: 73.065
- type: recall_at_1000
value: 89.781
- type: recall_at_3
value: 39.073
- type: recall_at_5
value: 44.395
- type: map_at_1
value: 25.946583333333333
- type: map_at_10
value: 33.79725
- type: map_at_100
value: 34.86408333333333
- type: map_at_1000
value: 34.9795
- type: map_at_3
value: 31.259999999999998
- type: map_at_5
value: 32.71541666666666
- type: mrr_at_1
value: 30.863749999999996
- type: mrr_at_10
value: 37.99183333333333
- type: mrr_at_100
value: 38.790499999999994
- type: mrr_at_1000
value: 38.85575000000001
- type: mrr_at_3
value: 35.82083333333333
- type: mrr_at_5
value: 37.07533333333333
- type: ndcg_at_1
value: 30.863749999999996
- type: ndcg_at_10
value: 38.52141666666667
- type: ndcg_at_100
value: 43.17966666666667
- type: ndcg_at_1000
value: 45.64608333333333
- type: ndcg_at_3
value: 34.333000000000006
- type: ndcg_at_5
value: 36.34975
- type: precision_at_1
value: 30.863749999999996
- type: precision_at_10
value: 6.598999999999999
- type: precision_at_100
value: 1.0502500000000001
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 15.557583333333334
- type: precision_at_5
value: 11.020000000000001
- type: recall_at_1
value: 25.946583333333333
- type: recall_at_10
value: 48.36991666666666
- type: recall_at_100
value: 69.02408333333334
- type: recall_at_1000
value: 86.43858333333331
- type: recall_at_3
value: 36.4965
- type: recall_at_5
value: 41.76258333333334
- type: map_at_1
value: 22.431
- type: map_at_10
value: 28.889
- type: map_at_100
value: 29.642000000000003
- type: map_at_1000
value: 29.742
- type: map_at_3
value: 26.998
- type: map_at_5
value: 28.172000000000004
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 31.763
- type: mrr_at_100
value: 32.443
- type: mrr_at_1000
value: 32.531
- type: mrr_at_3
value: 29.959000000000003
- type: mrr_at_5
value: 31.063000000000002
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 32.586999999999996
- type: ndcg_at_100
value: 36.5
- type: ndcg_at_1000
value: 39.133
- type: ndcg_at_3
value: 29.25
- type: ndcg_at_5
value: 31.023
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 4.954
- type: precision_at_100
value: 0.747
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.741999999999999
- type: recall_at_1
value: 22.431
- type: recall_at_10
value: 41.134
- type: recall_at_100
value: 59.28600000000001
- type: recall_at_1000
value: 78.857
- type: recall_at_3
value: 31.926
- type: recall_at_5
value: 36.335
- type: map_at_1
value: 17.586
- type: map_at_10
value: 23.304
- type: map_at_100
value: 24.159
- type: map_at_1000
value: 24.281
- type: map_at_3
value: 21.316
- type: map_at_5
value: 22.383
- type: mrr_at_1
value: 21.645
- type: mrr_at_10
value: 27.365000000000002
- type: mrr_at_100
value: 28.108
- type: mrr_at_1000
value: 28.192
- type: mrr_at_3
value: 25.482
- type: mrr_at_5
value: 26.479999999999997
- type: ndcg_at_1
value: 21.645
- type: ndcg_at_10
value: 27.306
- type: ndcg_at_100
value: 31.496000000000002
- type: ndcg_at_1000
value: 34.53
- type: ndcg_at_3
value: 23.73
- type: ndcg_at_5
value: 25.294
- type: precision_at_1
value: 21.645
- type: precision_at_10
value: 4.797
- type: precision_at_100
value: 0.8059999999999999
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 10.850999999999999
- type: precision_at_5
value: 7.736
- type: recall_at_1
value: 17.586
- type: recall_at_10
value: 35.481
- type: recall_at_100
value: 54.534000000000006
- type: recall_at_1000
value: 76.456
- type: recall_at_3
value: 25.335
- type: recall_at_5
value: 29.473
- type: map_at_1
value: 25.095
- type: map_at_10
value: 32.374
- type: map_at_100
value: 33.537
- type: map_at_1000
value: 33.634
- type: map_at_3
value: 30.089
- type: map_at_5
value: 31.433
- type: mrr_at_1
value: 29.198
- type: mrr_at_10
value: 36.01
- type: mrr_at_100
value: 37.022
- type: mrr_at_1000
value: 37.083
- type: mrr_at_3
value: 33.94
- type: mrr_at_5
value: 35.148
- type: ndcg_at_1
value: 29.198
- type: ndcg_at_10
value: 36.729
- type: ndcg_at_100
value: 42.114000000000004
- type: ndcg_at_1000
value: 44.592
- type: ndcg_at_3
value: 32.644
- type: ndcg_at_5
value: 34.652
- type: precision_at_1
value: 29.198
- type: precision_at_10
value: 5.970000000000001
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 14.396999999999998
- type: precision_at_5
value: 10.093
- type: recall_at_1
value: 25.095
- type: recall_at_10
value: 46.392
- type: recall_at_100
value: 69.706
- type: recall_at_1000
value: 87.738
- type: recall_at_3
value: 35.303000000000004
- type: recall_at_5
value: 40.441
- type: map_at_1
value: 26.857999999999997
- type: map_at_10
value: 34.066
- type: map_at_100
value: 35.671
- type: map_at_1000
value: 35.881
- type: map_at_3
value: 31.304
- type: map_at_5
value: 32.885
- type: mrr_at_1
value: 32.411
- type: mrr_at_10
value: 38.987
- type: mrr_at_100
value: 39.894
- type: mrr_at_1000
value: 39.959
- type: mrr_at_3
value: 36.626999999999995
- type: mrr_at_5
value: 38.011
- type: ndcg_at_1
value: 32.411
- type: ndcg_at_10
value: 39.208
- type: ndcg_at_100
value: 44.626
- type: ndcg_at_1000
value: 47.43
- type: ndcg_at_3
value: 35.091
- type: ndcg_at_5
value: 37.119
- type: precision_at_1
value: 32.411
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 16.14
- type: precision_at_5
value: 11.976
- type: recall_at_1
value: 26.857999999999997
- type: recall_at_10
value: 47.407
- type: recall_at_100
value: 72.236
- type: recall_at_1000
value: 90.77
- type: recall_at_3
value: 35.125
- type: recall_at_5
value: 40.522999999999996
- type: map_at_1
value: 21.3
- type: map_at_10
value: 27.412999999999997
- type: map_at_100
value: 28.29
- type: map_at_1000
value: 28.398
- type: map_at_3
value: 25.169999999999998
- type: map_at_5
value: 26.496
- type: mrr_at_1
value: 23.29
- type: mrr_at_10
value: 29.215000000000003
- type: mrr_at_100
value: 30.073
- type: mrr_at_1000
value: 30.156
- type: mrr_at_3
value: 26.956000000000003
- type: mrr_at_5
value: 28.38
- type: ndcg_at_1
value: 23.29
- type: ndcg_at_10
value: 31.113000000000003
- type: ndcg_at_100
value: 35.701
- type: ndcg_at_1000
value: 38.505
- type: ndcg_at_3
value: 26.727
- type: ndcg_at_5
value: 29.037000000000003
- type: precision_at_1
value: 23.29
- type: precision_at_10
value: 4.787
- type: precision_at_100
value: 0.763
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 11.091
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 21.3
- type: recall_at_10
value: 40.782000000000004
- type: recall_at_100
value: 62.13999999999999
- type: recall_at_1000
value: 83.012
- type: recall_at_3
value: 29.131
- type: recall_at_5
value: 34.624
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.631
- type: map_at_10
value: 16.634999999999998
- type: map_at_100
value: 18.23
- type: map_at_1000
value: 18.419
- type: map_at_3
value: 13.66
- type: map_at_5
value: 15.173
- type: mrr_at_1
value: 21.368000000000002
- type: mrr_at_10
value: 31.56
- type: mrr_at_100
value: 32.58
- type: mrr_at_1000
value: 32.633
- type: mrr_at_3
value: 28.241
- type: mrr_at_5
value: 30.225
- type: ndcg_at_1
value: 21.368000000000002
- type: ndcg_at_10
value: 23.855999999999998
- type: ndcg_at_100
value: 30.686999999999998
- type: ndcg_at_1000
value: 34.327000000000005
- type: ndcg_at_3
value: 18.781
- type: ndcg_at_5
value: 20.73
- type: precision_at_1
value: 21.368000000000002
- type: precision_at_10
value: 7.564
- type: precision_at_100
value: 1.496
- type: precision_at_1000
value: 0.217
- type: precision_at_3
value: 13.876
- type: precision_at_5
value: 11.062
- type: recall_at_1
value: 9.631
- type: recall_at_10
value: 29.517
- type: recall_at_100
value: 53.452
- type: recall_at_1000
value: 74.115
- type: recall_at_3
value: 17.605999999999998
- type: recall_at_5
value: 22.505
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.885
- type: map_at_10
value: 18.798000000000002
- type: map_at_100
value: 26.316
- type: map_at_1000
value: 27.869
- type: map_at_3
value: 13.719000000000001
- type: map_at_5
value: 15.716
- type: mrr_at_1
value: 66
- type: mrr_at_10
value: 74.263
- type: mrr_at_100
value: 74.519
- type: mrr_at_1000
value: 74.531
- type: mrr_at_3
value: 72.458
- type: mrr_at_5
value: 73.321
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.355999999999995
- type: ndcg_at_100
value: 44.366
- type: ndcg_at_1000
value: 51.771
- type: ndcg_at_3
value: 45.195
- type: ndcg_at_5
value: 42.187000000000005
- type: precision_at_1
value: 66
- type: precision_at_10
value: 31.75
- type: precision_at_100
value: 10.11
- type: precision_at_1000
value: 1.9800000000000002
- type: precision_at_3
value: 48.167
- type: precision_at_5
value: 40.050000000000004
- type: recall_at_1
value: 8.885
- type: recall_at_10
value: 24.471999999999998
- type: recall_at_100
value: 49.669000000000004
- type: recall_at_1000
value: 73.383
- type: recall_at_3
value: 14.872
- type: recall_at_5
value: 18.262999999999998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.18
- type: f1
value: 40.26878691789978
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 62.751999999999995
- type: map_at_10
value: 74.131
- type: map_at_100
value: 74.407
- type: map_at_1000
value: 74.423
- type: map_at_3
value: 72.329
- type: map_at_5
value: 73.555
- type: mrr_at_1
value: 67.282
- type: mrr_at_10
value: 78.292
- type: mrr_at_100
value: 78.455
- type: mrr_at_1000
value: 78.458
- type: mrr_at_3
value: 76.755
- type: mrr_at_5
value: 77.839
- type: ndcg_at_1
value: 67.282
- type: ndcg_at_10
value: 79.443
- type: ndcg_at_100
value: 80.529
- type: ndcg_at_1000
value: 80.812
- type: ndcg_at_3
value: 76.281
- type: ndcg_at_5
value: 78.235
- type: precision_at_1
value: 67.282
- type: precision_at_10
value: 10.078
- type: precision_at_100
value: 1.082
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 30.178
- type: precision_at_5
value: 19.232
- type: recall_at_1
value: 62.751999999999995
- type: recall_at_10
value: 91.521
- type: recall_at_100
value: 95.997
- type: recall_at_1000
value: 97.775
- type: recall_at_3
value: 83.131
- type: recall_at_5
value: 87.93299999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.861
- type: map_at_10
value: 30.252000000000002
- type: map_at_100
value: 32.082
- type: map_at_1000
value: 32.261
- type: map_at_3
value: 25.909
- type: map_at_5
value: 28.296
- type: mrr_at_1
value: 37.346000000000004
- type: mrr_at_10
value: 45.802
- type: mrr_at_100
value: 46.611999999999995
- type: mrr_at_1000
value: 46.659
- type: mrr_at_3
value: 43.056
- type: mrr_at_5
value: 44.637
- type: ndcg_at_1
value: 37.346000000000004
- type: ndcg_at_10
value: 38.169
- type: ndcg_at_100
value: 44.864
- type: ndcg_at_1000
value: 47.974
- type: ndcg_at_3
value: 33.619
- type: ndcg_at_5
value: 35.317
- type: precision_at_1
value: 37.346000000000004
- type: precision_at_10
value: 10.693999999999999
- type: precision_at_100
value: 1.775
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 22.325
- type: precision_at_5
value: 16.852
- type: recall_at_1
value: 18.861
- type: recall_at_10
value: 45.672000000000004
- type: recall_at_100
value: 70.60499999999999
- type: recall_at_1000
value: 89.216
- type: recall_at_3
value: 30.361
- type: recall_at_5
value: 36.998999999999995
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.852999999999994
- type: map_at_10
value: 59.961
- type: map_at_100
value: 60.78
- type: map_at_1000
value: 60.843
- type: map_at_3
value: 56.39999999999999
- type: map_at_5
value: 58.646
- type: mrr_at_1
value: 75.70599999999999
- type: mrr_at_10
value: 82.321
- type: mrr_at_100
value: 82.516
- type: mrr_at_1000
value: 82.525
- type: mrr_at_3
value: 81.317
- type: mrr_at_5
value: 81.922
- type: ndcg_at_1
value: 75.70599999999999
- type: ndcg_at_10
value: 68.557
- type: ndcg_at_100
value: 71.485
- type: ndcg_at_1000
value: 72.71600000000001
- type: ndcg_at_3
value: 63.524
- type: ndcg_at_5
value: 66.338
- type: precision_at_1
value: 75.70599999999999
- type: precision_at_10
value: 14.463000000000001
- type: precision_at_100
value: 1.677
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 40.806
- type: precision_at_5
value: 26.709
- type: recall_at_1
value: 37.852999999999994
- type: recall_at_10
value: 72.316
- type: recall_at_100
value: 83.842
- type: recall_at_1000
value: 91.999
- type: recall_at_3
value: 61.209
- type: recall_at_5
value: 66.77199999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.46039999999999
- type: ap
value: 79.9812521351881
- type: f1
value: 85.31722909702084
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.704
- type: map_at_10
value: 35.329
- type: map_at_100
value: 36.494
- type: map_at_1000
value: 36.541000000000004
- type: map_at_3
value: 31.476
- type: map_at_5
value: 33.731
- type: mrr_at_1
value: 23.294999999999998
- type: mrr_at_10
value: 35.859
- type: mrr_at_100
value: 36.968
- type: mrr_at_1000
value: 37.008
- type: mrr_at_3
value: 32.085
- type: mrr_at_5
value: 34.299
- type: ndcg_at_1
value: 23.324
- type: ndcg_at_10
value: 42.274
- type: ndcg_at_100
value: 47.839999999999996
- type: ndcg_at_1000
value: 48.971
- type: ndcg_at_3
value: 34.454
- type: ndcg_at_5
value: 38.464
- type: precision_at_1
value: 23.324
- type: precision_at_10
value: 6.648
- type: precision_at_100
value: 0.9440000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.674999999999999
- type: precision_at_5
value: 10.850999999999999
- type: recall_at_1
value: 22.704
- type: recall_at_10
value: 63.660000000000004
- type: recall_at_100
value: 89.29899999999999
- type: recall_at_1000
value: 97.88900000000001
- type: recall_at_3
value: 42.441
- type: recall_at_5
value: 52.04
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.1326949384405
- type: f1
value: 92.89743579612082
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.62524654832347
- type: f1
value: 88.65106082263151
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.59039359573046
- type: f1
value: 90.31532892105662
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.21046038208581
- type: f1
value: 86.41459529813113
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.3180351380423
- type: f1
value: 86.71383078226444
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.24231464737792
- type: f1
value: 86.31845567592403
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.27131782945736
- type: f1
value: 57.52079940417103
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.2341504649197
- type: f1
value: 51.349951558039244
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.27418278852569
- type: f1
value: 50.1714985749095
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.68243031631694
- type: f1
value: 50.1066160836192
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.2362854069559
- type: f1
value: 48.821279948766424
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.71428571428571
- type: f1
value: 53.94611389496195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.97646267652992
- type: f1
value: 57.26797883561521
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.65501008742435
- type: f1
value: 50.416258382177034
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.45796906523201
- type: f1
value: 53.306690547422185
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.59246805648957
- type: f1
value: 59.818381969051494
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.126429051782104
- type: f1
value: 58.25993593933026
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.057162071284466
- type: f1
value: 46.96095728790911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.64425016812375
- type: f1
value: 62.858291698755764
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.08944182918628
- type: f1
value: 62.44639030604241
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.68056489576328
- type: f1
value: 61.775326758789504
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.11163416274377
- type: f1
value: 69.70789096927015
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.40282447881641
- type: f1
value: 66.38492065671895
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.24613315400134
- type: f1
value: 64.3348019501336
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.78345662407531
- type: f1
value: 62.21279452354622
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.9455279085407
- type: f1
value: 65.48193124964094
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.05110961667788
- type: f1
value: 58.097856564684534
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.95292535305985
- type: f1
value: 62.09182174767901
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.97310020174848
- type: f1
value: 61.14252567730396
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.08069939475453
- type: f1
value: 57.044041742492034
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.63752521856085
- type: f1
value: 63.889340907205316
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.385339609952936
- type: f1
value: 53.449033750088304
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.93073301950234
- type: f1
value: 65.9884357824104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.94418291862812
- type: f1
value: 66.48740222583132
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.26025554808339
- type: f1
value: 50.19562815100793
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.98789509078682
- type: f1
value: 46.65788438676836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.68728984532616
- type: f1
value: 41.642419349541996
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.19300605245461
- type: f1
value: 55.8626492442437
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33826496301278
- type: f1
value: 63.89499791648792
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.33960995292536
- type: f1
value: 57.15242464180892
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.09347679892402
- type: f1
value: 59.64733214063841
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.75924680564896
- type: f1
value: 55.96585692366827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.48486886348352
- type: f1
value: 59.45143559032946
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.56422326832549
- type: f1
value: 54.96368702901926
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.18022864828512
- type: f1
value: 63.05369805040634
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.30329522528581
- type: f1
value: 64.06084612020727
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.36919973100201
- type: f1
value: 65.12154124788887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.98117014122394
- type: f1
value: 66.41847559806962
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.53799596503026
- type: f1
value: 62.17067330740817
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.01815736381977
- type: f1
value: 66.24988369607843
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.34700739744452
- type: f1
value: 59.957933424941636
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.23402824478815
- type: f1
value: 57.98836976018471
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.43849680666855
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.998655010087425
- type: f1
value: 52.83737515406804
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.71217215870882
- type: f1
value: 55.051794977833026
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.724277067921996
- type: f1
value: 56.33485571838306
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.59515803631473
- type: f1
value: 64.96772366193588
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.860793544048406
- type: f1
value: 58.148845819115394
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.40753194351043
- type: f1
value: 63.18903778054698
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.52320107599194
- type: f1
value: 58.356144563398516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.17014122394083
- type: f1
value: 63.919964062638925
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.15601882985878
- type: f1
value: 67.01451905761371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.65030262273034
- type: f1
value: 64.14420425129063
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.08742434431743
- type: f1
value: 63.044060042311756
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.52387357094821
- type: f1
value: 56.82398588814534
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.239408204438476
- type: f1
value: 61.92570286170469
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.74915938130463
- type: f1
value: 62.130740689396276
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.00336247478144
- type: f1
value: 63.71080635228055
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.837928715534645
- type: f1
value: 50.390741680320836
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.42098184263618
- type: f1
value: 71.41355113538995
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.95359784801613
- type: f1
value: 71.42699340156742
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.18157363819772
- type: f1
value: 69.74836113037671
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 76.78000685068261
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.5030262273033
- type: f1
value: 71.71620130425673
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.24546065904505
- type: f1
value: 69.07638311730359
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.12911903160726
- type: f1
value: 68.32651736539815
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195025
- type: f1
value: 71.33986549860187
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.44451916610626
- type: f1
value: 66.90192664503866
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.16274377942166
- type: f1
value: 68.01090953775066
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.75319435104237
- type: f1
value: 70.18035309201403
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.14391392064559
- type: f1
value: 61.48286540778145
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.70275722932078
- type: f1
value: 70.26164779846495
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.93813046402153
- type: f1
value: 58.8852862116525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.320107599193
- type: f1
value: 72.19836409602924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.65366509751176
- type: f1
value: 74.55188288799579
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.694014794889036
- type: f1
value: 58.11353311721067
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.37457969065231
- type: f1
value: 52.81306134311697
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.3086751849361
- type: f1
value: 45.396449765419376
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.151983860121064
- type: f1
value: 60.31762544281696
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.44788164088769
- type: f1
value: 71.68150151736367
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.81439139206455
- type: f1
value: 62.06735559105593
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.04303967720242
- type: f1
value: 66.68298851670133
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.43913920645595
- type: f1
value: 60.25605977560783
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.90316072629456
- type: f1
value: 65.1325924692381
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.63752521856086
- type: f1
value: 59.14284778039585
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.63080026899797
- type: f1
value: 70.89771864626877
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.10827168796234
- type: f1
value: 71.71954219691159
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.59515803631471
- type: f1
value: 70.05040128099003
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.83389374579691
- type: f1
value: 70.84877936562735
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.18628110289173
- type: f1
value: 68.97232927921841
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.99260255548083
- type: f1
value: 72.85139492157732
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.26227303295225
- type: f1
value: 65.08833655469431
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48621385339611
- type: f1
value: 64.43483199071298
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.14391392064559
- type: f1
value: 72.2580822579741
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.88567585743107
- type: f1
value: 58.3073765932569
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.38399462004034
- type: f1
value: 60.82139544252606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.58574310692671
- type: f1
value: 60.71443370385374
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.61398789509079
- type: f1
value: 70.99761812049401
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.73705447209146
- type: f1
value: 61.680849331794796
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.66778749159381
- type: f1
value: 71.17320646080115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.640215198386
- type: f1
value: 63.301805157015444
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.00672494956288
- type: f1
value: 70.26005548582106
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.42030934767989
- type: f1
value: 75.2074842882598
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.69266980497646
- type: f1
value: 70.94103167391192
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 28.91697191169135
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.434000079573313
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.96683513343383
- type: mrr
value: 31.967364078714834
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.5280000000000005
- type: map_at_10
value: 11.793
- type: map_at_100
value: 14.496999999999998
- type: map_at_1000
value: 15.783
- type: map_at_3
value: 8.838
- type: map_at_5
value: 10.07
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 51.531000000000006
- type: mrr_at_100
value: 52.205
- type: mrr_at_1000
value: 52.242999999999995
- type: mrr_at_3
value: 49.431999999999995
- type: mrr_at_5
value: 50.470000000000006
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 32.464999999999996
- type: ndcg_at_100
value: 28.927999999999997
- type: ndcg_at_1000
value: 37.629000000000005
- type: ndcg_at_3
value: 37.845
- type: ndcg_at_5
value: 35.147
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 23.932000000000002
- type: precision_at_100
value: 7.17
- type: precision_at_1000
value: 1.967
- type: precision_at_3
value: 35.397
- type: precision_at_5
value: 29.907
- type: recall_at_1
value: 5.5280000000000005
- type: recall_at_10
value: 15.568000000000001
- type: recall_at_100
value: 28.54
- type: recall_at_1000
value: 59.864
- type: recall_at_3
value: 9.822000000000001
- type: recall_at_5
value: 11.726
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.041000000000004
- type: map_at_10
value: 52.664
- type: map_at_100
value: 53.477
- type: map_at_1000
value: 53.505
- type: map_at_3
value: 48.510999999999996
- type: map_at_5
value: 51.036
- type: mrr_at_1
value: 41.338
- type: mrr_at_10
value: 55.071000000000005
- type: mrr_at_100
value: 55.672
- type: mrr_at_1000
value: 55.689
- type: mrr_at_3
value: 51.82
- type: mrr_at_5
value: 53.852
- type: ndcg_at_1
value: 41.338
- type: ndcg_at_10
value: 60.01800000000001
- type: ndcg_at_100
value: 63.409000000000006
- type: ndcg_at_1000
value: 64.017
- type: ndcg_at_3
value: 52.44799999999999
- type: ndcg_at_5
value: 56.571000000000005
- type: precision_at_1
value: 41.338
- type: precision_at_10
value: 9.531
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.416
- type: precision_at_5
value: 16.46
- type: recall_at_1
value: 37.041000000000004
- type: recall_at_10
value: 79.76299999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.851
- type: recall_at_3
value: 60.465
- type: recall_at_5
value: 69.906
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.952
- type: map_at_10
value: 83.758
- type: map_at_100
value: 84.406
- type: map_at_1000
value: 84.425
- type: map_at_3
value: 80.839
- type: map_at_5
value: 82.646
- type: mrr_at_1
value: 80.62
- type: mrr_at_10
value: 86.947
- type: mrr_at_100
value: 87.063
- type: mrr_at_1000
value: 87.064
- type: mrr_at_3
value: 85.96000000000001
- type: mrr_at_5
value: 86.619
- type: ndcg_at_1
value: 80.63
- type: ndcg_at_10
value: 87.64800000000001
- type: ndcg_at_100
value: 88.929
- type: ndcg_at_1000
value: 89.054
- type: ndcg_at_3
value: 84.765
- type: ndcg_at_5
value: 86.291
- type: precision_at_1
value: 80.63
- type: precision_at_10
value: 13.314
- type: precision_at_100
value: 1.525
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.1
- type: precision_at_5
value: 24.372
- type: recall_at_1
value: 69.952
- type: recall_at_10
value: 94.955
- type: recall_at_100
value: 99.38
- type: recall_at_1000
value: 99.96000000000001
- type: recall_at_3
value: 86.60600000000001
- type: recall_at_5
value: 90.997
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.41329517878427
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.171278362748666
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.213
- type: map_at_10
value: 9.895
- type: map_at_100
value: 11.776
- type: map_at_1000
value: 12.084
- type: map_at_3
value: 7.2669999999999995
- type: map_at_5
value: 8.620999999999999
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 31.112000000000002
- type: mrr_at_100
value: 32.274
- type: mrr_at_1000
value: 32.35
- type: mrr_at_3
value: 28.133000000000003
- type: mrr_at_5
value: 29.892999999999997
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.163999999999998
- type: ndcg_at_100
value: 24.738
- type: ndcg_at_1000
value: 30.316
- type: ndcg_at_3
value: 16.665
- type: ndcg_at_5
value: 14.478
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 8.74
- type: precision_at_100
value: 1.963
- type: precision_at_1000
value: 0.33
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 4.213
- type: recall_at_10
value: 17.698
- type: recall_at_100
value: 39.838
- type: recall_at_1000
value: 66.893
- type: recall_at_3
value: 9.418
- type: recall_at_5
value: 12.773000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.90453315738294
- type: cos_sim_spearman
value: 78.51197850080254
- type: euclidean_pearson
value: 80.09647123597748
- type: euclidean_spearman
value: 78.63548011514061
- type: manhattan_pearson
value: 80.10645285675231
- type: manhattan_spearman
value: 78.57861806068901
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.2616156846401
- type: cos_sim_spearman
value: 76.69713867850156
- type: euclidean_pearson
value: 77.97948563800394
- type: euclidean_spearman
value: 74.2371211567807
- type: manhattan_pearson
value: 77.69697879669705
- type: manhattan_spearman
value: 73.86529778022278
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.0293269315045
- type: cos_sim_spearman
value: 78.02555120584198
- type: euclidean_pearson
value: 78.25398100379078
- type: euclidean_spearman
value: 78.66963870599464
- type: manhattan_pearson
value: 78.14314682167348
- type: manhattan_spearman
value: 78.57692322969135
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.16989925136942
- type: cos_sim_spearman
value: 76.5996225327091
- type: euclidean_pearson
value: 77.8319003279786
- type: euclidean_spearman
value: 76.42824009468998
- type: manhattan_pearson
value: 77.69118862737736
- type: manhattan_spearman
value: 76.25568104762812
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.42012286935325
- type: cos_sim_spearman
value: 88.15654297884122
- type: euclidean_pearson
value: 87.34082819427852
- type: euclidean_spearman
value: 88.06333589547084
- type: manhattan_pearson
value: 87.25115596784842
- type: manhattan_spearman
value: 87.9559927695203
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.88222044996712
- type: cos_sim_spearman
value: 84.28476589061077
- type: euclidean_pearson
value: 83.17399758058309
- type: euclidean_spearman
value: 83.85497357244542
- type: manhattan_pearson
value: 83.0308397703786
- type: manhattan_spearman
value: 83.71554539935046
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.20682986257339
- type: cos_sim_spearman
value: 79.94567120362092
- type: euclidean_pearson
value: 79.43122480368902
- type: euclidean_spearman
value: 79.94802077264987
- type: manhattan_pearson
value: 79.32653021527081
- type: manhattan_spearman
value: 79.80961146709178
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.46578144394383
- type: cos_sim_spearman
value: 74.52496637472179
- type: euclidean_pearson
value: 72.2903807076809
- type: euclidean_spearman
value: 73.55549359771645
- type: manhattan_pearson
value: 72.09324837709393
- type: manhattan_spearman
value: 73.36743103606581
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 71.37272335116
- type: cos_sim_spearman
value: 71.26702117766037
- type: euclidean_pearson
value: 67.114829954434
- type: euclidean_spearman
value: 66.37938893947761
- type: manhattan_pearson
value: 66.79688574095246
- type: manhattan_spearman
value: 66.17292828079667
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.61016770129092
- type: cos_sim_spearman
value: 82.08515426632214
- type: euclidean_pearson
value: 80.557340361131
- type: euclidean_spearman
value: 80.37585812266175
- type: manhattan_pearson
value: 80.6782873404285
- type: manhattan_spearman
value: 80.6678073032024
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.00150745350108
- type: cos_sim_spearman
value: 87.83441972211425
- type: euclidean_pearson
value: 87.94826702308792
- type: euclidean_spearman
value: 87.46143974860725
- type: manhattan_pearson
value: 87.97560344306105
- type: manhattan_spearman
value: 87.5267102829796
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 64.76325252267235
- type: cos_sim_spearman
value: 63.32615095463905
- type: euclidean_pearson
value: 64.07920669155716
- type: euclidean_spearman
value: 61.21409893072176
- type: manhattan_pearson
value: 64.26308625680016
- type: manhattan_spearman
value: 61.2438185254079
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.82644463022595
- type: cos_sim_spearman
value: 76.50381269945073
- type: euclidean_pearson
value: 75.1328548315934
- type: euclidean_spearman
value: 75.63761139408453
- type: manhattan_pearson
value: 75.18610101241407
- type: manhattan_spearman
value: 75.30669266354164
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.49994164686832
- type: cos_sim_spearman
value: 86.73743986245549
- type: euclidean_pearson
value: 86.8272894387145
- type: euclidean_spearman
value: 85.97608491000507
- type: manhattan_pearson
value: 86.74960140396779
- type: manhattan_spearman
value: 85.79285984190273
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.58172210788469
- type: cos_sim_spearman
value: 80.17516468334607
- type: euclidean_pearson
value: 77.56537843470504
- type: euclidean_spearman
value: 77.57264627395521
- type: manhattan_pearson
value: 78.09703521695943
- type: manhattan_spearman
value: 78.15942760916954
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.7589932931751
- type: cos_sim_spearman
value: 80.15210089028162
- type: euclidean_pearson
value: 77.54135223516057
- type: euclidean_spearman
value: 77.52697996368764
- type: manhattan_pearson
value: 77.65734439572518
- type: manhattan_spearman
value: 77.77702992016121
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.16682365511267
- type: cos_sim_spearman
value: 79.25311267628506
- type: euclidean_pearson
value: 77.54882036762244
- type: euclidean_spearman
value: 77.33212935194827
- type: manhattan_pearson
value: 77.98405516064015
- type: manhattan_spearman
value: 77.85075717865719
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.10473294775917
- type: cos_sim_spearman
value: 61.82780474476838
- type: euclidean_pearson
value: 45.885111672377256
- type: euclidean_spearman
value: 56.88306351932454
- type: manhattan_pearson
value: 46.101218127323186
- type: manhattan_spearman
value: 56.80953694186333
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 45.781923079584146
- type: cos_sim_spearman
value: 55.95098449691107
- type: euclidean_pearson
value: 25.4571031323205
- type: euclidean_spearman
value: 49.859978118078935
- type: manhattan_pearson
value: 25.624938455041384
- type: manhattan_spearman
value: 49.99546185049401
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.00618133997907
- type: cos_sim_spearman
value: 66.57896677718321
- type: euclidean_pearson
value: 42.60118466388821
- type: euclidean_spearman
value: 62.8210759715209
- type: manhattan_pearson
value: 42.63446860604094
- type: manhattan_spearman
value: 62.73803068925271
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 28.460759121626943
- type: cos_sim_spearman
value: 34.13459007469131
- type: euclidean_pearson
value: 6.0917739325525195
- type: euclidean_spearman
value: 27.9947262664867
- type: manhattan_pearson
value: 6.16877864169911
- type: manhattan_spearman
value: 28.00664163971514
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.42546621771696
- type: cos_sim_spearman
value: 63.699663168970474
- type: euclidean_pearson
value: 38.12085278789738
- type: euclidean_spearman
value: 58.12329140741536
- type: manhattan_pearson
value: 37.97364549443335
- type: manhattan_spearman
value: 57.81545502318733
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 46.82241380954213
- type: cos_sim_spearman
value: 57.86569456006391
- type: euclidean_pearson
value: 31.80480070178813
- type: euclidean_spearman
value: 52.484000620130104
- type: manhattan_pearson
value: 31.952708554646097
- type: manhattan_spearman
value: 52.8560972356195
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 52.00447170498087
- type: cos_sim_spearman
value: 60.664116225735164
- type: euclidean_pearson
value: 33.87382555421702
- type: euclidean_spearman
value: 55.74649067458667
- type: manhattan_pearson
value: 33.99117246759437
- type: manhattan_spearman
value: 55.98749034923899
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.06497233105448
- type: cos_sim_spearman
value: 65.62968801135676
- type: euclidean_pearson
value: 47.482076613243905
- type: euclidean_spearman
value: 62.65137791498299
- type: manhattan_pearson
value: 47.57052626104093
- type: manhattan_spearman
value: 62.436916516613294
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.49397298562575
- type: cos_sim_spearman
value: 74.79604041187868
- type: euclidean_pearson
value: 49.661891561317795
- type: euclidean_spearman
value: 70.31535537621006
- type: manhattan_pearson
value: 49.553715741850006
- type: manhattan_spearman
value: 70.24779344636806
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.640574515348696
- type: cos_sim_spearman
value: 54.927959317689
- type: euclidean_pearson
value: 29.00139666967476
- type: euclidean_spearman
value: 41.86386566971605
- type: manhattan_pearson
value: 29.47411067730344
- type: manhattan_spearman
value: 42.337438424952786
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.14095292259312
- type: cos_sim_spearman
value: 73.99017581234789
- type: euclidean_pearson
value: 46.46304297872084
- type: euclidean_spearman
value: 60.91834114800041
- type: manhattan_pearson
value: 47.07072666338692
- type: manhattan_spearman
value: 61.70415727977926
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.27184653359575
- type: cos_sim_spearman
value: 77.76070252418626
- type: euclidean_pearson
value: 62.30586577544778
- type: euclidean_spearman
value: 75.14246629110978
- type: manhattan_pearson
value: 62.328196884927046
- type: manhattan_spearman
value: 75.1282792981433
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.59448528829957
- type: cos_sim_spearman
value: 70.37277734222123
- type: euclidean_pearson
value: 57.63145565721123
- type: euclidean_spearman
value: 66.10113048304427
- type: manhattan_pearson
value: 57.18897811586808
- type: manhattan_spearman
value: 66.5595511215901
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.37520607720838
- type: cos_sim_spearman
value: 69.92282148997948
- type: euclidean_pearson
value: 40.55768770125291
- type: euclidean_spearman
value: 55.189128944669605
- type: manhattan_pearson
value: 41.03566433468883
- type: manhattan_spearman
value: 55.61251893174558
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.791929533771835
- type: cos_sim_spearman
value: 66.45819707662093
- type: euclidean_pearson
value: 39.03686018511092
- type: euclidean_spearman
value: 56.01282695640428
- type: manhattan_pearson
value: 38.91586623619632
- type: manhattan_spearman
value: 56.69394943612747
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.82224468473866
- type: cos_sim_spearman
value: 59.467307194781164
- type: euclidean_pearson
value: 27.428459190256145
- type: euclidean_spearman
value: 60.83463107397519
- type: manhattan_pearson
value: 27.487391578496638
- type: manhattan_spearman
value: 61.281380460246496
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 16.306666792752644
- type: cos_sim_spearman
value: 39.35486427252405
- type: euclidean_pearson
value: -2.7887154897955435
- type: euclidean_spearman
value: 27.1296051831719
- type: manhattan_pearson
value: -3.202291270581297
- type: manhattan_spearman
value: 26.32895849218158
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.67006803805076
- type: cos_sim_spearman
value: 73.24670207647144
- type: euclidean_pearson
value: 46.91884681500483
- type: euclidean_spearman
value: 16.903085094570333
- type: manhattan_pearson
value: 46.88391675325812
- type: manhattan_spearman
value: 28.17180849095055
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 83.79555591223837
- type: cos_sim_spearman
value: 85.63658602085185
- type: euclidean_pearson
value: 85.22080894037671
- type: euclidean_spearman
value: 85.54113580167038
- type: manhattan_pearson
value: 85.1639505960118
- type: manhattan_spearman
value: 85.43502665436196
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.73900991689766
- type: mrr
value: 94.81624131133934
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.678000000000004
- type: map_at_10
value: 65.135
- type: map_at_100
value: 65.824
- type: map_at_1000
value: 65.852
- type: map_at_3
value: 62.736000000000004
- type: map_at_5
value: 64.411
- type: mrr_at_1
value: 58.333
- type: mrr_at_10
value: 66.5
- type: mrr_at_100
value: 67.053
- type: mrr_at_1000
value: 67.08
- type: mrr_at_3
value: 64.944
- type: mrr_at_5
value: 65.89399999999999
- type: ndcg_at_1
value: 58.333
- type: ndcg_at_10
value: 69.34700000000001
- type: ndcg_at_100
value: 72.32
- type: ndcg_at_1000
value: 73.014
- type: ndcg_at_3
value: 65.578
- type: ndcg_at_5
value: 67.738
- type: precision_at_1
value: 58.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 55.678000000000004
- type: recall_at_10
value: 80.72200000000001
- type: recall_at_100
value: 93.93299999999999
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 70.783
- type: recall_at_5
value: 75.978
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74653465346535
- type: cos_sim_ap
value: 93.01476369929063
- type: cos_sim_f1
value: 86.93009118541033
- type: cos_sim_precision
value: 88.09034907597535
- type: cos_sim_recall
value: 85.8
- type: dot_accuracy
value: 99.22970297029703
- type: dot_ap
value: 51.58725659485144
- type: dot_f1
value: 53.51351351351352
- type: dot_precision
value: 58.235294117647065
- type: dot_recall
value: 49.5
- type: euclidean_accuracy
value: 99.74356435643564
- type: euclidean_ap
value: 92.40332894384368
- type: euclidean_f1
value: 86.97838109602817
- type: euclidean_precision
value: 87.46208291203236
- type: euclidean_recall
value: 86.5
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 92.01320815721121
- type: manhattan_f1
value: 86.4135864135864
- type: manhattan_precision
value: 86.32734530938124
- type: manhattan_recall
value: 86.5
- type: max_accuracy
value: 99.74653465346535
- type: max_ap
value: 93.01476369929063
- type: max_f1
value: 86.97838109602817
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 55.2660514302523
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.4637783572547
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.41377758357637
- type: mrr
value: 50.138451213818854
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.887846011166594
- type: cos_sim_spearman
value: 30.10823258355903
- type: dot_pearson
value: 12.888049550236385
- type: dot_spearman
value: 12.827495903098123
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.21
- type: map_at_10
value: 1.667
- type: map_at_100
value: 9.15
- type: map_at_1000
value: 22.927
- type: map_at_3
value: 0.573
- type: map_at_5
value: 0.915
- type: mrr_at_1
value: 80
- type: mrr_at_10
value: 87.167
- type: mrr_at_100
value: 87.167
- type: mrr_at_1000
value: 87.167
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 87.167
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 69.757
- type: ndcg_at_100
value: 52.402
- type: ndcg_at_1000
value: 47.737
- type: ndcg_at_3
value: 71.866
- type: ndcg_at_5
value: 72.225
- type: precision_at_1
value: 80
- type: precision_at_10
value: 75
- type: precision_at_100
value: 53.959999999999994
- type: precision_at_1000
value: 21.568
- type: precision_at_3
value: 76.667
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.21
- type: recall_at_10
value: 1.9189999999999998
- type: recall_at_100
value: 12.589
- type: recall_at_1000
value: 45.312000000000005
- type: recall_at_3
value: 0.61
- type: recall_at_5
value: 1.019
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 90.06
- type: precision
value: 89.17333333333333
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.06936416184971
- type: f1
value: 50.87508028259473
- type: precision
value: 48.97398843930635
- type: recall
value: 56.06936416184971
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.3170731707317
- type: f1
value: 52.96080139372822
- type: precision
value: 51.67861124382864
- type: recall
value: 57.3170731707317
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.67333333333333
- type: precision
value: 91.90833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97.07333333333332
- type: precision
value: 96.79500000000002
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.2
- type: precision
value: 92.48333333333333
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.9
- type: f1
value: 91.26666666666667
- type: precision
value: 90.59444444444445
- type: recall
value: 92.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 34.32835820895522
- type: f1
value: 29.074180380150533
- type: precision
value: 28.068207322920596
- type: recall
value: 34.32835820895522
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.5
- type: f1
value: 74.3945115995116
- type: precision
value: 72.82967843459222
- type: recall
value: 78.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34146341463415
- type: f1
value: 61.2469400518181
- type: precision
value: 59.63977756660683
- type: recall
value: 66.34146341463415
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.9
- type: f1
value: 76.90349206349207
- type: precision
value: 75.32921568627451
- type: recall
value: 80.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.93317132442284
- type: f1
value: 81.92519105034295
- type: precision
value: 80.71283920615635
- type: recall
value: 84.93317132442284
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.1304347826087
- type: f1
value: 65.22394755003451
- type: precision
value: 62.912422360248435
- type: recall
value: 71.1304347826087
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.82608695652173
- type: f1
value: 75.55693581780538
- type: precision
value: 73.79420289855072
- type: recall
value: 79.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74
- type: f1
value: 70.51022222222223
- type: precision
value: 69.29673599347512
- type: recall
value: 74
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 74.14238095238095
- type: precision
value: 72.27214285714285
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.97466827503016
- type: f1
value: 43.080330405420874
- type: precision
value: 41.36505499593557
- type: recall
value: 48.97466827503016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.60000000000001
- type: f1
value: 86.62333333333333
- type: precision
value: 85.225
- type: recall
value: 89.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.2
- type: f1
value: 39.5761253006253
- type: precision
value: 37.991358436312
- type: recall
value: 45.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.70333333333333
- type: precision
value: 85.53166666666667
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.095238095238095
- type: f1
value: 44.60650460650461
- type: precision
value: 42.774116796477045
- type: recall
value: 50.095238095238095
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.4
- type: f1
value: 58.35967261904762
- type: precision
value: 56.54857142857143
- type: recall
value: 63.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 87.075
- type: precision
value: 86.12095238095239
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.90333333333334
- type: precision
value: 95.50833333333333
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.9
- type: f1
value: 88.6288888888889
- type: precision
value: 87.61607142857142
- type: recall
value: 90.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.2
- type: f1
value: 60.54377630539395
- type: precision
value: 58.89434482711381
- type: recall
value: 65.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87
- type: f1
value: 84.32412698412699
- type: precision
value: 83.25527777777778
- type: recall
value: 87
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.7
- type: f1
value: 63.07883541295306
- type: precision
value: 61.06117424242426
- type: recall
value: 68.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.78333333333335
- type: precision
value: 90.86666666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 96.96666666666667
- type: precision
value: 96.61666666666667
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27493261455525
- type: f1
value: 85.90745732255168
- type: precision
value: 84.91389637616052
- type: recall
value: 88.27493261455525
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5982905982906
- type: f1
value: 88.4900284900285
- type: precision
value: 87.57122507122507
- type: recall
value: 90.5982905982906
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.90769841269842
- type: precision
value: 85.80178571428571
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.5
- type: f1
value: 78.36796536796538
- type: precision
value: 76.82196969696969
- type: recall
value: 82.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.48846960167715
- type: f1
value: 66.78771089148448
- type: precision
value: 64.98302885095339
- type: recall
value: 71.48846960167715
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.50333333333333
- type: precision
value: 91.77499999999999
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.20622568093385
- type: f1
value: 66.83278891450098
- type: precision
value: 65.35065777283677
- type: recall
value: 71.20622568093385
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.717948717948715
- type: f1
value: 43.53146853146853
- type: precision
value: 42.04721204721204
- type: recall
value: 48.717948717948715
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.5
- type: f1
value: 53.8564991863928
- type: precision
value: 52.40329436122275
- type: recall
value: 58.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.8
- type: f1
value: 88.29
- type: precision
value: 87.09166666666667
- type: recall
value: 90.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.28971962616822
- type: f1
value: 62.63425307817832
- type: precision
value: 60.98065939771546
- type: recall
value: 67.28971962616822
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 75.5264472455649
- type: precision
value: 74.38205086580086
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.7
- type: f1
value: 86.10809523809525
- type: precision
value: 85.07602564102565
- type: recall
value: 88.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.99999999999999
- type: f1
value: 52.85487521402737
- type: precision
value: 51.53985162713104
- type: recall
value: 56.99999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94
- type: f1
value: 92.45333333333333
- type: precision
value: 91.79166666666667
- type: recall
value: 94
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.61333333333333
- type: precision
value: 89.83333333333331
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34555555555555
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.2
- type: f1
value: 76.6563035113035
- type: precision
value: 75.3014652014652
- type: recall
value: 80.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.7
- type: f1
value: 82.78689263765207
- type: precision
value: 82.06705086580087
- type: recall
value: 84.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.33333333333333
- type: f1
value: 45.461523661523664
- type: precision
value: 43.93545574795575
- type: recall
value: 50.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.6000000000000005
- type: f1
value: 5.442121400446441
- type: precision
value: 5.146630385487529
- type: recall
value: 6.6000000000000005
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85
- type: f1
value: 81.04666666666667
- type: precision
value: 79.25
- type: recall
value: 85
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.32142857142857
- type: f1
value: 42.333333333333336
- type: precision
value: 40.69196428571429
- type: recall
value: 47.32142857142857
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 30.735455543358945
- type: f1
value: 26.73616790022338
- type: precision
value: 25.397823220451283
- type: recall
value: 30.735455543358945
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 25.1
- type: f1
value: 21.975989896371022
- type: precision
value: 21.059885632257203
- type: recall
value: 25.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.75666666666666
- type: precision
value: 92.06166666666665
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.74
- type: precision
value: 92.09166666666667
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.3
- type: f1
value: 66.922442002442
- type: precision
value: 65.38249567099568
- type: recall
value: 71.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 40.300000000000004
- type: f1
value: 35.78682789299971
- type: precision
value: 34.66425128716588
- type: recall
value: 40.300000000000004
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.82333333333334
- type: precision
value: 94.27833333333334
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 51.1
- type: f1
value: 47.179074753133584
- type: precision
value: 46.06461044702424
- type: recall
value: 51.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.7
- type: f1
value: 84.71
- type: precision
value: 83.46166666666667
- type: recall
value: 87.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.68333333333334
- type: precision
value: 94.13333333333334
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.39999999999999
- type: f1
value: 82.5577380952381
- type: precision
value: 81.36833333333334
- type: recall
value: 85.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.16788321167883
- type: f1
value: 16.948865627297987
- type: precision
value: 15.971932568647897
- type: recall
value: 21.16788321167883
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.9
- type: f1
value: 5.515526831658907
- type: precision
value: 5.141966366966367
- type: recall
value: 6.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39666666666668
- type: precision
value: 90.58666666666667
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 89.95666666666666
- type: precision
value: 88.92833333333333
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.76190476190477
- type: f1
value: 74.93386243386244
- type: precision
value: 73.11011904761904
- type: recall
value: 79.76190476190477
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.799999999999999
- type: f1
value: 6.921439712248537
- type: precision
value: 6.489885109680683
- type: recall
value: 8.799999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.75569358178054
- type: f1
value: 40.34699501312631
- type: precision
value: 38.57886764719063
- type: recall
value: 45.75569358178054
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 89.08333333333333
- type: precision
value: 88.01666666666668
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.06690476190477
- type: precision
value: 91.45095238095239
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 7.5
- type: f1
value: 6.200363129378736
- type: precision
value: 5.89115314822466
- type: recall
value: 7.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.59307359307358
- type: f1
value: 68.38933553219267
- type: precision
value: 66.62698412698413
- type: recall
value: 73.59307359307358
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.8473282442748
- type: f1
value: 64.72373682297346
- type: precision
value: 62.82834214131924
- type: recall
value: 69.8473282442748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5254730713246
- type: f1
value: 96.72489082969432
- type: precision
value: 96.33672974284326
- type: recall
value: 97.5254730713246
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.6
- type: f1
value: 72.42746031746033
- type: precision
value: 71.14036630036631
- type: recall
value: 75.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.24293785310734
- type: f1
value: 88.86064030131826
- type: precision
value: 87.73540489642184
- type: recall
value: 91.24293785310734
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.2
- type: f1
value: 4.383083659794954
- type: precision
value: 4.027861324289673
- type: recall
value: 6.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 84.09428571428572
- type: precision
value: 83.00333333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.699999999999996
- type: f1
value: 56.1584972394755
- type: precision
value: 54.713456330903135
- type: recall
value: 60.699999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.2
- type: f1
value: 80.66190476190475
- type: precision
value: 79.19690476190476
- type: recall
value: 84.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.33
- type: precision
value: 90.45
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.3
- type: f1
value: 5.126828976748276
- type: precision
value: 4.853614328966668
- type: recall
value: 6.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.76943699731903
- type: f1
value: 77.82873739308057
- type: precision
value: 76.27622452019234
- type: recall
value: 81.76943699731903
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.29666666666665
- type: precision
value: 89.40333333333334
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 29.249011857707508
- type: f1
value: 24.561866096392947
- type: precision
value: 23.356583740215456
- type: recall
value: 29.249011857707508
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.23943661971832
- type: precision
value: 71.66666666666667
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.35928143712575
- type: f1
value: 15.997867865075824
- type: precision
value: 14.882104658301346
- type: recall
value: 20.35928143712575
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 90.25999999999999
- type: precision
value: 89.45333333333335
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.15270935960591
- type: f1
value: 19.65673625772148
- type: precision
value: 18.793705293464992
- type: recall
value: 23.15270935960591
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.154929577464785
- type: f1
value: 52.3868463305083
- type: precision
value: 50.14938113529662
- type: recall
value: 59.154929577464785
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.51282051282051
- type: f1
value: 66.8089133089133
- type: precision
value: 65.37645687645687
- type: recall
value: 70.51282051282051
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93
- type: precision
value: 92.23333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.62212943632568
- type: f1
value: 34.3278276962583
- type: precision
value: 33.07646935732408
- type: recall
value: 38.62212943632568
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 28.1
- type: f1
value: 23.579609223054604
- type: precision
value: 22.39622774921555
- type: recall
value: 28.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27361563517914
- type: f1
value: 85.12486427795874
- type: precision
value: 83.71335504885994
- type: recall
value: 88.27361563517914
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.6
- type: f1
value: 86.39928571428571
- type: precision
value: 85.4947557997558
- type: recall
value: 88.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.77952380952381
- type: precision
value: 82.67602564102565
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.52755905511812
- type: f1
value: 75.3055868016498
- type: precision
value: 73.81889763779527
- type: recall
value: 79.52755905511812
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.76261904761905
- type: precision
value: 72.11670995670995
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 53.8781163434903
- type: f1
value: 47.25804051288816
- type: precision
value: 45.0603482390186
- type: recall
value: 53.8781163434903
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.10000000000001
- type: f1
value: 88.88
- type: precision
value: 87.96333333333334
- type: recall
value: 91.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.46153846153847
- type: f1
value: 34.43978243978244
- type: precision
value: 33.429487179487175
- type: recall
value: 38.46153846153847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.9
- type: f1
value: 86.19888888888887
- type: precision
value: 85.07440476190476
- type: recall
value: 88.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.9
- type: f1
value: 82.58857142857143
- type: precision
value: 81.15666666666667
- type: recall
value: 85.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 83.36999999999999
- type: precision
value: 81.86833333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.51415094339622
- type: f1
value: 63.195000099481234
- type: precision
value: 61.394033442972116
- type: recall
value: 68.51415094339622
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 86.14603174603175
- type: precision
value: 85.1162037037037
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.62043795620438
- type: f1
value: 94.40389294403892
- type: precision
value: 93.7956204379562
- type: recall
value: 95.62043795620438
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.8
- type: f1
value: 78.6532178932179
- type: precision
value: 77.46348795840176
- type: recall
value: 81.8
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.603
- type: map_at_10
value: 8.5
- type: map_at_100
value: 12.985
- type: map_at_1000
value: 14.466999999999999
- type: map_at_3
value: 4.859999999999999
- type: map_at_5
value: 5.817
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 42.331
- type: mrr_at_100
value: 43.592999999999996
- type: mrr_at_1000
value: 43.592999999999996
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 39.966
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 21.353
- type: ndcg_at_100
value: 31.087999999999997
- type: ndcg_at_1000
value: 43.163000000000004
- type: ndcg_at_3
value: 22.999
- type: ndcg_at_5
value: 21.451
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 19.387999999999998
- type: precision_at_100
value: 6.265
- type: precision_at_1000
value: 1.4160000000000001
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 21.224
- type: recall_at_1
value: 2.603
- type: recall_at_10
value: 14.474
- type: recall_at_100
value: 40.287
- type: recall_at_1000
value: 76.606
- type: recall_at_3
value: 5.978
- type: recall_at_5
value: 7.819
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.7848
- type: ap
value: 13.661023167088224
- type: f1
value: 53.61686134460943
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.28183361629882
- type: f1
value: 61.55481034919965
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 35.972128420092396
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59933241938367
- type: cos_sim_ap
value: 72.20760361208136
- type: cos_sim_f1
value: 66.4447731755424
- type: cos_sim_precision
value: 62.35539102267469
- type: cos_sim_recall
value: 71.10817941952506
- type: dot_accuracy
value: 78.98313166835548
- type: dot_ap
value: 44.492521645493795
- type: dot_f1
value: 45.814889336016094
- type: dot_precision
value: 37.02439024390244
- type: dot_recall
value: 60.07915567282321
- type: euclidean_accuracy
value: 85.3907134767837
- type: euclidean_ap
value: 71.53847289080343
- type: euclidean_f1
value: 65.95952206778834
- type: euclidean_precision
value: 61.31006346328196
- type: euclidean_recall
value: 71.37203166226914
- type: manhattan_accuracy
value: 85.40859510043511
- type: manhattan_ap
value: 71.49664104395515
- type: manhattan_f1
value: 65.98569969356485
- type: manhattan_precision
value: 63.928748144482924
- type: manhattan_recall
value: 68.17941952506597
- type: max_accuracy
value: 85.59933241938367
- type: max_ap
value: 72.20760361208136
- type: max_f1
value: 66.4447731755424
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.83261536073273
- type: cos_sim_ap
value: 85.48178133644264
- type: cos_sim_f1
value: 77.87816307403935
- type: cos_sim_precision
value: 75.88953021114926
- type: cos_sim_recall
value: 79.97382198952879
- type: dot_accuracy
value: 79.76287499514883
- type: dot_ap
value: 59.17438838475084
- type: dot_f1
value: 56.34566667855996
- type: dot_precision
value: 52.50349092359864
- type: dot_recall
value: 60.794579611949494
- type: euclidean_accuracy
value: 88.76857996662397
- type: euclidean_ap
value: 85.22764834359887
- type: euclidean_f1
value: 77.65379751543554
- type: euclidean_precision
value: 75.11152683839401
- type: euclidean_recall
value: 80.37419156144134
- type: manhattan_accuracy
value: 88.6987231730508
- type: manhattan_ap
value: 85.18907981724007
- type: manhattan_f1
value: 77.51967028849757
- type: manhattan_precision
value: 75.49992701795358
- type: manhattan_recall
value: 79.65044656606098
- type: max_accuracy
value: 88.83261536073273
- type: max_ap
value: 85.48178133644264
- type: max_f1
value: 77.87816307403935
---
## Multilingual-E5-base
[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 12 layers and the embedding size is 768.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-base')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Supported Languages
This model is initialized from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
and continually trained on a mixture of multilingual datasets.
It supports 100 languages from xlm-roberta,
but low-resource languages may see performance degradation.
## Training Details
**Initialization**: [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
**First stage**: contrastive pre-training with weak supervision
| Dataset | Weak supervision | # of text pairs |
|--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------|
| Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B |
| [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M |
| [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B |
| [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M |
| Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M |
| [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M |
| [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M |
| [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M |
| [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M |
**Second stage**: supervised fine-tuning
| Dataset | Language | # of text pairs |
|----------------------------------------------------------------------------------------|--------------|-----------------|
| [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k |
| [NQ](https://github.com/facebookresearch/DPR) | English | 70k |
| [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k |
| [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k |
| [ELI5](https://huggingface.co/datasets/eli5) | English | 500k |
| [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k |
| [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [SQuAD](https://huggingface.co/datasets/squad) | English | 87k |
| [Quora](https://huggingface.co/datasets/quora) | English | 150k |
| [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k |
| [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k |
For all labeled datasets, we only use their training sets for fine-tuning.
For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672).
## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)
| Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th |
|-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- |
| BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 |
| mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 |
| BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 |
| | |
| multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 |
| multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 |
| multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 |
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/multilingual-e5-base')
input_texts = [
'query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model is trained; otherwise you will see a performance degradation.
Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores distribute around 0.7 to 1.0?**
This is a known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
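As a minimal illustration with made-up numbers, downstream usage only needs the ranking induced by the scores:
```python
import torch

# Hypothetical query-by-passage cosine scores (scaled by 100, as in the usage example above).
scores = torch.tensor([[85.2, 72.4],
                       [70.1, 88.9]])
best_passage_per_query = scores.argmax(dim=1)
print(best_passage_per_query.tolist())  # -> [0, 1]; only the relative order matters
```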
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens.
| [
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
KomeijiForce/Cuckoo-C4-Instruct | KomeijiForce | question-answering | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"question-answering",
"arxiv:2502.11275",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-16T22:52:28 | 2025-02-19T20:33:20 | 62 | 1 | ---
library_name: transformers
license: mit
pipeline_tag: question-answering
---
# Cuckoo 🐦 [[Github]](https://github.com/KomeijiForce/Cuckoo)
The Cuckoo family consists of extractive question answering models, as described in the paper [Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest](https://hf.co/papers/2502.11275).
Cuckoo is a small (300M) information extraction (IE) model that imitates the next token prediction paradigm of large language models. Instead of retrieving from the vocabulary, Cuckoo predicts the next tokens by tagging them in the given input context as shown below:

Cuckoo is substantially different from previous IE pre-training because it can use any text resource to enhance itself, especially by taking a free ride on data curated for LLMs!

Currently, we open-source checkpoints of Cuckoos that are pre-trained on:
1) 100M next tokens extraction (NTE) instances converted from C4. ([Cuckoo-C4](https://huggingface.co/KomeijiForce/Cuckoo-C4) 🐦)
2) Cuckoo-C4 + 2.6M next token extraction (NTE) instances converted from a supervised fine-tuning dataset, TuluV3. ([Cuckoo-C4-Instruct](https://huggingface.co/KomeijiForce/Cuckoo-C4-Instruct) 🐦🛠️)
3) Cuckoo-C4-Instruct + MultiNERD, MetaIE, NuNER, MRQA (excluding SQuAD, DROP). ([Cuckoo-C4-Rainbow](https://huggingface.co/KomeijiForce/Cuckoo-C4-Rainbow) 🌈🐦🛠️)
4) Cuckoo-C4-Rainbow + Multiple NER Datasets, WizardLM Dataset, Multiple Choice QA Datasets, MMLU, SQuAD, DROP, MNLI, SNLI. ([Cuckoo-C4-Super-Rainbow](https://huggingface.co/KomeijiForce/Cuckoo-C4-Super-Rainbow) 🦸🌈🐦🛠️)
## Performance Demonstration 🚀
Begin your journey with Cuckoo to experience unimaginable adaptation efficiency for all kinds of IE tasks!
| | CoNLL2003 | BioNLP2004 | MIT-Restaurant | MIT-Movie | Avg. | CoNLL2004 | ADE | Avg. | SQuAD | SQuAD-V2 | DROP | Avg. |
|----------------------|-----------|-----------|----------------|-----------|------|-----------|-----|------|-------|----------|------|------|
| OPT-C4-TuluV3 | 50.24 | 39.76 | 58.91 | 56.33 | 50.56 | 47.14 | 45.66 | 46.40 | 39.80 | 53.81 | 31.00 | 41.54 |
| RoBERTa | 33.75 | 32.91 | 62.15 | 58.32 | 46.80 | 34.16 | 2.15 | 18.15 | 31.86 | 48.55 | 9.16 | 29.86 |
| MRQA | 72.45 | 55.93 | 68.68 | 66.26 | 65.83 | 66.23 | 67.44 | 66.84 | 80.07 | 66.22 | 54.46 | 66.92 |
| MultiNERD | 66.78 | 54.62 | 64.16 | 66.30 | 60.59 | 57.52 | 45.10 | 51.31 | 42.85 | 50.99 | 30.12 | 41.32 |
| NuNER | 74.15 | 56.36 | 68.57 | 64.88 | 65.99 | 65.12 | 63.71 | 64.42 | 61.60 | 52.67 | 37.37 | 50.55 |
| MetaIE | 71.33 | 55.63 | 70.08 | 65.23 | 65.57 | 64.81 | 64.40 | 64.61 | 74.59 | 62.54 | 30.73 | 55.95 |
| Cuckoo 🐦🛠️ | 73.60 | 57.00 | 67.63 | 67.12 | 66.34 | 69.57 | 71.70 | 70.63 | 77.47 | 64.06 | 54.25 | 65.26 |
| └─ Only Pre-train 🐦 | 72.46 | 55.87 | 66.87 | 67.23 | 65.61 | 68.14 | 69.39 | 68.77 | 75.64 | 63.36 | 52.81 | 63.94 |
| └─ Only Post-train | 72.80 | 56.10 | 66.02 | 67.10 | 65.51 | 68.66 | 69.75 | 69.21 | 77.05 | 62.39 | 54.80 | 64.75 |
| Rainbow Cuckoo 🌈🐦🛠️ | 79.94 | 58.39 | 70.30 | 67.00 | **68.91** | 70.47 | 76.05 | **73.26** | 86.57 | 69.41 | 64.64 | **73.54** |
*(Super Rainbow Cuckoo 🦸🌈🐦🛠️ uses training sets except CoNLL2004 and ADE to boost its performance)*
| | CoNLL2003 | BioNLP2004 | MIT-Restaurant | MIT-Movie | Avg. | CoNLL2004 | ADE | Avg. | SQuAD | SQuAD-V2 | DROP | Avg. |
|----------------------|-----------|-----------|----------------|-----------|-------|-----------|-------|-------|-------|----------|-------|-------|
| Super Rainbow Cuckoo 🦸🌈🐦🛠️ | 88.38 | 68.33 | 76.79 | 69.39 | **75.22** | 72.96 | 80.06 | **76.51** | 89.54 | 74.52 | 74.89 | **79.65** |
## Quick Experience with Cuckoo in Next Tokens Extraction ⚡
We recommend using the strongest Super Rainbow Cuckoo 🦸🌈🐦🛠️ for zero-shot extraction. You can directly run the cases below in ```case_next_tokens_extraction.py```.
1️⃣ First load the model and the tokenizers
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch
import spacy
nlp = spacy.load("en_core_web_sm")
device = torch.device("cuda:0")
path = f"KomeijiForce/Cuckoo-C4-Super-Rainbow"
tokenizer = AutoTokenizer.from_pretrained(path)
tagger = AutoModelForTokenClassification.from_pretrained(path).to(device)
```
2️⃣ Define the next tokens extraction function
```python
def next_tokens_extraction(text):
def find_sequences(lst):
sequences = []
i = 0
while i < len(lst):
if lst[i] == 0:
start = i
end = i
i += 1
while i < len(lst) and lst[i] == 1:
end = i
i += 1
sequences.append((start, end+1))
else:
i += 1
return sequences
text = " ".join([token.text for token in nlp(text)])
inputs = tokenizer(text, return_tensors="pt").to(device)
tag_predictions = tagger(**inputs).logits[0].argmax(-1)
predictions = [tokenizer.decode(inputs.input_ids[0, seq[0]:seq[1]]).strip() for seq in find_sequences(tag_predictions)]
return predictions
```
3️⃣ Call the function for extraction!
Case 1: Basic entity and relation understanding
```python
text = "Tom and Jack went to their trip in Paris."
for question in [
"What is the person mentioned here?",
"What is the city mentioned here?",
"Who goes with Tom together?",
"What do Tom and Jack go to Paris for?",
"Where does George live in?",
]:
prompt = f"User:\n\n{text}\n\nQuestion: {question}\n\nAssistant:"
predictions = next_tokens_extraction(prompt)
print(question, predictions)
```
You will get things like,
```
What is the person mentioned here? ['Tom', 'Jack']
What is the city mentioned here? ['Paris']
Who goes with Tom together? ['Jack']
What do Tom and Jack go to Paris for? ['trip']
Where does George live in? []
```
where [] indicates that Cuckoo thinks there are no next tokens to extract.
Case 2: Longer context
```python
passage = f'''Ludwig van Beethoven (17 December 1770 – 26 March 1827) was a German composer and pianist. He is one of the most revered figures in the history of Western music; his works rank among the most performed of the classical music repertoire and span the transition from the Classical period to the Romantic era in classical music. His early period, during which he forged his craft, is typically considered to have lasted until 1802. From 1802 to around 1812, his middle period showed an individual development from the styles of Joseph Haydn and Wolfgang Amadeus Mozart, and is sometimes characterised as heroic. During this time, Beethoven began to grow increasingly deaf. In his late period, from 1812 to 1827, he extended his innovations in musical form and expression.'''
for question in [
"What are the people mentioned here?",
"What is the job of Beethoven?",
"How famous is Beethoven?",
"When did Beethoven's middle period showed an individual development?",
]:
text = f"User:\n\n{passage}\n\nQuestion: {question}\n\nAssistant:"
predictions = next_tokens_extraction(text)
print(question, predictions)
```
You will get things like,
```
What are the people mentioned here? ['Ludwig van Beethoven', 'Joseph Haydn', 'Wolfgang Amadeus Mozart']
What is the job of Beethoven? ['composer and pianist']
How famous is Beethoven? ['one of the most revered figures in the history of Western music']
When did Beethoven's middle period showed an individual development? ['1802']
```
Case 3: Knowledge quiz
```python
for obj in ["grass", "sea", "fire", "night"]:\n text = f"User:\\n\\nChoices:\\nred\\nblue\\ngreen.\\n\\nQuestion: What is the color of the {obj}?\\n\\nAssistant:\\n\\nAnswer:"
predictions = next_tokens_extraction(text)
print(obj, predictions)
```
You will get things like,
```
grass ['green']
sea ['blue']
fire ['red']
night []
```
which shows that Cuckoo is not just extracting any plausible span, but has the knowledge to understand the context.
## Few-shot Adaptation 🎯
Cuckoo 🐦 is an expert in few-shot adaptation to your own tasks. Taking CoNLL2003 as an example, run ```bash run_downstream.sh conll2003.5shot KomeijiForce/Cuckoo-C4-Rainbow``` and you will get a fine-tuned model in ```models/cuckoo-conll2003.5shot```. You can then benchmark the model with the script ```python eval_conll2003.py```, which will show an F1 performance of around 80.
You can also train an adaptation to machine reading comprehension (SQuAD): run ```bash run_downstream.sh squad.32shot KomeijiForce/Cuckoo-C4-Rainbow``` and you will get a fine-tuned model in ```models/cuckoo-squad.32shot```. You can then benchmark the model with the script ```python eval_squad.py```, which will show an F1 performance of around 88.
For fine-tuning on your own task, you need to create a JSON Lines file in which each line contains {"words": [...], "ner": [...]}. For example:
```json
{"words": ["I", "am", "John", "Smith", ".", "Person", ":"], "ner": ["O", "O", "B", "I", "O", "O", "O"]}
```
<img src="https://github.com/user-attachments/assets/ef177466-d915-46d2-9201-5e672bb6ec23" style="width: 40%;" />
which indicates "John Smith" to be predicted as the next tokens.
You can refer to some prompts shown below to get started (a short usage sketch follows the table):
| **Type** | **User Input** | **Assistant Response** |
|---------------------|----------------------------------------------------------------------------------------------------|----------------------------------------------------|
| Entity | **User:** [Context] Question: What is the [Label] mentioned? | **Assistant:** Answer: The [Label] is |
| Relation (Kill) | **User:** [Context] Question: Who does [Entity] kill? | **Assistant:** Answer: [Entity] kills |
| Relation (Live) | **User:** [Context] Question: Where does [Entity] live in? | **Assistant:** Answer: [Entity] lives in |
| Relation (Work) | **User:** [Context] Question: Who does [Entity] work for? | **Assistant:** Answer: [Entity] works for |
| Relation (Located) | **User:** [Context] Question: Where is [Entity] located in? | **Assistant:** Answer: [Entity] is located in |
| Relation (Based) | **User:** [Context] Question: Where is [Entity] based in? | **Assistant:** Answer: [Entity] is based in |
| Relation (Adverse) | **User:** [Context] Question: What is the adverse effect of [Entity]? | **Assistant:** Answer: The adverse effect of [Entity] is |
| Query | **User:** [Context] Question: [Question] | **Assistant:** Answer: |
| Instruction (Entity)| **User:** [Context] Question: What is the [Label] mentioned? ([Instruction]) | **Assistant:** Answer: The [Label] is |
| Instruction (Query) | **User:** [Context] Question: [Question] ([Instruction]) | **Assistant:** Answer: |
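For instance, here is a minimal sketch of filling in the entity template above and passing it to the `next_tokens_extraction` helper defined earlier; the context, label, and exact whitespace layout are assumptions based on the zero-shot cases above:
```python
context = "Tom and Jack went to their trip in Paris."
label = "person"
prompt = f"User:\n\n{context}\n\nQuestion: What is the {label} mentioned?\n\nAssistant:\n\nAnswer: The {label} is"
predictions = next_tokens_extraction(prompt)
print(predictions)  # expected to behave like the zero-shot cases above, e.g. ['Tom', 'Jack']
```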
After building your own downstream dataset, save it into ```my_downstream.json```, and then run the command ```bash run_downstream.sh my_downstream KomeijiForce/Cuckoo-C4-Rainbow```. You will find an adapted Cuckoo in ```models/cuckoo-my_downstream```.
## Fly your own Cuckoo 🪽
We include the script that transforms texts into NTE instances in the file ```nte_data_collection.py```, which takes C4 as an example; the converted results can be checked in ```cuckoo.c4.example.json```. The script is designed to be easily adapted to other resources such as entities, queries, and questions, so you can convert your own data to NTE and fly your own Cuckoo! Run the ```run_cuckoo.sh``` script to try an example pre-training.
```bash
python run_ner.py \
--model_name_or_path roberta-large \
--train_file cuckoo.c4.example.json \
--output_dir models/cuckoo-c4-example \
--per_device_train_batch_size 4\
--gradient_accumulation_steps 16\
--num_train_epochs 1\
--save_steps 1000\
--learning_rate 0.00001\
--do_train \
--overwrite_output_dir
```
You will get an example Cuckoo model in ```models/cuckoo-c4-example```; it might not perform well if you pre-train with too little data. You may adjust the hyperparameters inside ```nte_data_collection.py``` or modify the conversion for your own resources to enable better pre-training performance.
## 🐾 Citation
```
@article{DBLP:journals/corr/abs-2502-11275,
author = {Letian Peng and
Zilong Wang and
Feng Yao and
Jingbo Shang},
title = {Cuckoo: An {IE} Free Rider Hatched by Massive Nutrition in {LLM}'s Nest},
journal = {CoRR},
volume = {abs/2502.11275},
year = {2025},
url = {https://doi.org/10.48550/arXiv.2502.11275},
doi = {10.48550/arXiv.2502.11275},
eprinttype = {arXiv},
eprint = {2502.11275},
timestamp = {Mon, 17 Feb 2025 19:32:20 +0000},
biburl = {https://dblp.org/rec/journals/corr/abs-2502-11275.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
"QUESTION_ANSWERING"
] | [
"CRAFT"
] |
pavanmantha/distilroberta-pubmed-embeddings | pavanmantha | sentence-similarity | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:8000",
"loss:MultipleNegativesRankingLoss",
"dataset:pavanmantha/pumed-finetuning",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-distilroberta-v1",
"base_model:finetune:sentence-transformers/all-distilroberta-v1",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-02-22T05:12:15 | 2025-02-22T05:12:33 | 62 | 0 | ---
base_model: sentence-transformers/all-distilroberta-v1
datasets:
- pavanmantha/pumed-finetuning
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:8000
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Is acute doxorubicin cardiotoxicity associated with p53-induced
inhibition of the mammalian target of rapamycin pathway?
sentences:
- Tyrosinase, the rate-limiting enzyme required for melanin production, has been
targeted to develop active brightening/lightening materials for skin products.
Unexpected depigmentation of the skin characterized with the diverse symptoms
was reported in some subjects who used a tyrosinase-competitive inhibiting quasi-drug,
rhododendrol. To investigate the mechanism underlying the depigmentation caused
by rhododendrol-containing cosmetics, this study was performed. The mechanism
above was examined using more than dozen of melanocytes derived from donors of
different ethnic backgrounds. The RNAi technology was utilized to confirm the
effect of tyrosinase to induce the cytotoxicity of rhododendrol and liquid chromatography-tandem
mass spectrometry was introduced to detect rhododendrol and its metabolites in
the presence of tyrosinase. Melanocyte damage was related to tyrosinase activity
at a certain threshold. Treatment with a tyrosinase-specific siRNA was shown to
dramatically rescue the rhododendrol-induced melanocyte impairment. Hydroxyl-rhododendrol
was detected only in melanocytes with higher tyrosinase activity. When an equivalent
amount of hydroxyl-rhododendrol was administered, cell viability was almost equally
suppressed even in melanocytes with lower tyrosinase activity.
- Doxorubicin is used to treat childhood and adult cancer. Doxorubicin treatment
is associated with both acute and chronic cardiotoxicity. The cardiotoxic effects
of doxorubicin are cumulative, which limits its chemotherapeutic dose. Free radical
generation and p53-dependent apoptosis are thought to contribute to doxorubicin-induced
cardiotoxicity. Adult transgenic (MHC-CB7) mice expressing cardiomyocyte-restricted
dominant-interfering p53 and their nontransgenic littermates were treated with
doxorubicin (20 mg/kg cumulative dose). Nontransgenic mice exhibited reduced left
ventricular systolic function (predoxorubicin fractional shortening [FS] 61+/-2%,
postdoxorubicin FS 45+/-2%, mean+/-SEM, P<0.008), reduced cardiac mass, and high
levels of cardiomyocyte apoptosis 7 days after the initiation of doxorubicin treatment.
In contrast, doxorubicin-treated MHC-CB7 mice exhibited normal left ventricular
systolic function (predoxorubicin FS 63+/-2%, postdoxorubicin FS 60+/-2%, P>0.008),
normal cardiac mass, and low levels of cardiomyocyte apoptosis. Western blot analyses
indicated that mTOR (mammalian target of rapamycin) signaling was inhibited in
doxorubicin-treated nontransgenic mice but not in doxorubicin-treated MHC-CB7
mice. Accordingly, transgenic mice with cardiomyocyte-restricted, constitutively
active mTOR expression (MHC-mTORca) were studied. Left ventricular systolic function
(predoxorubicin FS 64+/-2%, postdoxorubicin FS 60+/-3%, P>0.008) and cardiac mass
were normal in doxorubicin-treated MHC-mTORca mice, despite levels of cardiomyocyte
apoptosis similar to those seen in doxorubicin-treated nontransgenic mice.
- To examine the regulatory aspects of zinc-α2-glycoprotein (ZAG) association with
obesity-related insulin resistance. ZAG mRNA and protein were analyzed in subcutaneous
adipose tissue (AT) and circulation of lean, obese, prediabetic, and type 2 diabetic
men; both subcutaneous and visceral AT were explored in lean and extremely obese.
Clinical and ex vivo findings were corroborated by results of in vitro ZAG silencing
experiment. Subcutaneous AT ZAG was reduced in obesity, with a trend to further
decrease with prediabetes and type 2 diabetes. ZAG was 3.3-fold higher in subcutaneous
than in visceral AT of lean individuals. All differences were lost in extreme
obesity. Obesity-associated changes in AT were not paralleled by alterations of
circulating ZAG. Subcutaneous AT ZAG correlated with adiposity, adipocyte hypertrophy,
whole-body and AT insulin sensitivity, mitochondrial content, expression of GLUT4,
PGC1α, and adiponectin. Subcutaneous AT ZAG and adipocyte size were the only predictors
of insulin sensitivity, independent on age and BMI. Silencing ZAG resulted in
reduced adiponectin, IRS1, GLUT4, and PGC1α gene expression in primary human adipocytes.
- source_sentence: Is avoidance of polypharmacy and excessive blood pressure control
associated with improved renal function in older patients?
sentences:
- Elderly patients are particularly susceptible to polypharmacy. The present study
evaluated the renal effects of optimizing potentially nephrotoxic medications
in an older population. Retrospective study of patients' ≥ 60 years treated between
January of 2013 and February of 2015 in a Nephrology Clinic. The renal effect
of avoiding polypharmacy was studied. Sixty-one patients were studied. Median
age was 81 years (range 60-94). Twenty-five patients (41%) were male. NSAIDs alone
were stopped in seven patients (11.4%), a dose reduction in antihypertensives
was done in 11 patients (18%), one or more antihypertensives were discontinued
in 20 patients (32.7%) and discontinuation and dose reduction of multiple medications
was carried out in 23 patients (37.7%). The number of antihypertensives was reduced
from a median of 3 (range of 0-8) at baseline to a median of 2 (range 0-7), p
< 0.001 after intervention. After intervention, the glomerular filtration rate
(GFR) improved significantly, from a baseline of 32 ± 15.5 cc/min/1.73 m(2) to
39.5 ± 17 cc/min/1.73 m(2) at t1 (p < 0.001) and 44.5 ± 18.7 cc/min/1.73 m(2)
at t2 (p < 0.001 vs. baseline). In a multivariate model, after adjusting for ACEIs/ARBs
discontinuation/dose reduction, NSAIDs use and change in DBP, an increase in SBP
at time 1 remained significantly associated with increments in GFR on follow-up
(estimate = 0.20, p = 0.01).
- Endothelial dysfunction and hypertension is more common in individuals with diabetes
than in the general population. This study was aimed to investigate the underlying
mechanisms responsible for endothelial dysfunction of type 1 diabetic rats fed
with high-salt diet. Type 1 diabetes (DM) was induced by intraperitoneal injection
of streptozotocin (70 mg·kg(-1)). Normal or diabetic rats were randomly fed high-salt
food (HS, 8% NaCl) or standard food (CON) for 6 weeks. Both HS (143±10 mmHg) and
DM+HS (169±11 mmHg) groups displayed significantly higher systolic blood pressure
than those in the CON group (112±12 mmHg, P<0.01). DM+HS rats exhibited more pronounced
impairment of vasorelaxation to acetylcholine and insulin compared with either
DM or HS. Akt/endothelial nitric oxide synthase (eNOS) phosphorylation levels
and nitric oxide (NO) concentration in DM+HS were significantly lower than in
DM. The levels of caveolin-1 (cav-1) in DM+HS were significantly higher than that
in DM and HS. Co-immunoprecipitation results showed increased interaction between
cav-1 and eNOS in the DM+HS group. In the presence of cav-1 small interfering
RNA (siRNA), eNOS phosphorylations in human umbilical vein endothelial cells (HUVEC)
were significantly increased compared with control siRNA. Cav-1 was slightly but
not significantly lower in HUVEC cultured with high glucose and high-salt buffer
solution and pretreated with wortmannin or l-nitro-arginine methyl ester.
- 'Several studies have revealed a correlation between sialosyl Tn antigen (STN)
and certain clinicopathologic features of various cancers, and that STN is an
independent prognostic factor. However, the clinical significance of the expression
of STN in gastric cancer has not been reported. Thus, the purpose of this study
was to evaluate immunohistochemically the clinical significance of expression
of STN in gastric cancer. The expression of STN in surgically resected specimens
of human gastric cancer was evaluated immunohistochemically using a monoclonal
antibody (TKH-2), in 60 patients whose serum STN levels were measured and in 54
patients with advanced cancer who had been followed for more than 5 years after
gastrectomy. The correlations between the level of STN expression and clinicopathologic
factors were analyzed. The staining intensity was graded as follows: (-), less
than 5% of the cancer cells expressed STN; (+), 5-50%; (++), more than 50%. Sialosyl
TN antigen staining was detected mainly on the cell membrane, in the cytoplasm,
and in the luminal contents, and 57.2% of the 60 specimens expressed STN, whereas
the corresponding value for positive serum levels was 15%. A higher percentage
of advanced tumors expressed STN than did the early cases, but the difference
was not statistically significant. All cases with strong staining, the (++) cases,
were advanced cases either with lymph node metastases or with cancer invading
in or beyond the muscle layer proper. The expression of STN appeared to be related
to the clinical stage, the extent of cancer invasion, and the presence of lymph
node metastases. Sialosyl TN antigen was detected in the serum in less than 6%
of the patients whose tumors were (-) or (+) for STN expression, and in 86.7%
of the patients whose tumors expressed high levels of STN (++). The estimated
5-year survival in advanced cases (Stage III) was significantly better in those
with negative STN expression than in those with positive STN expression (P < 0.01).'
- source_sentence: Does platelet attachment stimulate endothelial cell regeneration
after arterial injury?
sentences:
- The efficacy of lansoprazole (LPZ) at inhibiting gastric acid secretion is influenced
by cytochrome P450 2C19 (CYP2C19) polymorphism. The purpose of the present study
was to investigate whether CYP2C19 polymorphism had an influence on the remission
of erosive reflux esophagitis (RE) during maintenance therapy with LPZ. Eighty-two
Japanese patients with initial healing of erosive RE by 8 weeks of LPZ therapy
were enrolled. As maintenance therapy, the patients were treated with LPZ (15
mg/day) for 6 months. The CYP2C19 genotype, Helicobacter pylori infection status,
and serum pepsinogen (PG) I/II ratio were assessed before treatment. The patients
were investigated for relapse by endoscopy at 6 months or when symptoms recurred.
The proportion of patients in remission after 6 months was 61.5%, 78.0%, and 100%
among homozygous extensive metabolizers (homo-EM), heterozygous EM (hetero-EM),
and poor metabolizers (PM), respectively. The percentage of PM patients who remained
in remission was significantly higher than that of homo-EM or hetero-EM.
- Arterial injury is associated with endothelial disruption and attachment of platelets
to an exposed subintimal layer. A variety of factors released by platelets may
affect the ability of endothelial cells bordering an injury to regenerate. In
this study an organ culture model of arterial injury was used to investigate the
relationship between attachment of platelets to a superficial arterial injury
and endothelial regeneration. A defined superficial endothelial injury was made
in whole vessel wall explants of rabbit thoracic aorta. Injured explants were
treated with either fresh whole platelets, the supernatant of platelets aggregated
by collagen, or basic fibroblast growth factor. Four days after injury and treatment,
the average distance of endothelial regeneration was determined. A dramatic increase
in the rate of endothelial cell regeneration was observed when injured vessels
were exposed to fresh whole platelets (p = 0.003). This increase in regeneration
was comparable to that observed with fibroblast growth factor. No increase in
the regenerative rate was found after exposure of explants to the supernatant
of aggregated platelets (p = 0.69).
- To introduce an elastomeric continuous infusion pump for pain control after outpatient
orbital implant surgery. Retrospective, noncomparative consecutive case series
of all patients undergoing enucleation, evisceration, or secondary orbital implantation
using the On-Q pain system between August 2004 and January 2006. Postoperative
pain score, need for narcotics, and adverse events were recorded. The On-Q catheter
is inserted intraoperatively through the lateral lower eyelid into the muscle
cone under direct visualization, prior to the orbital implant placement. The On-Q
system continually infuses anesthesia (bupivacaine) to the retrobulbar site for
5 days. Among 20 patients, mean postoperative period pain score, with On-Q in
place, was 1.3 (scale of 0 to 10). Nine patients (45%) did not need any adjunctive
oral narcotics. Two patients experienced postoperative nausea. One catheter connector
leaked, thereby decreasing delivery of retrobulbar anesthetic resulting a pain
level of 6, the highest level in the study. There were no postoperative infections.
No systemic toxic effects from bupivacaine were observed clinically.
- source_sentence: Do mid-regional proadrenomedullin levels predict recurrence of
atrial fibrillation after catheter ablation?
sentences:
- We evaluated the prognostic value of mid-regional proadrenomedullin (MR-proADM)
in atrial fibrillation (AF) patients undergoing radiofrequency ablation. Plasma
concentrations of MR-proADM were measured at baseline and after 12months in 87
AF patients in whom radiofrequency ablation was performed. The association between
MR-proADM and AF recurrence was tested by univariable and multivariable Cox models.
In all 87 patients radiofrequency ablation was successfully performed. Of the
total population 54% had paroxysmal AF. The mean left ventricular ejection fraction
was 54% (minimum 25%). After 12months of follow-up, 71% of the patients were free
of AF recurrence. At baseline, mean MR-proADM in the total population was 0.72nmol/l±0.22.
Patients with AF recurrence had significantly higher baseline MR-proADM (0.89nmol/l±0.29)
as compared with patients without AF recurrence (0.65nmol/l±0.14; p<0.001). After
12months, mean MR-proADM plasma concentration remained higher in patients with
AF recurrence (0.81nmol/l±0.22 as compared with patients free of AF 0.54nmol/l±0.20;
p<0.001). Receiver operating characteristic (ROC) curve analysis for MR-proADM
yields a specificity of 98% and a sensitivity of 64% with an optimal cut-off value
of 0.82nmol/l to predict recurrence of AF after catheter ablation. In the logistic
regression analysis only MR-proADM remained independently predictive for AF recurrence.
- There has been growing interest in the role that implicit processing of drug cues
can play in motivating drug use behavior. However, the extent to which drug cue
processing biases relate to the processing biases exhibited to other types of
evocative stimuli is largely unknown. The goal of the present study was to determine
how the implicit cognitive processing of smoking cues relates to the processing
of affective cues using a novel paradigm. Smokers (n = 50) and nonsmokers (n =
38) completed a picture-viewing task, in which participants were presented with
a series of smoking, pleasant, unpleasant, and neutral images while engaging in
a distractor task designed to direct controlled resources away from conscious
processing of image content. Electroencephalogram recordings were obtained throughout
the task for extraction of event-related potentials (ERPs). Smokers exhibited
differential processing of smoking cues across 3 different ERP indices compared
with nonsmokers. Comparable effects were found for pleasant cues on 2 of these
indices. Late cognitive processing of smoking and pleasant cues was associated
with nicotine dependence and cigarette use.
- To evaluate the role of toll-like receptors (TLR) 2 and 4 in host responses to
Aspergillus fumigatus by use of cultured telomerase-immortalized human corneal
epithelial cells (HCECs). HCECs were stimulated with inactive antigens from A.
fumigatus. The expression of TLR2 and TLR4, phosphorylation of Ikappa B-alpha
(pIkappa B-alpha), and release of interleukin (IL)-1beta and IL-6 was measured
with and without inhibitors to TLR2 and TLR4. Exposure of HCECs to A. fumigatus
antigens resulted in up-regulation of TLR2 and TLR4, activation of pIkappa B,
and release of IL-1beta and IL-6 in HCECs, effects that could be inhibited by
treatment with TLR2 and TLR4 antibodies.
- source_sentence: Are peripheral blood lymphocytes from patients with rheumatoid
arthritis differentially sensitive to apoptosis induced by anti-tumour necrosis
factor-alpha therapy?
sentences:
- The aim of this study was to investigate the prognostic effect of serum free light
chain (sFLC) response after 2 cycles of first-line chemotherapy (CT) in multiple
myeloma (MM) patients. The data of 78 newly diagnosed MM patients who had sFLC
levels at diagnosis and after 2 cycles of first-line CT were included in the study.
The prognostic effect of sFLCs were evaluated with normalization of sFLC κ/λ ratio
after 2 cycles of CT and involved/uninvolved (i/u) sFLCs. At the end of follow-up
the probability of overall survival (OS) was 95.7% versus 68.5% in patients with
and without normalized sFLC κ/λ ratio, respectively (P = .072). The probability
of OS with i/u sFLC assessment was 97.4% versus 55.8% with regard to i/u sFLC
≤ 10 and > 10, respectively (P = .001). In univariate and multivariate analysis
including sFLC ratio, age, sex, and International Staging System, i/u sFLC ratio
> 10 after 2 cycles of CT was identified as an independent risk factor for OS
(P = .015; hazard ratio [HR], 13.2; 95% confidence interval [CI], 1.668-104.65
vs. P = .011; HR, 15.17; 95% CI, 1.85-123.89).
- This study examined links between DNA methylation and birth weight centile (BWC),
and explored the impact of genetic variation. Using HumanMethylation450 arrays,
we examined candidate gene-associated CpGs in cord blood from newborns with low
(<15th centile), medium (40-60th centile) and high (>85th centile) BWC (n = 12).
Candidates were examined in an investigation cohort (n = 110) using pyrosequencing
and genotyping for putative methylation-associated polymorphisms performed using
standard PCR. Array analysis identified 314 candidate genes associated with BWC
extremes, four of which showed ≥ 4 BWC-linked CpGs. Of these, PM20D1 and MI886
suggested genetically determined methylation levels. However, methylation at three
CpGs in FGFR2 remained significantly associated with high BWC (p = 0.004-0.027).
- The efficacy of anti-tumour necrosis factor-alpha (TNF-alpha) therapies in rheumatoid
arthritis (RA) has been mainly attributed to TNF-alpha neutralisation. Other mechanism
as immune cell apoptosis, which is impaired in RA, may also be induced by anti-TNF-alpha
therapies. The aim of our study was to investigate whether TNF-alpha inhibitors
could induce apoptosis in vitro of the peripheral blood lymphocytes of RA patients.
Peripheral blood mononuclear cells (PBMC) isolated from 24 patients with RA and
18 healthy donors were incubated with anti-TNF-alpha agents, infliximab or etanercept,
in comparison with no agent and including an isotypic control, for 48 hours. Apoptosis
was detected and quantified by annexin V labelling of phosphatidylserine externalization
using cytofluorometric analysis and compared with PBMC production TNF-alpha in
vitro. In healthy donors, induced apoptosis was observed in 0.3% to 3.8% of lymphocytes
with both therapies. In RA patients the treatment induced lymphocyte apoptosis
in 17 of 24 patients with a percentage of annexin V-positive lymphocytes ranging
from 0.1% to 25%. Among these 17 RA patients, a significant in vitro lymphocyte
apoptosis (> 4%) was observed in 11 patients (46%) compared with healthy donors
(p < 0.01). The variability of the response to anti-TNF-alpha within the RA population
was not dependent on TNF-alpha synthesis or disease activity.
model-index:
- name: SentenceTransformer based on sentence-transformers/all-distilroberta-v1
results:
- task:
type: triplet
name: Triplet
dataset:
name: ai pubmed validation
type: ai-pubmed-validation
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
---
# SentenceTransformer based on sentence-transformers/all-distilroberta-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) on the [pumed-finetuning](https://huggingface.co/datasets/pavanmantha/pumed-finetuning) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) <!-- at revision 8d88b92a34345fd6a139aa47768c9881720006ce -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [pumed-finetuning](https://huggingface.co/datasets/pavanmantha/pumed-finetuning)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pavanmantha/distilroberta-pubmed-embeddings")
# Run inference
sentences = [
'Are peripheral blood lymphocytes from patients with rheumatoid arthritis differentially sensitive to apoptosis induced by anti-tumour necrosis factor-alpha therapy?',
'The efficacy of anti-tumour necrosis factor-alpha (TNF-alpha) therapies in rheumatoid arthritis (RA) has been mainly attributed to TNF-alpha neutralisation. Other mechanism as immune cell apoptosis, which is impaired in RA, may also be induced by anti-TNF-alpha therapies. The aim of our study was to investigate whether TNF-alpha inhibitors could induce apoptosis in vitro of the peripheral blood lymphocytes of RA patients. Peripheral blood mononuclear cells (PBMC) isolated from 24 patients with RA and 18 healthy donors were incubated with anti-TNF-alpha agents, infliximab or etanercept, in comparison with no agent and including an isotypic control, for 48 hours. Apoptosis was detected and quantified by annexin V labelling of phosphatidylserine externalization using cytofluorometric analysis and compared with PBMC production TNF-alpha in vitro. In healthy donors, induced apoptosis was observed in 0.3% to 3.8% of lymphocytes with both therapies. In RA patients the treatment induced lymphocyte apoptosis in 17 of 24 patients with a percentage of annexin V-positive lymphocytes ranging from 0.1% to 25%. Among these 17 RA patients, a significant in vitro lymphocyte apoptosis (> 4%) was observed in 11 patients (46%) compared with healthy donors (p < 0.01). The variability of the response to anti-TNF-alpha within the RA population was not dependent on TNF-alpha synthesis or disease activity.',
'This study examined links between DNA methylation and birth weight centile (BWC), and explored the impact of genetic variation. Using HumanMethylation450 arrays, we examined candidate gene-associated CpGs in cord blood from newborns with low (<15th centile), medium (40-60th centile) and high (>85th centile) BWC (n = 12). Candidates were examined in an investigation cohort (n = 110) using pyrosequencing and genotyping for putative methylation-associated polymorphisms performed using standard PCR. Array analysis identified 314 candidate genes associated with BWC extremes, four of which showed ≥ 4 BWC-linked CpGs. Of these, PM20D1 and MI886 suggested genetically determined methylation levels. However, methylation at three CpGs in FGFR2 remained significantly associated with high BWC (p = 0.004-0.027).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `ai-pubmed-validation`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
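For reference, the accuracy above can be recomputed with the same evaluator. The sketch below is illustrative only: the evaluation split name (`test`) is an assumption, while the column names follow the dataset description in the Training Details section.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

# Illustrative sketch; the "test" split name is an assumption.
model = SentenceTransformer("pavanmantha/distilroberta-pubmed-embeddings")
eval_dataset = load_dataset("pavanmantha/pumed-finetuning", split="test")

evaluator = TripletEvaluator(
    anchors=eval_dataset["instruction"],    # the question
    positives=eval_dataset["context"],      # the matching abstract
    negatives=eval_dataset["context_neg"],  # an unrelated abstract
    name="ai-pubmed-validation",
)
results = evaluator(model)
print(results)  # e.g. {'ai-pubmed-validation_cosine_accuracy': 1.0}
```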
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### pumed-finetuning
* Dataset: [pumed-finetuning](https://huggingface.co/datasets/pavanmantha/pumed-finetuning) at [1ba143a](https://huggingface.co/datasets/pavanmantha/pumed-finetuning/tree/1ba143a9087c7004813ce74a7f356cac4619a7a8)
* Size: 8,000 training samples
* Columns: <code>instruction</code>, <code>context</code>, and <code>context_neg</code>
* Approximate statistics based on the first 1000 samples:
| | instruction | context | context_neg |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 26.36 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 320.03 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 82 tokens</li><li>mean: 321.22 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| instruction | context | context_neg |
|:------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Do competency assessment of primary care physicians as part of a peer review program?</code> | <code>To design and test a program that assesses clinical competence as a second stage in a peer review process and to determine the program's reliability. A three-cohort study of Ontario primary care physicians. Reference physicians (n = 26) randomly drawn from the Hamilton, Ontario, area; volunteer, self-referred physicians (n = 20); and physicians referred by the licensing body (n = 37) as a result of a disciplinary hearing or peer review. Standardized patients, structured oral examinations, chart-stimulated recall, objective structured clinical examination, and multiple-choice examination. Test reliability was high, ranging from 0.73 to 0.91, and all tests discriminated among subgroups. Demographic variables relating to the final category were age, Canadian or foreign graduates, and whether or not participants were certified in family medicine.</code> | <code>Static stretch is frequently observed in the lung. Both static stretch and cyclic stretch can induce cell death and Na(+)/K(+)-ATPase trafficking, but stretch-induced alveolar epithelial cell (AEC) functions are much less responsive to static than to cyclic stretch. AEC remodeling under static stretch may be partly explained. The aim of this study was to explore the AEC remodeling and functional changes under static stretch conditions. We used A549 cells as a model of AEC type II cells. We assessed F-actin content and cell viability by fluorescence staining at various static-stretch magnitudes and time points. Specifically, we used scanning electron microscopy to explore the possible biological mechanisms used by A549 cells to 'escape' static-stretch-induced injury. Finally, we measured choline cytidylyltransferase-alpha (CCT alpha) mRNA and protein by real-time PCR and Western blot to evaluate cellular secretory function. The results showed that the magnitude of static stretch was the...</code> |
| <code>Is age an important determinant of the growth hormone response to sprint exercise in non-obese young men?</code> | <code>The factors that regulate the growth hormone (GH) response to physiological stimuli, such as exercise, are not fully understood. The aim of the present study is to determine whether age, body composition, measures of sprint performance or the metabolic response to a sprint are predictors of the GH response to sprint exercise in non-obese young men. Twenty-seven healthy, non-obese males aged 18-32 years performed an all-out 30-second sprint on a cycle ergometer. Univariate linear regression analysis was employed to evaluate age-, BMI-, performance- and metabolic-dependent changes from pre-exercise to peak GH and integrated GH for 60 min after the sprint. GH was elevated following the sprint (change in GH: 17.0 +/- 14.2 microg l(-1); integrated GH: 662 +/- 582 min microg l(-1)). Performance characteristics, the metabolic response to exercise and BMI were not significant predictors of the GH response to exercise. However, age emerged as a significant predictor of both integrated GH (beta ...</code> | <code>We have previously reported the crucial roles of oncogenic Kirsten rat sarcoma viral oncogene homolog (KRAS) in inhibiting apoptosis and disrupting cell polarity via the regulation of phosphodiesterase 4 (PDE4) expression in human colorectal cancer HCT116 cells in three-dimensional cultures (3DC). Herein we evaluated the effects of resveratrol, a PDE4 inhibitor, on the luminal cavity formation and the induction of apoptosis in HCT116 cells. Apoptosis was detected by immunofluorescence using confocal laser scanning microscopy with an antibody against cleaved caspase-3 in HCT116 cells treated with or without resveratrol in a two-dimensional culture (2DC) or 3DC. Resveratrol did not induce apoptosis of HCT116 cells in 2DC, whereas the number of apoptotic HCT116 cells increased after resveratrol treatment in 3DC, leading to formation of a luminal cavity.</code> |
| <code>Is terlipressin more effective in decreasing variceal pressure than portal pressure in cirrhotic patients?</code> | <code>Terlipressin decreases portal pressure. However, its effects on variceal pressure have been poorly investigated. This study investigated the variceal, splanchnic and systemic hemodynamic effects of terlipressin. Twenty cirrhotic patients with esophageal varices grade II-III, and portal pressure > or =12 mmHg were studied. Hepatic venous pressure gradient, variceal pressure and systemic hemodynamic parameters were obtained. After baseline measurements, in a double-blind administration, 14 patients received a 2mg/iv injection of terlipressin and six patients received placebo. The same measurements were repeated 60 min later. No demographic or biochemical differences were observed in basal condition between groups. Terlipressin produced significant decreases in intravariceal pressure from 20.9+4.9 to 16.3+/-4.7 mmHg (p<0.01, -21+/- 16%), variceal pressure gradient from 18.9+/-4.8 to 13.5+/-6.0 mmHg (p<0.01, -28+/-27%), estimated variceal wall tension from 78+/-29 to 59+/-31 mmHg x mm (p<0...</code> | <code>Based on the theories of brain reserve and cognitive reserve, we investigated whether larger maximal lifetime brain growth (MLBG) and/or greater lifetime intellectual enrichment protect against cognitive decline over time. Forty patients with multiple sclerosis (MS) underwent baseline and 4.5-year follow-up evaluations of cognitive efficiency (Symbol Digit Modalities Test, Paced Auditory Serial Addition Task) and memory (Selective Reminding Test, Spatial Recall Test). Baseline and follow-up MRIs quantified disease progression: percentage brain volume change (cerebral atrophy), percentage change in T2 lesion volume. MLBG (brain reserve) was estimated with intracranial volume; intellectual enrichment (cognitive reserve) was estimated with vocabulary. We performed repeated-measures analyses of covariance to investigate whether larger MLBG and/or greater intellectual enrichment moderate/attenuate cognitive decline over time, controlling for disease progression. Patients with MS declined in...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
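
As a rough sketch, the loss configuration above corresponds to the following setup; the base model is the one named in this card, and with (anchor, positive, negative) rows the `context_neg` column acts as an explicit hard negative on top of the in-batch negatives.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Sketch of the training loss with the parameters listed above.
model = SentenceTransformer("sentence-transformers/all-distilroberta-v1")
train_loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```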
### Evaluation Dataset
#### pumed-finetuning
* Dataset: [pumed-finetuning](https://huggingface.co/datasets/pavanmantha/pumed-finetuning) at [1ba143a](https://huggingface.co/datasets/pavanmantha/pumed-finetuning/tree/1ba143a9087c7004813ce74a7f356cac4619a7a8)
* Size: 1,000 evaluation samples
* Columns: <code>instruction</code>, <code>context</code>, and <code>context_neg</code>
* Approximate statistics based on the first 1000 samples:
| | instruction | context | context_neg |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 26.08 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 311.42 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 316.2 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| instruction | context | context_neg |
|:----------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Are pre-transplant impedance measures of reflux associated with early allograft injury after lung transplantation?</code> | <code>Acid reflux has been associated with poorer outcomes after lung transplantation. Standard pre-transplant reflux assessment has not been universally adopted. Non-acid reflux may also induce a pulmonary inflammatory cascade, leading to acute and chronic rejection. Esophageal multichannel intraluminal impedance and pH testing (MII-pH) may be valuable in standard pre-transplant evaluation. We assessed the association between pre-transplant MII-pH measures and early allograft injury in lung transplant patients. This was a retrospective cohort study of lung transplant recipients who underwent pre-transplant MII-pH at a tertiary center from 2007 to 2012. Results from pre-transplant MII-pH, cardiopulmonary function testing, and results of biopsy specimen analysis of the transplanted lung were recorded. Time-to-event analyses were performed using Cox proportional hazards and Kaplan-Maier methods to assess the associations between MII-pH measures and development of acute rejection or lymphocytic...</code> | <code>The yeast cell cycle is largely controlled by the cyclin-dependent kinase (CDK) Cdc28. Recent evidence suggests that both CDK complex stability as well as function during mitosis is determined by precise regulation of Swe1, a CDK inhibitory kinase and cyclin binding partner. A model of mitotic progression has been provided by study of filamentous yeast. When facing nutrient-limited conditions, Ras2-mediated PKA and MAPK signaling cascades induce a switch from round to filamentous morphology resulting in delayed mitotic progression. To delineate how the dimorphic switch contributes to cell cycle regulation, temperature sensitive cdc28 mutants exhibiting constitutive filamentation were subjected to epistasis analyses with RAS2 signaling effectors. It was found that Swe1-mediated inhibitory tyrosine phosphorylation of Cdc28 during filamentous growth is in part mediated by Ras2 activation of PKA, but not Kss1-MAPK, signaling. This pathway is further influenced by Cks1, a conserved CDK-bind...</code> |
| <code>Is predictive accuracy of the TRISS survival statistic improved by a modification that includes admission pH?</code> | <code>To determine if pH measured at the time of hospital admission and corrected for PCO2 was an independent predictor of trauma survival. Phase 1 was a retrospective case-control analysis of 1708 patients, followed by multivariate multiple logistic regression analysis of a subset of 919 patients for whom the Revised Trauma Score (RTS), Injury Severity Score (ISS), and pH were available. Phase 2 was a prospective comparison of a mathematical model of survival derived in phase 1 (pH-TRISS) with the TRISS method in 508 of 1325 subsequently admitted trauma patients. Urban level 1 trauma center. All patients admitted with blunt or penetrating trauma during the study period. Survival vs mortality. In phase 1, factors significantly associated with mortality by t test and chi 2 analysis included the RTS, ISS< Glasgow Coma Scale, corrected pH (CpH), and sum of the head, chest, and abdominal components of the Abbreviated Injury Scale-85 (AIS85) (HCAISS) (for all, P < .0001). The TRISS statistic was ...</code> | <code>Ovarian cancer is the most lethal gynecologic malignancy, and there is an unmet clinical need to develop new therapies. Although showing promising anticancer activity, Niclosamide may not be used as a monotherapy. We seek to investigate whether inhibiting IGF signaling potentiates Niclosamide's anticancer efficacy in human ovarian cancer cells. Cell proliferation and migration are assessed. Cell cycle progression and apoptosis are analyzed by flow cytometry. Inhibition of IGF signaling is accomplished by adenovirus-mediated expression of siRNAs targeting IGF-1R. Cancer-associated pathways are assessed using pathway-specific reporters. Subcutaneous xenograft model is used to determine anticancer activity. We find that Niclosamide is highly effective on inhibiting cell proliferation, cell migration, and cell cycle progression, and inducing apoptosis in human ovarian cancer cells, possibly by targeting multiple signaling pathways involved in ELK1/SRF, AP-1, MYC/MAX and NFkB. Silencing IGF...</code> |
| <code>Does exposure to intermittent nociceptive stimulation under pentobarbital anesthesia disrupt spinal cord function in rats?</code> | <code>Spinal cord plasticity can be assessed in spinal rats using an instrumental learning paradigm in which subjects learn an instrumental response, hindlimb flexion, to minimize shock exposure. Prior exposure to uncontrollable intermittent stimulation blocks learning in spinal rats but has no effect if given before spinal transection, suggesting that supraspinal systems modulate nociceptive input to the spinal cord, rendering it less susceptible to the detrimental consequences of uncontrollable stimulation. The present study examines whether disrupting brain function with pentobarbital blocks descending inhibitory systems that normally modulate nociceptive input, making the spinal cord more sensitive to the adverse effect of uncontrollable intermittent stimulation. Male Sprague-Dawley rats received uncontrollable intermittent stimulation during pentobarbital anesthesia after (experiment 1) or before (experiment 2) spinal cord transection. They were then tested for instrumental learning at ...</code> | <code>Increased serum hepcidin has been reported in patients receiving chronic hemodialysis, and hypothesized to contribute to the alterations of iron metabolism of end-stage renal disease. However, no quantitative assessment is available to date; the clinical determinants are still under definition; and the role of genetic factors, namely HFE mutations, has not yet been evaluated. The aim of this study was to quantitatively assess serum hepcidin-25 in hemodialysis patients versus controls, and analyze the relationship between hepcidin, iron indices, HFE genotype, and erythropoietic parameters. Sixty-five hemodialysis patients and 57 healthy controls were considered. Hepcidin-25 was evaluated by surface-enhanced laser desorption/ionization time-of-flight mass spectrometry, HFE genotype by restriction analysis. Serum hepcidin-25 was higher in hemodialysis patients compared with controls. In patients, hepcidin-25 correlated positively with ferritin and C reactive protein, and negatively with s...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
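
Putting the hyperparameters above together, a minimal end-to-end training sketch could look as follows; the `output_dir` and the split names are illustrative assumptions, not values taken from this card.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Illustrative sketch; output_dir and the split names are assumptions.
model = SentenceTransformer("sentence-transformers/all-distilroberta-v1")
dataset = load_dataset("pavanmantha/pumed-finetuning")

args = SentenceTransformerTrainingArguments(
    output_dir="distilroberta-pubmed-embeddings",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```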
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | ai-pubmed-validation_cosine_accuracy |
|:-----:|:----:|:-------------:|:---------------:|:------------------------------------:|
| -1 | -1 | - | - | 1.0 |
| 0.8 | 100 | 0.0152 | 0.0085 | 1.0 |
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.4.1+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"PCR"
] |
medspaner/roberta-es-clinical-trials-umls-7sgs-ner | medspaner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-08T09:43:07 | 2024-10-01T06:43:16 | 61 | 1 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: "Criterios de inclusión: 18 a 65 años; necrosis avascular de cadera; sintomática\
\ de menos de 6 meses; capaz de otorgar consentimiento informado.\n Criterios\
\ de exclusión: embarazo, lactancia, mujer fértil sin métodos anticonceptivos\
\ adecuados; tratamiento activo con bifosfonatos; infección por VIH, hepatitis\
\ B o hepatitis C; historia de neoplasia en cualquier organo."
- text: 'Recuperación de daño hepático relacionado con nutrición parenteral con ácidos
omega-3 en adultos críticos: ensayo clínico aleatorizado.'
- text: 'Título público: Análisis del dolor tras inyección intramuscular de penicilina
con agujas de mayor calibre y anestésico local, frente a aguja tradicional sin
anestésico en pacientes con sífilis'
model-index:
- name: roberta-es-clinical-trials-umls-7sgs-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-es-clinical-trials-umls-7sgs-ner
This medical named entity recognition model detects 7 types of semantic groups from the [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) ([Bodenreider 2004](https://academic.oup.com/nar/article/32/suppl_1/D267/2505235)):
- ANAT: body parts and anatomy (e.g. *garganta*, 'throat')
- CHEM: chemical entities and pharmacological substances (e.g. *aspirina*,'aspirin')
- DEVI: medical devices (e.g. *catéter*, 'catheter')
- DISO: pathologic conditions (e.g. *dolor*, 'pain')
- LIVB: living beings (e.g. *paciente*, 'patient')
- PHYS: physiological processes (e.g. *respiración*, 'breathing')
- PROC: diagnostic and therapeutic procedures, laboratory analyses and medical research activities (e.g. *cirugía*, 'surgery')
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.878 (±0.003)
- Recall: 0.894 (±0.003)
- F1: 0.886 (±0.002)
- Accuracy: 0.961 (±0.001)
## Model description
This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/).
It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials.
The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
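A minimal usage sketch with the standard `transformers` token-classification pipeline is shown below; the example sentence is taken from the widget examples above, and the aggregation strategy is an assumption.

```python
from transformers import pipeline

# Minimal sketch: group sub-word predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="medspaner/roberta-es-clinical-trials-umls-7sgs-ner",
    aggregation_strategy="simple",
)

text = (
    "Recuperación de daño hepático relacionado con nutrición parenteral "
    "con ácidos omega-3 en adultos críticos: ensayo clínico aleatorizado."
)
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```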
If you use this model, please cite as follows:
```
@article{campillosetal2024,
title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
year={2024},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a generalist purpose and may have biases and/or other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1,200 texts about clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17 on average (±2.83); training was stopped early if there was no improvement for 5 consecutive epochs (early stopping patience: 5)
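As a rough sketch, the configuration above maps onto standard `transformers` training arguments along these lines; the `output_dir`, the epoch-level evaluation/save strategies, and the epoch cap are illustrative assumptions.

```python
from transformers import TrainingArguments, EarlyStoppingCallback

# Illustrative sketch of the fine-tuning configuration; output_dir, the
# epoch-level strategies, and the epoch cap are assumptions, not card values.
training_args = TrainingArguments(
    output_dir="roberta-es-clinical-trials-umls-7sgs-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    lr_scheduler_type="linear",
    num_train_epochs=30,          # upper bound; early stopping decides the actual length
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)
early_stopping = EarlyStoppingCallback(early_stopping_patience=5)
```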
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.878 (±0.003) | 0.894 (±0.003) | 0.886 (±0.002) | 0.961 (±0.001) |
**Results per class (test set; average and standard deviation of 5 rounds with different seeds)**
| Class | Precision | Recall | F1 | Support |
|:----------:|:--------------:|:--------------:|:--------------:|:---------:|
| ANAT | 0.728 (±0.030) | 0.686 (±0.030) | 0.706 (±0.025) | 308 |
| CHEM | 0.917 (±0.005) | 0.923 (±0.008) | 0.920 (±0.005) | 2932 |
| DEVI | 0.645 (±0.018) | 0.791 (±0.047) | 0.711 (±0.027) | 134 |
| DISO | 0.890 (±0.008) | 0.903 (±0.003) | 0.896 (±0.003) | 3065 |
| LIVB | 0.949 (±0.004) | 0.959 (±0.006) | 0.954 (±0.003) | 1685 |
| PHYS | 0.766 (±0.021) | 0.765 (±0.012) | 0.765 (±0.008) | 308 |
| PROC | 0.842 (±0.002) | 0.871 (±0.004) | 0.856 (±0.001) | 4154 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] |
QuantFactory/Llama3-OpenBioLLM-8B-GGUF | QuantFactory | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | 2024-09-22T14:40:36 | 2024-09-22T15:48:45 | 59 | 1 | ---
base_model: meta-llama/Meta-Llama-3-8B
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-8B
results: []
---
[](https://hf.co/QuantFactory)
# QuantFactory/Llama3-OpenBioLLM-8B-GGUF
This is quantized version of [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) created using llama.cpp
# Original Model Card
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) base model. It incorporates a DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases; consider greedy decoding (temperature = 0, i.e. `do_sample=False`) to make this less likely.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # "auto" is only valid for device_map, not for the pipeline's `device` argument
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, the runnable equivalent of "temperature = 0"; sampling with temperature=0.0 raises an error
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
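
As a sketch, the adapter settings above correspond to a `peft` configuration along these lines; the `task_type` is an assumption, not a value stated in this card.

```python
from peft import LoraConfig

# Sketch of the QLoRA adapter configuration listed above.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    task_type="CAUSAL_LM",  # assumption: causal language modeling
)
```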
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50% despite its significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. All results are reported in the zero-shot setting, except for Med-PaLM-1 and Med-PaLM-2, which only report 5-shot accuracy and are therefore compared using their 5-shot numbers.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.
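As a rough illustration of how this capability can be driven purely through prompting, the sketch below reuses the `pipeline` and `terminators` objects from the Transformers snippet above; the instruction wording and JSON schema are assumptions for demonstration, not an official extraction API.

```python
# Illustrative sketch: prompting the model for clinical entity recognition.
# Reuses `pipeline` and `terminators` from the usage snippet above; the
# instruction and output schema below are assumptions, not a fixed interface.
note = "Patient reports chest pain and dyspnea; started on aspirin 81 mg daily after a prior MI."

ner_messages = [
    {"role": "system", "content": "You are OpenBioLLM, a biomedical assistant. Extract clinical entities and return JSON with the keys: diseases, symptoms, medications, procedures."},
    {"role": "user", "content": f"Extract the clinical entities from this note:\n{note}"},
]

ner_prompt = pipeline.tokenizer.apply_chat_template(
    ner_messages, tokenize=False, add_generation_prompt=True
)
ner_output = pipeline(
    ner_prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # deterministic output is preferable for extraction tasks
)
print(ner_output[0]["generated_text"][len(ner_prompt):])
```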



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, helping to protect patient privacy and support compliance with data protection regulations such as HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as research tools to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the models as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
| [
"QUESTION_ANSWERING"
] | [
"MEDQA",
"PUBMEDQA"
] |
nnch/multilingual-e5-large-Q4_K_M-GGUF | nnch | feature-extraction | [
"sentence-transformers",
"gguf",
"xlm-roberta",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"feature-extraction",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"base_model:intfloat/multilingual-e5-large",
"base_model:quantized:intfloat/multilingual-e5-large",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-10-24T13:52:54 | 2024-10-24T14:02:48 | 59 | 1 | ---
base_model: intfloat/multilingual-e5-large
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: multilingual-e5-large
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.05970149253731
- type: ap
value: 43.486574390835635
- type: f1
value: 73.32700092140148
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.22055674518201
- type: ap
value: 81.55756710830498
- type: f1
value: 69.28271787752661
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 80.41979010494754
- type: ap
value: 29.34879922376344
- type: f1
value: 67.62475449011278
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.8372591006424
- type: ap
value: 26.557560591210738
- type: f1
value: 64.96619417368707
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.489875
- type: ap
value: 90.98758636917603
- type: f1
value: 93.48554819717332
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.564
- type: f1
value: 46.75122173518047
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.400000000000006
- type: f1
value: 44.17195682400632
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 43.068
- type: f1
value: 42.38155696855596
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.89
- type: f1
value: 40.84407321682663
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.120000000000005
- type: f1
value: 39.522976223819114
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.832
- type: f1
value: 38.0392533394713
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.725
- type: map_at_10
value: 46.055
- type: map_at_100
value: 46.900999999999996
- type: map_at_1000
value: 46.911
- type: map_at_3
value: 41.548
- type: map_at_5
value: 44.297
- type: mrr_at_1
value: 31.152
- type: mrr_at_10
value: 46.231
- type: mrr_at_100
value: 47.07
- type: mrr_at_1000
value: 47.08
- type: mrr_at_3
value: 41.738
- type: mrr_at_5
value: 44.468999999999994
- type: ndcg_at_1
value: 30.725
- type: ndcg_at_10
value: 54.379999999999995
- type: ndcg_at_100
value: 58.138
- type: ndcg_at_1000
value: 58.389
- type: ndcg_at_3
value: 45.156
- type: ndcg_at_5
value: 50.123
- type: precision_at_1
value: 30.725
- type: precision_at_10
value: 8.087
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.54
- type: precision_at_5
value: 13.542000000000002
- type: recall_at_1
value: 30.725
- type: recall_at_10
value: 80.868
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 55.619
- type: recall_at_5
value: 67.71000000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.30960650674069
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.427074197498996
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.28270056031872
- type: mrr
value: 74.38332673789738
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.05942144105269
- type: cos_sim_spearman
value: 82.51212105850809
- type: euclidean_pearson
value: 81.95639829909122
- type: euclidean_spearman
value: 82.3717564144213
- type: manhattan_pearson
value: 81.79273425468256
- type: manhattan_spearman
value: 82.20066817871039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.46764091858039
- type: f1
value: 99.37717466945023
- type: precision
value: 99.33194154488518
- type: recall
value: 99.46764091858039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.29407880255337
- type: f1
value: 98.11248073959938
- type: precision
value: 98.02443319392472
- type: recall
value: 98.29407880255337
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.79009352268791
- type: f1
value: 97.5176076665512
- type: precision
value: 97.38136473848286
- type: recall
value: 97.79009352268791
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26276987888363
- type: f1
value: 99.20133403545726
- type: precision
value: 99.17500438827453
- type: recall
value: 99.26276987888363
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.72727272727273
- type: f1
value: 84.67672206031433
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.34220182511161
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 33.4987096128766
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.558249999999997
- type: map_at_10
value: 34.44425000000001
- type: map_at_100
value: 35.59833333333333
- type: map_at_1000
value: 35.706916666666665
- type: map_at_3
value: 31.691749999999995
- type: map_at_5
value: 33.252916666666664
- type: mrr_at_1
value: 30.252666666666666
- type: mrr_at_10
value: 38.60675
- type: mrr_at_100
value: 39.42666666666666
- type: mrr_at_1000
value: 39.48408333333334
- type: mrr_at_3
value: 36.17441666666665
- type: mrr_at_5
value: 37.56275
- type: ndcg_at_1
value: 30.252666666666666
- type: ndcg_at_10
value: 39.683
- type: ndcg_at_100
value: 44.68541666666667
- type: ndcg_at_1000
value: 46.94316666666668
- type: ndcg_at_3
value: 34.961749999999995
- type: ndcg_at_5
value: 37.215666666666664
- type: precision_at_1
value: 30.252666666666666
- type: precision_at_10
value: 6.904166666666667
- type: precision_at_100
value: 1.0989999999999995
- type: precision_at_1000
value: 0.14733333333333334
- type: precision_at_3
value: 16.037666666666667
- type: precision_at_5
value: 11.413583333333333
- type: recall_at_1
value: 25.558249999999997
- type: recall_at_10
value: 51.13341666666666
- type: recall_at_100
value: 73.08366666666667
- type: recall_at_1000
value: 88.79483333333334
- type: recall_at_3
value: 37.989083333333326
- type: recall_at_5
value: 43.787833333333325
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.338
- type: map_at_10
value: 18.360000000000003
- type: map_at_100
value: 19.942
- type: map_at_1000
value: 20.134
- type: map_at_3
value: 15.174000000000001
- type: map_at_5
value: 16.830000000000002
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 33.768
- type: mrr_at_100
value: 34.707
- type: mrr_at_1000
value: 34.766000000000005
- type: mrr_at_3
value: 30.977
- type: mrr_at_5
value: 32.528
- type: ndcg_at_1
value: 23.257
- type: ndcg_at_10
value: 25.733
- type: ndcg_at_100
value: 32.288
- type: ndcg_at_1000
value: 35.992000000000004
- type: ndcg_at_3
value: 20.866
- type: ndcg_at_5
value: 22.612
- type: precision_at_1
value: 23.257
- type: precision_at_10
value: 8.124
- type: precision_at_100
value: 1.518
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 15.679000000000002
- type: precision_at_5
value: 12.117
- type: recall_at_1
value: 10.338
- type: recall_at_10
value: 31.154
- type: recall_at_100
value: 54.161
- type: recall_at_1000
value: 75.21900000000001
- type: recall_at_3
value: 19.427
- type: recall_at_5
value: 24.214
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.498
- type: map_at_10
value: 19.103
- type: map_at_100
value: 27.375
- type: map_at_1000
value: 28.981
- type: map_at_3
value: 13.764999999999999
- type: map_at_5
value: 15.950000000000001
- type: mrr_at_1
value: 65.5
- type: mrr_at_10
value: 74.53800000000001
- type: mrr_at_100
value: 74.71799999999999
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.792
- type: mrr_at_5
value: 73.554
- type: ndcg_at_1
value: 53.37499999999999
- type: ndcg_at_10
value: 41.286
- type: ndcg_at_100
value: 45.972
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 46.172999999999995
- type: ndcg_at_5
value: 43.033
- type: precision_at_1
value: 65.5
- type: precision_at_10
value: 32.725
- type: precision_at_100
value: 10.683
- type: precision_at_1000
value: 1.978
- type: precision_at_3
value: 50
- type: precision_at_5
value: 41.349999999999994
- type: recall_at_1
value: 8.498
- type: recall_at_10
value: 25.070999999999998
- type: recall_at_100
value: 52.383
- type: recall_at_1000
value: 74.91499999999999
- type: recall_at_3
value: 15.207999999999998
- type: recall_at_5
value: 18.563
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.5
- type: f1
value: 41.93833713984145
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.914
- type: map_at_10
value: 78.10000000000001
- type: map_at_100
value: 78.333
- type: map_at_1000
value: 78.346
- type: map_at_3
value: 76.626
- type: map_at_5
value: 77.627
- type: mrr_at_1
value: 72.74199999999999
- type: mrr_at_10
value: 82.414
- type: mrr_at_100
value: 82.511
- type: mrr_at_1000
value: 82.513
- type: mrr_at_3
value: 81.231
- type: mrr_at_5
value: 82.065
- type: ndcg_at_1
value: 72.74199999999999
- type: ndcg_at_10
value: 82.806
- type: ndcg_at_100
value: 83.677
- type: ndcg_at_1000
value: 83.917
- type: ndcg_at_3
value: 80.305
- type: ndcg_at_5
value: 81.843
- type: precision_at_1
value: 72.74199999999999
- type: precision_at_10
value: 10.24
- type: precision_at_100
value: 1.089
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 31.268
- type: precision_at_5
value: 19.706000000000003
- type: recall_at_1
value: 67.914
- type: recall_at_10
value: 92.889
- type: recall_at_100
value: 96.42699999999999
- type: recall_at_1000
value: 97.92
- type: recall_at_3
value: 86.21
- type: recall_at_5
value: 90.036
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.166
- type: map_at_10
value: 35.57
- type: map_at_100
value: 37.405
- type: map_at_1000
value: 37.564
- type: map_at_3
value: 30.379
- type: map_at_5
value: 33.324
- type: mrr_at_1
value: 43.519000000000005
- type: mrr_at_10
value: 51.556000000000004
- type: mrr_at_100
value: 52.344
- type: mrr_at_1000
value: 52.373999999999995
- type: mrr_at_3
value: 48.868
- type: mrr_at_5
value: 50.319
- type: ndcg_at_1
value: 43.519000000000005
- type: ndcg_at_10
value: 43.803
- type: ndcg_at_100
value: 50.468999999999994
- type: ndcg_at_1000
value: 53.111
- type: ndcg_at_3
value: 38.893
- type: ndcg_at_5
value: 40.653
- type: precision_at_1
value: 43.519000000000005
- type: precision_at_10
value: 12.253
- type: precision_at_100
value: 1.931
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 25.617
- type: precision_at_5
value: 19.383
- type: recall_at_1
value: 22.166
- type: recall_at_10
value: 51.6
- type: recall_at_100
value: 76.574
- type: recall_at_1000
value: 92.192
- type: recall_at_3
value: 34.477999999999994
- type: recall_at_5
value: 41.835
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.041
- type: map_at_10
value: 62.961999999999996
- type: map_at_100
value: 63.79899999999999
- type: map_at_1000
value: 63.854
- type: map_at_3
value: 59.399
- type: map_at_5
value: 61.669
- type: mrr_at_1
value: 78.082
- type: mrr_at_10
value: 84.321
- type: mrr_at_100
value: 84.49600000000001
- type: mrr_at_1000
value: 84.502
- type: mrr_at_3
value: 83.421
- type: mrr_at_5
value: 83.977
- type: ndcg_at_1
value: 78.082
- type: ndcg_at_10
value: 71.229
- type: ndcg_at_100
value: 74.10900000000001
- type: ndcg_at_1000
value: 75.169
- type: ndcg_at_3
value: 66.28699999999999
- type: ndcg_at_5
value: 69.084
- type: precision_at_1
value: 78.082
- type: precision_at_10
value: 14.993
- type: precision_at_100
value: 1.7239999999999998
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 42.737
- type: precision_at_5
value: 27.843
- type: recall_at_1
value: 39.041
- type: recall_at_10
value: 74.96300000000001
- type: recall_at_100
value: 86.199
- type: recall_at_1000
value: 93.228
- type: recall_at_3
value: 64.105
- type: recall_at_5
value: 69.608
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.23160000000001
- type: ap
value: 85.5674856808308
- type: f1
value: 90.18033354786317
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.091
- type: map_at_10
value: 36.753
- type: map_at_100
value: 37.913000000000004
- type: map_at_1000
value: 37.958999999999996
- type: map_at_3
value: 32.818999999999996
- type: map_at_5
value: 35.171
- type: mrr_at_1
value: 24.742
- type: mrr_at_10
value: 37.285000000000004
- type: mrr_at_100
value: 38.391999999999996
- type: mrr_at_1000
value: 38.431
- type: mrr_at_3
value: 33.440999999999995
- type: mrr_at_5
value: 35.75
- type: ndcg_at_1
value: 24.742
- type: ndcg_at_10
value: 43.698
- type: ndcg_at_100
value: 49.145
- type: ndcg_at_1000
value: 50.23800000000001
- type: ndcg_at_3
value: 35.769
- type: ndcg_at_5
value: 39.961999999999996
- type: precision_at_1
value: 24.742
- type: precision_at_10
value: 6.7989999999999995
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.183
- type: recall_at_1
value: 24.091
- type: recall_at_10
value: 65.068
- type: recall_at_100
value: 89.899
- type: recall_at_1000
value: 98.16
- type: recall_at_3
value: 43.68
- type: recall_at_5
value: 53.754999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.66621067031465
- type: f1
value: 93.49622853272142
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.94702733164272
- type: f1
value: 91.17043441745282
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.20146764509674
- type: f1
value: 91.98359080555608
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.99780770435328
- type: f1
value: 89.19746342724068
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.78486912871998
- type: f1
value: 89.24578823628642
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.74502712477394
- type: f1
value: 89.00297573881542
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.9046967624259
- type: f1
value: 59.36787125785957
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.5280360664976
- type: f1
value: 57.17723440888718
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.44029352901934
- type: f1
value: 54.052855531072964
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 70.5606013153774
- type: f1
value: 52.62215934386531
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.11581211903908
- type: f1
value: 52.341291845645465
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.28933092224233
- type: f1
value: 57.07918745504911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.38063214525892
- type: f1
value: 59.46463723443009
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.06926698049766
- type: f1
value: 52.49084283283562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.74983187626093
- type: f1
value: 56.960640620165904
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.86550100874243
- type: f1
value: 62.47370548140688
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.971082716879636
- type: f1
value: 61.03812421957381
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.98318762609282
- type: f1
value: 51.51207916008392
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.45527908540686
- type: f1
value: 66.16631905400318
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.32750504371216
- type: f1
value: 66.16755288646591
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.09213180901143
- type: f1
value: 66.95654394661507
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.75588433086752
- type: f1
value: 71.79973779656923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.49428379287154
- type: f1
value: 68.37494379215734
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.90921318090115
- type: f1
value: 66.79517376481645
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.12104909213181
- type: f1
value: 67.29448842879584
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.34095494283793
- type: f1
value: 67.01134288992947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.61264290517822
- type: f1
value: 64.68730512660757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.79757901815738
- type: f1
value: 65.24938539425598
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.68728984532616
- type: f1
value: 67.0487169762553
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.07464694014795
- type: f1
value: 59.183532276789286
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.04707464694015
- type: f1
value: 67.66829629003848
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.42434431741762
- type: f1
value: 59.01617226544757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.53127101546738
- type: f1
value: 68.10033760906255
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.50504371217215
- type: f1
value: 69.74931103158923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.91190316072628
- type: f1
value: 54.05551136648796
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.78211163416275
- type: f1
value: 49.874888544058535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.017484868863484
- type: f1
value: 44.53364263352014
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.16207128446537
- type: f1
value: 59.01185692320829
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.42501681237391
- type: f1
value: 67.13169450166086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0780094149294
- type: f1
value: 64.41720167850707
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.57162071284466
- type: f1
value: 62.414138683804424
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.71149966375252
- type: f1
value: 58.594805125087234
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.03900470746471
- type: f1
value: 63.87937257883887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.8776059179556
- type: f1
value: 57.48587618059131
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87895090786819
- type: f1
value: 66.8141299430347
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.45057162071285
- type: f1
value: 67.46444039673516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.546738399462
- type: f1
value: 68.63640876702655
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.72965702757229
- type: f1
value: 68.54119560379115
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.35574983187625
- type: f1
value: 65.88844917691927
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.70477471418964
- type: f1
value: 69.19665697061978
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0880968392737
- type: f1
value: 64.76962317666086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.18493611297916
- type: f1
value: 62.49984559035371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.75857431069265
- type: f1
value: 69.20053687623418
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.500336247478145
- type: f1
value: 55.2972398687929
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.68997982515132
- type: f1
value: 59.36848202755348
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.01950235373235
- type: f1
value: 60.09351954625423
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.29186281102892
- type: f1
value: 67.57860496703447
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.77471418964357
- type: f1
value: 61.913983147713836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87222595830532
- type: f1
value: 66.03679033708141
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.04505716207127
- type: f1
value: 61.28569169817908
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.38466711499663
- type: f1
value: 67.20532357036844
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.12306657700067
- type: f1
value: 68.91251226588182
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.20040349697378
- type: f1
value: 66.02657347714175
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.73907195696032
- type: f1
value: 66.98484521791418
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.58843308675185
- type: f1
value: 58.95591723092005
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.22730329522528
- type: f1
value: 66.0894499712115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48285137861465
- type: f1
value: 65.21963176785157
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.74714189643578
- type: f1
value: 66.8212192745412
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.09213180901143
- type: f1
value: 56.70735546356339
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.05716207128448
- type: f1
value: 74.8413712365364
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.69737726967047
- type: f1
value: 74.7664341963
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.90383322125084
- type: f1
value: 73.59201554448323
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.51176866173503
- type: f1
value: 77.46104434577758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.31069266980496
- type: f1
value: 74.61048660675635
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.95225285810356
- type: f1
value: 72.33160006574627
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.12373907195696
- type: f1
value: 73.20921012557481
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.86684599865501
- type: f1
value: 73.82348774610831
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.40215198386012
- type: f1
value: 71.11945183971858
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.12844653665098
- type: f1
value: 71.34450495911766
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.52252858103566
- type: f1
value: 73.98878711342999
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.93611297915265
- type: f1
value: 63.723200467653385
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11903160726295
- type: f1
value: 73.82138439467096
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.15198386012105
- type: f1
value: 66.02172193802167
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.32414256893072
- type: f1
value: 74.30943421170574
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.46805648957633
- type: f1
value: 77.62808409298209
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.318762609280434
- type: f1
value: 62.094284066075076
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.34902488231338
- type: f1
value: 57.12893860987984
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.88433086751849
- type: f1
value: 48.2272350802058
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.4425016812374
- type: f1
value: 64.61463095996173
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.04707464694015
- type: f1
value: 75.05099199098998
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.50437121721586
- type: f1
value: 69.83397721096314
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.94283792871553
- type: f1
value: 68.8704663703913
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.79488903833222
- type: f1
value: 63.615424063345436
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.88231338264963
- type: f1
value: 68.57892302593237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.248150638870214
- type: f1
value: 61.06680605338809
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.84196368527236
- type: f1
value: 74.52566464968763
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.8285137861466
- type: f1
value: 74.8853197608802
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.13248150638869
- type: f1
value: 74.3982040999179
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.49024882313383
- type: f1
value: 73.82153848368573
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.72158708809684
- type: f1
value: 71.85049433180541
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.137861466039
- type: f1
value: 75.37628348188467
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.86953597848016
- type: f1
value: 71.87537624521661
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.27572293207801
- type: f1
value: 68.80017302344231
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.09952925353059
- type: f1
value: 76.07992707688408
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.140551445864155
- type: f1
value: 61.73855010331415
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.27774041694687
- type: f1
value: 64.83664868894539
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.69468728984533
- type: f1
value: 64.76239666920868
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.44653665097512
- type: f1
value: 73.14646052013873
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.71351714862139
- type: f1
value: 66.67212180163382
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.9946200403497
- type: f1
value: 73.87348793725525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.15400134498992
- type: f1
value: 67.09433241421094
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.11365164761264
- type: f1
value: 73.59502539433753
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.82582380632145
- type: f1
value: 76.89992945316313
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.81237390719569
- type: f1
value: 72.36499770986265
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.480506569594695
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.71252128004552
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.421396787056548
- type: mrr
value: 32.48155274872267
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.595
- type: map_at_10
value: 12.642000000000001
- type: map_at_100
value: 15.726
- type: map_at_1000
value: 17.061999999999998
- type: map_at_3
value: 9.125
- type: map_at_5
value: 10.866000000000001
- type: mrr_at_1
value: 43.344
- type: mrr_at_10
value: 52.227999999999994
- type: mrr_at_100
value: 52.898999999999994
- type: mrr_at_1000
value: 52.944
- type: mrr_at_3
value: 49.845
- type: mrr_at_5
value: 51.115
- type: ndcg_at_1
value: 41.949999999999996
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 30.869999999999997
- type: ndcg_at_1000
value: 39.487
- type: ndcg_at_3
value: 38.903999999999996
- type: ndcg_at_5
value: 37.236999999999995
- type: precision_at_1
value: 43.344
- type: precision_at_10
value: 25.480000000000004
- type: precision_at_100
value: 7.672
- type: precision_at_1000
value: 2.028
- type: precision_at_3
value: 36.636
- type: precision_at_5
value: 32.632
- type: recall_at_1
value: 5.595
- type: recall_at_10
value: 16.466
- type: recall_at_100
value: 31.226
- type: recall_at_1000
value: 62.778999999999996
- type: recall_at_3
value: 9.931
- type: recall_at_5
value: 12.884
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.414
- type: map_at_10
value: 56.754000000000005
- type: map_at_100
value: 57.457
- type: map_at_1000
value: 57.477999999999994
- type: map_at_3
value: 52.873999999999995
- type: map_at_5
value: 55.175
- type: mrr_at_1
value: 45.278
- type: mrr_at_10
value: 59.192
- type: mrr_at_100
value: 59.650000000000006
- type: mrr_at_1000
value: 59.665
- type: mrr_at_3
value: 56.141
- type: mrr_at_5
value: 57.998000000000005
- type: ndcg_at_1
value: 45.278
- type: ndcg_at_10
value: 64.056
- type: ndcg_at_100
value: 66.89
- type: ndcg_at_1000
value: 67.364
- type: ndcg_at_3
value: 56.97
- type: ndcg_at_5
value: 60.719
- type: precision_at_1
value: 45.278
- type: precision_at_10
value: 9.994
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.512
- type: precision_at_5
value: 17.509
- type: recall_at_1
value: 40.414
- type: recall_at_10
value: 83.596
- type: recall_at_100
value: 95.72
- type: recall_at_1000
value: 99.24
- type: recall_at_3
value: 65.472
- type: recall_at_5
value: 74.039
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.352
- type: map_at_10
value: 84.369
- type: map_at_100
value: 85.02499999999999
- type: map_at_1000
value: 85.04
- type: map_at_3
value: 81.42399999999999
- type: map_at_5
value: 83.279
- type: mrr_at_1
value: 81.05
- type: mrr_at_10
value: 87.401
- type: mrr_at_100
value: 87.504
- type: mrr_at_1000
value: 87.505
- type: mrr_at_3
value: 86.443
- type: mrr_at_5
value: 87.10799999999999
- type: ndcg_at_1
value: 81.04
- type: ndcg_at_10
value: 88.181
- type: ndcg_at_100
value: 89.411
- type: ndcg_at_1000
value: 89.507
- type: ndcg_at_3
value: 85.28099999999999
- type: ndcg_at_5
value: 86.888
- type: precision_at_1
value: 81.04
- type: precision_at_10
value: 13.406
- type: precision_at_100
value: 1.5350000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.31
- type: precision_at_5
value: 24.54
- type: recall_at_1
value: 70.352
- type: recall_at_10
value: 95.358
- type: recall_at_100
value: 99.541
- type: recall_at_1000
value: 99.984
- type: recall_at_3
value: 87.111
- type: recall_at_5
value: 91.643
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.54068723291946
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.216287629895994
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.023000000000001
- type: map_at_10
value: 10.071
- type: map_at_100
value: 11.892
- type: map_at_1000
value: 12.196
- type: map_at_3
value: 7.234
- type: map_at_5
value: 8.613999999999999
- type: mrr_at_1
value: 19.900000000000002
- type: mrr_at_10
value: 30.516
- type: mrr_at_100
value: 31.656000000000002
- type: mrr_at_1000
value: 31.723000000000003
- type: mrr_at_3
value: 27.400000000000002
- type: mrr_at_5
value: 29.270000000000003
- type: ndcg_at_1
value: 19.900000000000002
- type: ndcg_at_10
value: 17.474
- type: ndcg_at_100
value: 25.020999999999997
- type: ndcg_at_1000
value: 30.728
- type: ndcg_at_3
value: 16.588
- type: ndcg_at_5
value: 14.498
- type: precision_at_1
value: 19.900000000000002
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 2.011
- type: precision_at_1000
value: 0.33899999999999997
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 12.839999999999998
- type: recall_at_1
value: 4.023000000000001
- type: recall_at_10
value: 18.497
- type: recall_at_100
value: 40.8
- type: recall_at_1000
value: 68.812
- type: recall_at_3
value: 9.508
- type: recall_at_5
value: 12.983
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.967008785134
- type: cos_sim_spearman
value: 80.23142141101837
- type: euclidean_pearson
value: 81.20166064704539
- type: euclidean_spearman
value: 80.18961335654585
- type: manhattan_pearson
value: 81.13925443187625
- type: manhattan_spearman
value: 80.07948723044424
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.94262461316023
- type: cos_sim_spearman
value: 80.01596278563865
- type: euclidean_pearson
value: 83.80799622922581
- type: euclidean_spearman
value: 79.94984954947103
- type: manhattan_pearson
value: 83.68473841756281
- type: manhattan_spearman
value: 79.84990707951822
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 80.57346443146068
- type: cos_sim_spearman
value: 81.54689837570866
- type: euclidean_pearson
value: 81.10909881516007
- type: euclidean_spearman
value: 81.56746243261762
- type: manhattan_pearson
value: 80.87076036186582
- type: manhattan_spearman
value: 81.33074987964402
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.54733787179849
- type: cos_sim_spearman
value: 77.72202105610411
- type: euclidean_pearson
value: 78.9043595478849
- type: euclidean_spearman
value: 77.93422804309435
- type: manhattan_pearson
value: 78.58115121621368
- type: manhattan_spearman
value: 77.62508135122033
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.59880017237558
- type: cos_sim_spearman
value: 89.31088630824758
- type: euclidean_pearson
value: 88.47069261564656
- type: euclidean_spearman
value: 89.33581971465233
- type: manhattan_pearson
value: 88.40774264100956
- type: manhattan_spearman
value: 89.28657485627835
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.08055117917084
- type: cos_sim_spearman
value: 85.78491813080304
- type: euclidean_pearson
value: 84.99329155500392
- type: euclidean_spearman
value: 85.76728064677287
- type: manhattan_pearson
value: 84.87947428989587
- type: manhattan_spearman
value: 85.62429454917464
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.14190939287384
- type: cos_sim_spearman
value: 82.27331573306041
- type: euclidean_pearson
value: 81.891896953716
- type: euclidean_spearman
value: 82.37695542955998
- type: manhattan_pearson
value: 81.73123869460504
- type: manhattan_spearman
value: 82.19989168441421
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.84695301843362
- type: cos_sim_spearman
value: 77.87790986014461
- type: euclidean_pearson
value: 76.91981583106315
- type: euclidean_spearman
value: 77.88154772749589
- type: manhattan_pearson
value: 76.94953277451093
- type: manhattan_spearman
value: 77.80499230728604
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.44657840482016
- type: cos_sim_spearman
value: 75.05531095119674
- type: euclidean_pearson
value: 75.88161755829299
- type: euclidean_spearman
value: 74.73176238219332
- type: manhattan_pearson
value: 75.63984765635362
- type: manhattan_spearman
value: 74.86476440770737
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.64700140524133
- type: cos_sim_spearman
value: 86.16014210425672
- type: euclidean_pearson
value: 86.49086860843221
- type: euclidean_spearman
value: 86.09729326815614
- type: manhattan_pearson
value: 86.43406265125513
- type: manhattan_spearman
value: 86.17740150939994
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.91170098764921
- type: cos_sim_spearman
value: 88.12437004058931
- type: euclidean_pearson
value: 88.81828254494437
- type: euclidean_spearman
value: 88.14831794572122
- type: manhattan_pearson
value: 88.93442183448961
- type: manhattan_spearman
value: 88.15254630778304
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.91390577997292
- type: cos_sim_spearman
value: 71.22979457536074
- type: euclidean_pearson
value: 74.40314008106749
- type: euclidean_spearman
value: 72.54972136083246
- type: manhattan_pearson
value: 73.85687539530218
- type: manhattan_spearman
value: 72.09500771742637
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.9301067983089
- type: cos_sim_spearman
value: 80.74989828346473
- type: euclidean_pearson
value: 81.36781301814257
- type: euclidean_spearman
value: 80.9448819964426
- type: manhattan_pearson
value: 81.0351322685609
- type: manhattan_spearman
value: 80.70192121844177
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.13820465980005
- type: cos_sim_spearman
value: 86.73532498758757
- type: euclidean_pearson
value: 87.21329451846637
- type: euclidean_spearman
value: 86.57863198601002
- type: manhattan_pearson
value: 87.06973713818554
- type: manhattan_spearman
value: 86.47534918791499
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.48720108904415
- type: cos_sim_spearman
value: 85.62221757068387
- type: euclidean_pearson
value: 86.1010129512749
- type: euclidean_spearman
value: 85.86580966509942
- type: manhattan_pearson
value: 86.26800938808971
- type: manhattan_spearman
value: 85.88902721678429
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 83.98021347333516
- type: cos_sim_spearman
value: 84.53806553803501
- type: euclidean_pearson
value: 84.61483347248364
- type: euclidean_spearman
value: 85.14191408011702
- type: manhattan_pearson
value: 84.75297588825967
- type: manhattan_spearman
value: 85.33176753669242
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.51856644893233
- type: cos_sim_spearman
value: 85.27510748506413
- type: euclidean_pearson
value: 85.09886861540977
- type: euclidean_spearman
value: 85.62579245860887
- type: manhattan_pearson
value: 84.93017860464607
- type: manhattan_spearman
value: 85.5063988898453
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.581573200584195
- type: cos_sim_spearman
value: 63.05503590247928
- type: euclidean_pearson
value: 63.652564812602094
- type: euclidean_spearman
value: 62.64811520876156
- type: manhattan_pearson
value: 63.506842893061076
- type: manhattan_spearman
value: 62.51289573046917
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.2248801729127
- type: cos_sim_spearman
value: 56.5936604678561
- type: euclidean_pearson
value: 43.98149464089
- type: euclidean_spearman
value: 56.108561882423615
- type: manhattan_pearson
value: 43.86880305903564
- type: manhattan_spearman
value: 56.04671150510166
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.17564527009831
- type: cos_sim_spearman
value: 64.57978560979488
- type: euclidean_pearson
value: 58.8818330154583
- type: euclidean_spearman
value: 64.99214839071281
- type: manhattan_pearson
value: 58.72671436121381
- type: manhattan_spearman
value: 65.10713416616109
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 26.772131864023297
- type: cos_sim_spearman
value: 34.68200792408681
- type: euclidean_pearson
value: 16.68082419005441
- type: euclidean_spearman
value: 34.83099932652166
- type: manhattan_pearson
value: 16.52605949659529
- type: manhattan_spearman
value: 34.82075801399475
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.42415189043831
- type: cos_sim_spearman
value: 63.54594264576758
- type: euclidean_pearson
value: 57.36577498297745
- type: euclidean_spearman
value: 63.111466379158074
- type: manhattan_pearson
value: 57.584543715873885
- type: manhattan_spearman
value: 63.22361054139183
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.55216762405518
- type: cos_sim_spearman
value: 56.98670142896412
- type: euclidean_pearson
value: 50.15318757562699
- type: euclidean_spearman
value: 56.524941926541906
- type: manhattan_pearson
value: 49.955618528674904
- type: manhattan_spearman
value: 56.37102209240117
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 49.20540980338571
- type: cos_sim_spearman
value: 59.9009453504406
- type: euclidean_pearson
value: 49.557749853620535
- type: euclidean_spearman
value: 59.76631621172456
- type: manhattan_pearson
value: 49.62340591181147
- type: manhattan_spearman
value: 59.94224880322436
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 51.508169956576985
- type: cos_sim_spearman
value: 66.82461565306046
- type: euclidean_pearson
value: 56.2274426480083
- type: euclidean_spearman
value: 66.6775323848333
- type: manhattan_pearson
value: 55.98277796300661
- type: manhattan_spearman
value: 66.63669848497175
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.86478788045507
- type: cos_sim_spearman
value: 76.7946552053193
- type: euclidean_pearson
value: 75.01598530490269
- type: euclidean_spearman
value: 76.83618917858281
- type: manhattan_pearson
value: 74.68337628304332
- type: manhattan_spearman
value: 76.57480204017773
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.922619099401984
- type: cos_sim_spearman
value: 56.599362477240774
- type: euclidean_pearson
value: 56.68307052369783
- type: euclidean_spearman
value: 54.28760436777401
- type: manhattan_pearson
value: 56.67763566500681
- type: manhattan_spearman
value: 53.94619541711359
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.74357206710913
- type: cos_sim_spearman
value: 72.5208244925311
- type: euclidean_pearson
value: 67.49254562186032
- type: euclidean_spearman
value: 72.02469076238683
- type: manhattan_pearson
value: 67.45251772238085
- type: manhattan_spearman
value: 72.05538819984538
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.25734330033191
- type: cos_sim_spearman
value: 76.98349083946823
- type: euclidean_pearson
value: 73.71642838667736
- type: euclidean_spearman
value: 77.01715504651384
- type: manhattan_pearson
value: 73.61712711868105
- type: manhattan_spearman
value: 77.01392571153896
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.18215462781212
- type: cos_sim_spearman
value: 65.54373266117607
- type: euclidean_pearson
value: 64.54126095439005
- type: euclidean_spearman
value: 65.30410369102711
- type: manhattan_pearson
value: 63.50332221148234
- type: manhattan_spearman
value: 64.3455878104313
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30509221440029
- type: cos_sim_spearman
value: 65.99582704642478
- type: euclidean_pearson
value: 63.43818859884195
- type: euclidean_spearman
value: 66.83172582815764
- type: manhattan_pearson
value: 63.055779168508764
- type: manhattan_spearman
value: 65.49585020501449
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.587830825340404
- type: cos_sim_spearman
value: 68.93467614588089
- type: euclidean_pearson
value: 62.3073527367404
- type: euclidean_spearman
value: 69.69758171553175
- type: manhattan_pearson
value: 61.9074580815789
- type: manhattan_spearman
value: 69.57696375597865
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.143220125577066
- type: cos_sim_spearman
value: 67.78857859159226
- type: euclidean_pearson
value: 55.58225107923733
- type: euclidean_spearman
value: 67.80662907184563
- type: manhattan_pearson
value: 56.24953502726514
- type: manhattan_spearman
value: 67.98262125431616
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 21.826928900322066
- type: cos_sim_spearman
value: 49.578506634400405
- type: euclidean_pearson
value: 27.939890138843214
- type: euclidean_spearman
value: 52.71950519136242
- type: manhattan_pearson
value: 26.39878683847546
- type: manhattan_spearman
value: 47.54609580342499
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.27603854632001
- type: cos_sim_spearman
value: 50.709255283710995
- type: euclidean_pearson
value: 59.5419024445929
- type: euclidean_spearman
value: 50.709255283710995
- type: manhattan_pearson
value: 59.03256832438492
- type: manhattan_spearman
value: 61.97797868009122
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.00757054859712
- type: cos_sim_spearman
value: 87.29283629622222
- type: euclidean_pearson
value: 86.54824171775536
- type: euclidean_spearman
value: 87.24364730491402
- type: manhattan_pearson
value: 86.5062156915074
- type: manhattan_spearman
value: 87.15052170378574
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.03549357197389
- type: mrr
value: 95.05437645143527
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.260999999999996
- type: map_at_10
value: 66.259
- type: map_at_100
value: 66.884
- type: map_at_1000
value: 66.912
- type: map_at_3
value: 63.685
- type: map_at_5
value: 65.35499999999999
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 67.5
- type: mrr_at_100
value: 68.013
- type: mrr_at_1000
value: 68.038
- type: mrr_at_3
value: 65.61099999999999
- type: mrr_at_5
value: 66.861
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 70.41
- type: ndcg_at_100
value: 73.10600000000001
- type: ndcg_at_1000
value: 73.846
- type: ndcg_at_3
value: 66.133
- type: ndcg_at_5
value: 68.499
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.232999999999999
- type: precision_at_100
value: 1.0630000000000002
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.667
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 57.260999999999996
- type: recall_at_10
value: 81.94399999999999
- type: recall_at_100
value: 93.867
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.339
- type: recall_at_5
value: 76.25
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74356435643564
- type: cos_sim_ap
value: 93.13411948212683
- type: cos_sim_f1
value: 86.80521991300147
- type: cos_sim_precision
value: 84.00374181478017
- type: cos_sim_recall
value: 89.8
- type: dot_accuracy
value: 99.67920792079208
- type: dot_ap
value: 89.27277565444479
- type: dot_f1
value: 83.9276990718124
- type: dot_precision
value: 82.04393505253104
- type: dot_recall
value: 85.9
- type: euclidean_accuracy
value: 99.74257425742574
- type: euclidean_ap
value: 93.17993008259062
- type: euclidean_f1
value: 86.69396110542476
- type: euclidean_precision
value: 88.78406708595388
- type: euclidean_recall
value: 84.7
- type: manhattan_accuracy
value: 99.74257425742574
- type: manhattan_ap
value: 93.14413755550099
- type: manhattan_f1
value: 86.82483594144371
- type: manhattan_precision
value: 87.66564729867483
- type: manhattan_recall
value: 86
- type: max_accuracy
value: 99.74356435643564
- type: max_ap
value: 93.17993008259062
- type: max_f1
value: 86.82483594144371
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.525863806168566
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.68850574423839
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.71580650644033
- type: mrr
value: 50.50971903913081
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.152190498799484
- type: cos_sim_spearman
value: 29.686180371952727
- type: dot_pearson
value: 27.248664793816342
- type: dot_spearman
value: 28.37748983721745
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.20400000000000001
- type: map_at_10
value: 1.6209999999999998
- type: map_at_100
value: 9.690999999999999
- type: map_at_1000
value: 23.733
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.885
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.56700000000001
- type: mrr_at_100
value: 86.56700000000001
- type: mrr_at_1000
value: 86.56700000000001
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 86.56700000000001
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 71.326
- type: ndcg_at_100
value: 54.208999999999996
- type: ndcg_at_1000
value: 49.252
- type: ndcg_at_3
value: 74.235
- type: ndcg_at_5
value: 73.833
- type: precision_at_1
value: 78
- type: precision_at_10
value: 74.8
- type: precision_at_100
value: 55.50000000000001
- type: precision_at_1000
value: 21.836
- type: precision_at_3
value: 78
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.20400000000000001
- type: recall_at_10
value: 1.894
- type: recall_at_100
value: 13.245999999999999
- type: recall_at_1000
value: 46.373
- type: recall_at_3
value: 0.613
- type: recall_at_5
value: 0.991
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.69999999999999
- type: precision
value: 94.11666666666667
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.20809248554913
- type: f1
value: 63.431048720066066
- type: precision
value: 61.69143958161298
- type: recall
value: 68.20809248554913
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.21951219512195
- type: f1
value: 66.82926829268293
- type: precision
value: 65.1260162601626
- type: recall
value: 71.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.26666666666667
- type: precision
value: 95.8
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.3
- type: f1
value: 99.06666666666666
- type: precision
value: 98.95
- type: recall
value: 99.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.63333333333333
- type: precision
value: 96.26666666666668
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.86666666666666
- type: precision
value: 94.31666666666668
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.01492537313433
- type: f1
value: 40.178867566927266
- type: precision
value: 38.179295828549556
- type: recall
value: 47.01492537313433
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.62537480063796
- type: precision
value: 82.44555555555554
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.48780487804879
- type: f1
value: 75.45644599303138
- type: precision
value: 73.37398373983739
- type: recall
value: 80.48780487804879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.95666666666666
- type: precision
value: 91.125
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.73754556500607
- type: f1
value: 89.65168084244632
- type: precision
value: 88.73025516403402
- type: recall
value: 91.73754556500607
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.04347826086956
- type: f1
value: 76.2128364389234
- type: precision
value: 74.2
- type: recall
value: 81.04347826086956
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.65217391304348
- type: f1
value: 79.4376811594203
- type: precision
value: 77.65797101449274
- type: recall
value: 83.65217391304348
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.5
- type: f1
value: 85.02690476190476
- type: precision
value: 83.96261904761904
- type: recall
value: 87.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.3
- type: f1
value: 86.52333333333333
- type: precision
value: 85.22833333333332
- type: recall
value: 89.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.01809408926418
- type: f1
value: 59.00594446432805
- type: precision
value: 56.827215807915444
- type: recall
value: 65.01809408926418
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.2
- type: f1
value: 88.58
- type: precision
value: 87.33333333333334
- type: recall
value: 91.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.199999999999996
- type: f1
value: 53.299166276284915
- type: precision
value: 51.3383908045977
- type: recall
value: 59.199999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.2
- type: precision
value: 90.25
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.76190476190476
- type: f1
value: 59.867110667110666
- type: precision
value: 58.07390192653351
- type: recall
value: 64.76190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.2
- type: f1
value: 71.48147546897547
- type: precision
value: 69.65409090909091
- type: recall
value: 76.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.8
- type: f1
value: 92.14
- type: precision
value: 91.35833333333333
- type: recall
value: 93.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.2
- type: precision
value: 96.85000000000001
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 92.93333333333334
- type: precision
value: 92.13333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.1
- type: f1
value: 69.14817460317461
- type: precision
value: 67.2515873015873
- type: recall
value: 74.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 94.01333333333335
- type: precision
value: 93.46666666666667
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.9
- type: f1
value: 72.07523809523809
- type: precision
value: 70.19777777777779
- type: recall
value: 76.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.31666666666666
- type: precision
value: 91.43333333333332
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.76666666666668
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.85714285714286
- type: f1
value: 90.92093441150045
- type: precision
value: 90.00449236298293
- type: recall
value: 92.85714285714286
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.16239316239316
- type: f1
value: 91.33903133903132
- type: precision
value: 90.56267806267806
- type: recall
value: 93.16239316239316
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.25666666666666
- type: precision
value: 89.25833333333334
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.22727272727272
- type: f1
value: 87.53030303030303
- type: precision
value: 86.37121212121211
- type: recall
value: 90.22727272727272
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.03563941299791
- type: f1
value: 74.7349505840072
- type: precision
value: 72.9035639412998
- type: recall
value: 79.03563941299791
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97
- type: f1
value: 96.15
- type: precision
value: 95.76666666666668
- type: recall
value: 97
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.26459143968872
- type: f1
value: 71.55642023346303
- type: precision
value: 69.7544932369835
- type: recall
value: 76.26459143968872
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.119658119658126
- type: f1
value: 51.65242165242165
- type: precision
value: 49.41768108434775
- type: recall
value: 58.119658119658126
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.3
- type: f1
value: 69.52055555555555
- type: precision
value: 67.7574938949939
- type: recall
value: 74.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.31666666666666
- type: precision
value: 92.60000000000001
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.63551401869158
- type: f1
value: 72.35202492211837
- type: precision
value: 70.60358255451713
- type: recall
value: 76.63551401869158
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.4811111111111
- type: precision
value: 87.7452380952381
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95
- type: f1
value: 93.60666666666667
- type: precision
value: 92.975
- type: recall
value: 95
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 63.01595782872099
- type: precision
value: 61.596587301587306
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.52999999999999
- type: precision
value: 94
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.28999999999999
- type: precision
value: 92.675
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.75
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.9
- type: f1
value: 89.83
- type: precision
value: 88.92
- type: recall
value: 91.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34222222222223
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.333333333333336
- type: f1
value: 55.31203703703703
- type: precision
value: 53.39971108326371
- type: recall
value: 60.333333333333336
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 12.9
- type: f1
value: 11.099861903031458
- type: precision
value: 10.589187932631877
- type: recall
value: 12.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7
- type: f1
value: 83.0152380952381
- type: precision
value: 81.37833333333333
- type: recall
value: 86.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.39285714285714
- type: f1
value: 56.832482993197274
- type: precision
value: 54.56845238095237
- type: recall
value: 63.39285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.73765093304062
- type: f1
value: 41.555736920720456
- type: precision
value: 39.06874531737319
- type: recall
value: 48.73765093304062
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 41.099999999999994
- type: f1
value: 36.540165945165946
- type: precision
value: 35.05175685425686
- type: recall
value: 41.099999999999994
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.42333333333333
- type: precision
value: 92.75833333333333
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.63333333333334
- type: precision
value: 93.01666666666665
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.64833333333334
- type: precision
value: 71.90282106782105
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.4
- type: f1
value: 54.90521367521367
- type: precision
value: 53.432840025471606
- type: recall
value: 59.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.6
- type: precision
value: 96.2
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 62.25926129426129
- type: precision
value: 60.408376623376626
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.60666666666667
- type: precision
value: 86.45277777777778
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97
- type: precision
value: 96.65
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39746031746031
- type: precision
value: 90.6125
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 32.11678832116788
- type: f1
value: 27.210415386260234
- type: precision
value: 26.20408990846947
- type: recall
value: 32.11678832116788
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.787319277832475
- type: precision
value: 6.3452094433344435
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.08
- type: precision
value: 94.61666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.3
- type: f1
value: 93.88333333333333
- type: precision
value: 93.18333333333332
- type: recall
value: 95.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.11904761904762
- type: f1
value: 80.69444444444444
- type: precision
value: 78.72023809523809
- type: recall
value: 85.11904761904762
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 11.1
- type: f1
value: 9.276381801735853
- type: precision
value: 8.798174603174601
- type: recall
value: 11.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.56107660455487
- type: f1
value: 58.70433569191332
- type: precision
value: 56.896926581464015
- type: recall
value: 63.56107660455487
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.10000000000001
- type: precision
value: 92.35
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 96.01222222222222
- type: precision
value: 95.67083333333332
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.2
- type: f1
value: 7.911555250305249
- type: precision
value: 7.631246556216846
- type: recall
value: 9.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.48917748917748
- type: f1
value: 72.27375798804371
- type: precision
value: 70.14430014430013
- type: recall
value: 77.48917748917748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.09923664122137
- type: f1
value: 72.61541257724463
- type: precision
value: 70.8998380754106
- type: recall
value: 77.09923664122137
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2532751091703
- type: f1
value: 97.69529354682193
- type: precision
value: 97.42843279961184
- type: recall
value: 98.2532751091703
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.8
- type: f1
value: 79.14672619047619
- type: precision
value: 77.59489247311828
- type: recall
value: 82.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.35028248587571
- type: f1
value: 92.86252354048965
- type: precision
value: 92.2080979284369
- type: recall
value: 94.35028248587571
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.282429263935621
- type: precision
value: 5.783274240739785
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 91.025
- type: precision
value: 90.30428571428571
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81
- type: f1
value: 77.8232380952381
- type: precision
value: 76.60194444444444
- type: recall
value: 81
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91
- type: f1
value: 88.70857142857142
- type: precision
value: 87.7
- type: recall
value: 91
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.3
- type: precision
value: 94.76666666666667
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.1
- type: f1
value: 7.001008218834307
- type: precision
value: 6.708329562594269
- type: recall
value: 8.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.1313672922252
- type: f1
value: 84.09070598748882
- type: precision
value: 82.79171454104429
- type: recall
value: 87.1313672922252
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.73333333333332
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.29249011857708
- type: f1
value: 36.981018542283365
- type: precision
value: 35.415877813576024
- type: recall
value: 42.29249011857708
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.80281690140845
- type: f1
value: 80.86854460093896
- type: precision
value: 79.60093896713614
- type: recall
value: 83.80281690140845
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.26946107784431
- type: f1
value: 39.80235464678088
- type: precision
value: 38.14342660001342
- type: recall
value: 45.26946107784431
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.9
- type: precision
value: 92.26666666666668
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.93103448275862
- type: f1
value: 33.15192743764172
- type: precision
value: 31.57456528146183
- type: recall
value: 37.93103448275862
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.01408450704226
- type: f1
value: 63.41549295774648
- type: precision
value: 61.342778895595806
- type: recall
value: 69.01408450704226
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.66666666666667
- type: f1
value: 71.60705960705961
- type: precision
value: 69.60683760683762
- type: recall
value: 76.66666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.48333333333333
- type: precision
value: 93.83333333333333
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 52.81837160751566
- type: f1
value: 48.435977731384824
- type: precision
value: 47.11291973845539
- type: recall
value: 52.81837160751566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.9
- type: f1
value: 38.88962621607783
- type: precision
value: 36.95936507936508
- type: recall
value: 44.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.55374592833876
- type: f1
value: 88.22553125484721
- type: precision
value: 87.26927252985884
- type: recall
value: 90.55374592833876
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.13333333333333
- type: precision
value: 92.45333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.99666666666667
- type: precision
value: 91.26666666666668
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.03937007874016
- type: f1
value: 81.75853018372703
- type: precision
value: 80.34120734908137
- type: recall
value: 85.03937007874016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.3
- type: f1
value: 85.5
- type: precision
value: 84.25833333333334
- type: recall
value: 88.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.51246537396122
- type: f1
value: 60.02297410192148
- type: precision
value: 58.133467727289236
- type: recall
value: 65.51246537396122
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.89
- type: precision
value: 94.39166666666667
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.692307692307686
- type: f1
value: 53.162393162393165
- type: precision
value: 51.70673076923077
- type: recall
value: 57.692307692307686
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.60000000000001
- type: f1
value: 89.21190476190475
- type: precision
value: 88.08666666666667
- type: recall
value: 91.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88
- type: f1
value: 85.47
- type: precision
value: 84.43266233766234
- type: recall
value: 88
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 90.64999999999999
- type: precision
value: 89.68333333333332
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.30660377358491
- type: f1
value: 76.33044137466307
- type: precision
value: 74.78970125786164
- type: recall
value: 80.30660377358491
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.44
- type: precision
value: 94.99166666666666
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.53284671532847
- type: f1
value: 95.37712895377129
- type: precision
value: 94.7992700729927
- type: recall
value: 96.53284671532847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89
- type: f1
value: 86.23190476190476
- type: precision
value: 85.035
- type: recall
value: 89
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.585
- type: map_at_10
value: 9.012
- type: map_at_100
value: 14.027000000000001
- type: map_at_1000
value: 15.565000000000001
- type: map_at_3
value: 5.032
- type: map_at_5
value: 6.657
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 45.377
- type: mrr_at_100
value: 46.119
- type: mrr_at_1000
value: 46.127
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 42.585
- type: ndcg_at_1
value: 27.551
- type: ndcg_at_10
value: 23.395
- type: ndcg_at_100
value: 33.342
- type: ndcg_at_1000
value: 45.523
- type: ndcg_at_3
value: 25.158
- type: ndcg_at_5
value: 23.427
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 21.429000000000002
- type: precision_at_100
value: 6.714
- type: precision_at_1000
value: 1.473
- type: precision_at_3
value: 27.211000000000002
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.585
- type: recall_at_10
value: 15.418999999999999
- type: recall_at_100
value: 42.485
- type: recall_at_1000
value: 79.536
- type: recall_at_3
value: 6.239999999999999
- type: recall_at_5
value: 8.996
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.3234
- type: ap
value: 14.361688653847423
- type: f1
value: 54.819068624319044
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.97792869269949
- type: f1
value: 62.28965628513728
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 38.90540145385218
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.53513739047506
- type: cos_sim_ap
value: 75.27741586677557
- type: cos_sim_f1
value: 69.18792902473774
- type: cos_sim_precision
value: 67.94708725515136
- type: cos_sim_recall
value: 70.47493403693932
- type: dot_accuracy
value: 84.7052512368123
- type: dot_ap
value: 69.36075482849378
- type: dot_f1
value: 64.44688376631296
- type: dot_precision
value: 59.92288500793831
- type: dot_recall
value: 69.70976253298153
- type: euclidean_accuracy
value: 86.60666388508076
- type: euclidean_ap
value: 75.47512772621097
- type: euclidean_f1
value: 69.413872536473
- type: euclidean_precision
value: 67.39562624254472
- type: euclidean_recall
value: 71.55672823218997
- type: manhattan_accuracy
value: 86.52917684925792
- type: manhattan_ap
value: 75.34000110496703
- type: manhattan_f1
value: 69.28489190226429
- type: manhattan_precision
value: 67.24608889992551
- type: manhattan_recall
value: 71.45118733509234
- type: max_accuracy
value: 86.60666388508076
- type: max_ap
value: 75.47512772621097
- type: max_f1
value: 69.413872536473
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.01695967710637
- type: cos_sim_ap
value: 85.8298270742901
- type: cos_sim_f1
value: 78.46988128389272
- type: cos_sim_precision
value: 74.86017897091722
- type: cos_sim_recall
value: 82.44533415460425
- type: dot_accuracy
value: 88.19420188613343
- type: dot_ap
value: 83.82679165901324
- type: dot_f1
value: 76.55833777304208
- type: dot_precision
value: 75.6884875846501
- type: dot_recall
value: 77.44841392054204
- type: euclidean_accuracy
value: 89.03054294252338
- type: euclidean_ap
value: 85.89089555185325
- type: euclidean_f1
value: 78.62997658079624
- type: euclidean_precision
value: 74.92329149232914
- type: euclidean_recall
value: 82.72251308900523
- type: manhattan_accuracy
value: 89.0266620095471
- type: manhattan_ap
value: 85.86458997929147
- type: manhattan_f1
value: 78.50685331000291
- type: manhattan_precision
value: 74.5499861534201
- type: manhattan_recall
value: 82.90729904527257
- type: max_accuracy
value: 89.03054294252338
- type: max_ap
value: 85.89089555185325
- type: max_f1
value: 78.62997658079624
---
# nnch/multilingual-e5-large-Q4_K_M-GGUF
This model was converted to GGUF format from [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo nnch/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo nnch/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo nnch/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo nnch/multilingual-e5-large-Q4_K_M-GGUF --hf-file multilingual-e5-large-q4_k_m.gguf -c 2048
```
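Since the underlying model is an embedding model rather than a chat model, you will usually want sentence embeddings instead of text completions. Below is a minimal sketch of that workflow with `llama-cpp-python`, assuming a build whose llama.cpp backend supports this encoder architecture; the `"query: "`/`"passage: "` prefixes follow the upstream multilingual-e5-large usage notes.
```python
# Hedged sketch: embed two texts with the quantized checkpoint via llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub numpy`.
import numpy as np
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="nnch/multilingual-e5-large-Q4_K_M-GGUF",
    filename="multilingual-e5-large-q4_k_m.gguf",
)
llm = Llama(model_path=model_path, embedding=True)  # embedding mode, no text generation

query = "query: What is the capital of France?"
passage = "passage: Paris is the capital and most populous city of France."

# create_embedding returns an OpenAI-style payload; depending on the llama.cpp
# pooling settings the vector may be per token, so this sketch assumes pooled output.
vecs = [
    np.asarray(llm.create_embedding(text)["data"][0]["embedding"], dtype=float)
    for text in (query, passage)
]
similarity = float(vecs[0] @ vecs[1] / (np.linalg.norm(vecs[0]) * np.linalg.norm(vecs[1])))
print(f"cosine similarity: {similarity:.4f}")
```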
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-06T22:34:08 | 2024-11-06T22:51:35 | 59 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-deduped-v0 - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-deduped-v0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-410m-deduped-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q2_K.gguf) | Q2_K | 0.16GB |
| [pythia-410m-deduped-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q3_K_S.gguf) | Q3_K_S | 0.18GB |
| [pythia-410m-deduped-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q3_K.gguf) | Q3_K | 0.21GB |
| [pythia-410m-deduped-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q3_K_M.gguf) | Q3_K_M | 0.21GB |
| [pythia-410m-deduped-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q3_K_L.gguf) | Q3_K_L | 0.22GB |
| [pythia-410m-deduped-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.IQ4_XS.gguf) | IQ4_XS | 0.22GB |
| [pythia-410m-deduped-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q4_0.gguf) | Q4_0 | 0.23GB |
| [pythia-410m-deduped-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.IQ4_NL.gguf) | IQ4_NL | 0.23GB |
| [pythia-410m-deduped-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q4_K_S.gguf) | Q4_K_S | 0.23GB |
| [pythia-410m-deduped-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q4_K.gguf) | Q4_K | 0.25GB |
| [pythia-410m-deduped-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q4_K_M.gguf) | Q4_K_M | 0.25GB |
| [pythia-410m-deduped-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q4_1.gguf) | Q4_1 | 0.25GB |
| [pythia-410m-deduped-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q5_0.gguf) | Q5_0 | 0.27GB |
| [pythia-410m-deduped-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q5_K_S.gguf) | Q5_K_S | 0.27GB |
| [pythia-410m-deduped-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q5_K.gguf) | Q5_K | 0.28GB |
| [pythia-410m-deduped-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q5_K_M.gguf) | Q5_K_M | 0.28GB |
| [pythia-410m-deduped-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q5_1.gguf) | Q5_1 | 0.29GB |
| [pythia-410m-deduped-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q6_K.gguf) | Q6_K | 0.31GB |
| [pythia-410m-deduped-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf/blob/main/pythia-410m-deduped-v0.Q8_0.gguf) | Q8_0 | 0.4GB |
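As a quick illustration (not part of the original quantization notes), any of the files above can be pulled from the Hub and run locally with `llama-cpp-python`; the Q4_K_M file here is an arbitrary choice from the table, and the same files can also be used directly with the llama.cpp CLI or server binaries.
```python
# Hedged sketch: download one quant listed above and run a plain completion.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-gguf",
    filename="pythia-410m-deduped-v0.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
# Pythia is a base model, not instruction-tuned, so prompt it as plain text.
out = llm("Hello, I am", max_tokens=32)
print(out["choices"][0]["text"])
```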
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
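As an illustrative aside (not from the original card), the checkpoint branches can be enumerated programmatically; this sketch assumes a recent `huggingface_hub` release that provides `list_repo_refs`.
```python
# Hedged sketch: list the stepNNN checkpoint branches of the deduped 410M model.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("EleutherAI/pythia-410m-deduped")
step_branches = sorted(
    (b.name for b in refs.branches if b.name.startswith("step")),
    key=lambda name: int(name.removeprefix("step")),
)
# The card describes 143 evenly spaced checkpoints hosted as branches, alongside `main`.
print(len(step_branches), "step branches")
print(step_branches[:3], "...", step_branches[-1])
```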
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself a
product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting them to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset had been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for the equivalent of 143,000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71,500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
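For readers who want to double-check the bookkeeping above, here is a small sketch of the implied arithmetic; the values are taken directly from this section.
```python
# Hedged sanity check of the token accounting described in this section.
batch_tokens = 2_097_152                 # 2M-token batch size
steps = 143_000                          # equivalent training steps
tokens_per_checkpoint = 2_097_152_000    # checkpoint spacing quoted above

total_tokens = batch_tokens * steps
assert total_tokens == 299_892_736_000                 # matches the quoted total
assert tokens_per_checkpoint == batch_tokens * 1_000   # one checkpoint per 1,000 (2M-batch) steps
assert total_tokens // tokens_per_checkpoint == 143    # 143 evenly spaced checkpoints
print(f"{total_tokens:,} tokens, {total_tokens // tokens_per_checkpoint} checkpoints")
```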
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
tensorblock/GritLM-7B-GGUF | tensorblock | text-generation | [
"gguf",
"mteb",
"TensorBlock",
"GGUF",
"text-generation",
"dataset:GritLM/tulu2",
"base_model:GritLM/GritLM-7B",
"base_model:quantized:GritLM/GritLM-7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-11-11T14:22:59 | 2024-11-16T01:08:22 | 59 | 0 | ---
base_model: GritLM/GritLM-7B
datasets:
- GritLM/tulu2
license: apache-2.0
pipeline_tag: text-generation
tags:
- mteb
- TensorBlock
- GGUF
inference: true
model-index:
- name: GritLM-7B
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 81.17910447761194
- type: ap
value: 46.26260671758199
- type: f1
value: 75.44565719934167
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.5161
- type: ap
value: 94.79131981460425
- type: f1
value: 96.51506148413065
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 57.806000000000004
- type: f1
value: 56.78350156257903
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.478
- type: map_at_10
value: 54.955
- type: map_at_100
value: 54.955
- type: map_at_1000
value: 54.955
- type: map_at_3
value: 50.888999999999996
- type: map_at_5
value: 53.349999999999994
- type: mrr_at_1
value: 39.757999999999996
- type: mrr_at_10
value: 55.449000000000005
- type: mrr_at_100
value: 55.449000000000005
- type: mrr_at_1000
value: 55.449000000000005
- type: mrr_at_3
value: 51.37500000000001
- type: mrr_at_5
value: 53.822
- type: ndcg_at_1
value: 38.478
- type: ndcg_at_10
value: 63.239999999999995
- type: ndcg_at_100
value: 63.239999999999995
- type: ndcg_at_1000
value: 63.239999999999995
- type: ndcg_at_3
value: 54.935
- type: ndcg_at_5
value: 59.379000000000005
- type: precision_at_1
value: 38.478
- type: precision_at_10
value: 8.933
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.089
- type: precision_at_3
value: 22.214
- type: precision_at_5
value: 15.491
- type: recall_at_1
value: 38.478
- type: recall_at_10
value: 89.331
- type: recall_at_100
value: 89.331
- type: recall_at_1000
value: 89.331
- type: recall_at_3
value: 66.643
- type: recall_at_5
value: 77.45400000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 51.67144081472449
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 48.11256154264126
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.33801955487878
- type: mrr
value: 80.71549487754474
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.1935203751726
- type: cos_sim_spearman
value: 86.35497970498659
- type: euclidean_pearson
value: 85.46910708503744
- type: euclidean_spearman
value: 85.13928935405485
- type: manhattan_pearson
value: 85.68373836333303
- type: manhattan_spearman
value: 85.40013867117746
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.46753246753248
- type: f1
value: 88.43006344981134
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.86793640310432
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 39.80291334130727
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.421
- type: map_at_10
value: 52.349000000000004
- type: map_at_100
value: 52.349000000000004
- type: map_at_1000
value: 52.349000000000004
- type: map_at_3
value: 48.17
- type: map_at_5
value: 50.432
- type: mrr_at_1
value: 47.353
- type: mrr_at_10
value: 58.387
- type: mrr_at_100
value: 58.387
- type: mrr_at_1000
value: 58.387
- type: mrr_at_3
value: 56.199
- type: mrr_at_5
value: 57.487
- type: ndcg_at_1
value: 47.353
- type: ndcg_at_10
value: 59.202
- type: ndcg_at_100
value: 58.848
- type: ndcg_at_1000
value: 58.831999999999994
- type: ndcg_at_3
value: 54.112
- type: ndcg_at_5
value: 56.312
- type: precision_at_1
value: 47.353
- type: precision_at_10
value: 11.459
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 26.133
- type: precision_at_5
value: 18.627
- type: recall_at_1
value: 38.421
- type: recall_at_10
value: 71.89
- type: recall_at_100
value: 71.89
- type: recall_at_1000
value: 71.89
- type: recall_at_3
value: 56.58
- type: recall_at_5
value: 63.125
- type: map_at_1
value: 38.025999999999996
- type: map_at_10
value: 50.590999999999994
- type: map_at_100
value: 51.99700000000001
- type: map_at_1000
value: 52.11599999999999
- type: map_at_3
value: 47.435
- type: map_at_5
value: 49.236000000000004
- type: mrr_at_1
value: 48.28
- type: mrr_at_10
value: 56.814
- type: mrr_at_100
value: 57.446
- type: mrr_at_1000
value: 57.476000000000006
- type: mrr_at_3
value: 54.958
- type: mrr_at_5
value: 56.084999999999994
- type: ndcg_at_1
value: 48.28
- type: ndcg_at_10
value: 56.442
- type: ndcg_at_100
value: 60.651999999999994
- type: ndcg_at_1000
value: 62.187000000000005
- type: ndcg_at_3
value: 52.866
- type: ndcg_at_5
value: 54.515
- type: precision_at_1
value: 48.28
- type: precision_at_10
value: 10.586
- type: precision_at_100
value: 1.6310000000000002
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 25.945
- type: precision_at_5
value: 18.076
- type: recall_at_1
value: 38.025999999999996
- type: recall_at_10
value: 66.11399999999999
- type: recall_at_100
value: 83.339
- type: recall_at_1000
value: 92.413
- type: recall_at_3
value: 54.493
- type: recall_at_5
value: 59.64699999999999
- type: map_at_1
value: 47.905
- type: map_at_10
value: 61.58
- type: map_at_100
value: 62.605
- type: map_at_1000
value: 62.637
- type: map_at_3
value: 58.074000000000005
- type: map_at_5
value: 60.260000000000005
- type: mrr_at_1
value: 54.42
- type: mrr_at_10
value: 64.847
- type: mrr_at_100
value: 65.403
- type: mrr_at_1000
value: 65.41900000000001
- type: mrr_at_3
value: 62.675000000000004
- type: mrr_at_5
value: 64.101
- type: ndcg_at_1
value: 54.42
- type: ndcg_at_10
value: 67.394
- type: ndcg_at_100
value: 70.846
- type: ndcg_at_1000
value: 71.403
- type: ndcg_at_3
value: 62.025
- type: ndcg_at_5
value: 65.032
- type: precision_at_1
value: 54.42
- type: precision_at_10
value: 10.646
- type: precision_at_100
value: 1.325
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 27.398
- type: precision_at_5
value: 18.796
- type: recall_at_1
value: 47.905
- type: recall_at_10
value: 80.84599999999999
- type: recall_at_100
value: 95.078
- type: recall_at_1000
value: 98.878
- type: recall_at_3
value: 67.05600000000001
- type: recall_at_5
value: 74.261
- type: map_at_1
value: 30.745
- type: map_at_10
value: 41.021
- type: map_at_100
value: 41.021
- type: map_at_1000
value: 41.021
- type: map_at_3
value: 37.714999999999996
- type: map_at_5
value: 39.766
- type: mrr_at_1
value: 33.559
- type: mrr_at_10
value: 43.537
- type: mrr_at_100
value: 43.537
- type: mrr_at_1000
value: 43.537
- type: mrr_at_3
value: 40.546
- type: mrr_at_5
value: 42.439
- type: ndcg_at_1
value: 33.559
- type: ndcg_at_10
value: 46.781
- type: ndcg_at_100
value: 46.781
- type: ndcg_at_1000
value: 46.781
- type: ndcg_at_3
value: 40.516000000000005
- type: ndcg_at_5
value: 43.957
- type: precision_at_1
value: 33.559
- type: precision_at_10
value: 7.198
- type: precision_at_100
value: 0.72
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 17.1
- type: precision_at_5
value: 12.316
- type: recall_at_1
value: 30.745
- type: recall_at_10
value: 62.038000000000004
- type: recall_at_100
value: 62.038000000000004
- type: recall_at_1000
value: 62.038000000000004
- type: recall_at_3
value: 45.378
- type: recall_at_5
value: 53.580000000000005
- type: map_at_1
value: 19.637999999999998
- type: map_at_10
value: 31.05
- type: map_at_100
value: 31.05
- type: map_at_1000
value: 31.05
- type: map_at_3
value: 27.628000000000004
- type: map_at_5
value: 29.767
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 36.131
- type: mrr_at_100
value: 36.131
- type: mrr_at_1000
value: 36.131
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 35.143
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 37.478
- type: ndcg_at_100
value: 37.469
- type: ndcg_at_1000
value: 37.469
- type: ndcg_at_3
value: 31.757999999999996
- type: ndcg_at_5
value: 34.821999999999996
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.188999999999999
- type: precision_at_100
value: 0.719
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 15.837000000000002
- type: precision_at_5
value: 11.841
- type: recall_at_1
value: 19.637999999999998
- type: recall_at_10
value: 51.836000000000006
- type: recall_at_100
value: 51.836000000000006
- type: recall_at_1000
value: 51.836000000000006
- type: recall_at_3
value: 36.384
- type: recall_at_5
value: 43.964
- type: map_at_1
value: 34.884
- type: map_at_10
value: 47.88
- type: map_at_100
value: 47.88
- type: map_at_1000
value: 47.88
- type: map_at_3
value: 43.85
- type: map_at_5
value: 46.414
- type: mrr_at_1
value: 43.022
- type: mrr_at_10
value: 53.569
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.569
- type: mrr_at_3
value: 51.075
- type: mrr_at_5
value: 52.725
- type: ndcg_at_1
value: 43.022
- type: ndcg_at_10
value: 54.461000000000006
- type: ndcg_at_100
value: 54.388000000000005
- type: ndcg_at_1000
value: 54.388000000000005
- type: ndcg_at_3
value: 48.864999999999995
- type: ndcg_at_5
value: 52.032000000000004
- type: precision_at_1
value: 43.022
- type: precision_at_10
value: 9.885
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 23.612
- type: precision_at_5
value: 16.997
- type: recall_at_1
value: 34.884
- type: recall_at_10
value: 68.12899999999999
- type: recall_at_100
value: 68.12899999999999
- type: recall_at_1000
value: 68.12899999999999
- type: recall_at_3
value: 52.428
- type: recall_at_5
value: 60.662000000000006
- type: map_at_1
value: 31.588
- type: map_at_10
value: 43.85
- type: map_at_100
value: 45.317
- type: map_at_1000
value: 45.408
- type: map_at_3
value: 39.73
- type: map_at_5
value: 42.122
- type: mrr_at_1
value: 38.927
- type: mrr_at_10
value: 49.582
- type: mrr_at_100
value: 50.39
- type: mrr_at_1000
value: 50.426
- type: mrr_at_3
value: 46.518
- type: mrr_at_5
value: 48.271
- type: ndcg_at_1
value: 38.927
- type: ndcg_at_10
value: 50.605999999999995
- type: ndcg_at_100
value: 56.22200000000001
- type: ndcg_at_1000
value: 57.724
- type: ndcg_at_3
value: 44.232
- type: ndcg_at_5
value: 47.233999999999995
- type: precision_at_1
value: 38.927
- type: precision_at_10
value: 9.429
- type: precision_at_100
value: 1.435
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 21.271
- type: precision_at_5
value: 15.434000000000001
- type: recall_at_1
value: 31.588
- type: recall_at_10
value: 64.836
- type: recall_at_100
value: 88.066
- type: recall_at_1000
value: 97.748
- type: recall_at_3
value: 47.128
- type: recall_at_5
value: 54.954
- type: map_at_1
value: 31.956083333333336
- type: map_at_10
value: 43.33483333333333
- type: map_at_100
value: 44.64883333333333
- type: map_at_1000
value: 44.75
- type: map_at_3
value: 39.87741666666666
- type: map_at_5
value: 41.86766666666667
- type: mrr_at_1
value: 38.06341666666667
- type: mrr_at_10
value: 47.839666666666666
- type: mrr_at_100
value: 48.644000000000005
- type: mrr_at_1000
value: 48.68566666666667
- type: mrr_at_3
value: 45.26358333333334
- type: mrr_at_5
value: 46.790000000000006
- type: ndcg_at_1
value: 38.06341666666667
- type: ndcg_at_10
value: 49.419333333333334
- type: ndcg_at_100
value: 54.50166666666667
- type: ndcg_at_1000
value: 56.161166666666674
- type: ndcg_at_3
value: 43.982416666666666
- type: ndcg_at_5
value: 46.638083333333334
- type: precision_at_1
value: 38.06341666666667
- type: precision_at_10
value: 8.70858333333333
- type: precision_at_100
value: 1.327
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.37816666666667
- type: precision_at_5
value: 14.516333333333334
- type: recall_at_1
value: 31.956083333333336
- type: recall_at_10
value: 62.69458333333334
- type: recall_at_100
value: 84.46433333333334
- type: recall_at_1000
value: 95.58449999999999
- type: recall_at_3
value: 47.52016666666666
- type: recall_at_5
value: 54.36066666666666
- type: map_at_1
value: 28.912
- type: map_at_10
value: 38.291
- type: map_at_100
value: 39.44
- type: map_at_1000
value: 39.528
- type: map_at_3
value: 35.638
- type: map_at_5
value: 37.218
- type: mrr_at_1
value: 32.822
- type: mrr_at_10
value: 41.661
- type: mrr_at_100
value: 42.546
- type: mrr_at_1000
value: 42.603
- type: mrr_at_3
value: 39.238
- type: mrr_at_5
value: 40.726
- type: ndcg_at_1
value: 32.822
- type: ndcg_at_10
value: 43.373
- type: ndcg_at_100
value: 48.638
- type: ndcg_at_1000
value: 50.654999999999994
- type: ndcg_at_3
value: 38.643
- type: ndcg_at_5
value: 41.126000000000005
- type: precision_at_1
value: 32.822
- type: precision_at_10
value: 6.8709999999999996
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 16.82
- type: precision_at_5
value: 11.718
- type: recall_at_1
value: 28.912
- type: recall_at_10
value: 55.376999999999995
- type: recall_at_100
value: 79.066
- type: recall_at_1000
value: 93.664
- type: recall_at_3
value: 42.569
- type: recall_at_5
value: 48.719
- type: map_at_1
value: 22.181
- type: map_at_10
value: 31.462
- type: map_at_100
value: 32.73
- type: map_at_1000
value: 32.848
- type: map_at_3
value: 28.57
- type: map_at_5
value: 30.182
- type: mrr_at_1
value: 27.185
- type: mrr_at_10
value: 35.846000000000004
- type: mrr_at_100
value: 36.811
- type: mrr_at_1000
value: 36.873
- type: mrr_at_3
value: 33.437
- type: mrr_at_5
value: 34.813
- type: ndcg_at_1
value: 27.185
- type: ndcg_at_10
value: 36.858000000000004
- type: ndcg_at_100
value: 42.501
- type: ndcg_at_1000
value: 44.945
- type: ndcg_at_3
value: 32.066
- type: ndcg_at_5
value: 34.29
- type: precision_at_1
value: 27.185
- type: precision_at_10
value: 6.752
- type: precision_at_100
value: 1.111
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 15.290000000000001
- type: precision_at_5
value: 11.004999999999999
- type: recall_at_1
value: 22.181
- type: recall_at_10
value: 48.513
- type: recall_at_100
value: 73.418
- type: recall_at_1000
value: 90.306
- type: recall_at_3
value: 35.003
- type: recall_at_5
value: 40.876000000000005
- type: map_at_1
value: 33.934999999999995
- type: map_at_10
value: 44.727
- type: map_at_100
value: 44.727
- type: map_at_1000
value: 44.727
- type: map_at_3
value: 40.918
- type: map_at_5
value: 42.961
- type: mrr_at_1
value: 39.646
- type: mrr_at_10
value: 48.898
- type: mrr_at_100
value: 48.898
- type: mrr_at_1000
value: 48.898
- type: mrr_at_3
value: 45.896
- type: mrr_at_5
value: 47.514
- type: ndcg_at_1
value: 39.646
- type: ndcg_at_10
value: 50.817
- type: ndcg_at_100
value: 50.803
- type: ndcg_at_1000
value: 50.803
- type: ndcg_at_3
value: 44.507999999999996
- type: ndcg_at_5
value: 47.259
- type: precision_at_1
value: 39.646
- type: precision_at_10
value: 8.759
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.08800000000000001
- type: precision_at_3
value: 20.274
- type: precision_at_5
value: 14.366000000000001
- type: recall_at_1
value: 33.934999999999995
- type: recall_at_10
value: 65.037
- type: recall_at_100
value: 65.037
- type: recall_at_1000
value: 65.037
- type: recall_at_3
value: 47.439
- type: recall_at_5
value: 54.567
- type: map_at_1
value: 32.058
- type: map_at_10
value: 43.137
- type: map_at_100
value: 43.137
- type: map_at_1000
value: 43.137
- type: map_at_3
value: 39.882
- type: map_at_5
value: 41.379
- type: mrr_at_1
value: 38.933
- type: mrr_at_10
value: 48.344
- type: mrr_at_100
value: 48.344
- type: mrr_at_1000
value: 48.344
- type: mrr_at_3
value: 45.652
- type: mrr_at_5
value: 46.877
- type: ndcg_at_1
value: 38.933
- type: ndcg_at_10
value: 49.964
- type: ndcg_at_100
value: 49.242000000000004
- type: ndcg_at_1000
value: 49.222
- type: ndcg_at_3
value: 44.605
- type: ndcg_at_5
value: 46.501999999999995
- type: precision_at_1
value: 38.933
- type: precision_at_10
value: 9.427000000000001
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 20.685000000000002
- type: precision_at_5
value: 14.585
- type: recall_at_1
value: 32.058
- type: recall_at_10
value: 63.074
- type: recall_at_100
value: 63.074
- type: recall_at_1000
value: 63.074
- type: recall_at_3
value: 47.509
- type: recall_at_5
value: 52.455
- type: map_at_1
value: 26.029000000000003
- type: map_at_10
value: 34.646
- type: map_at_100
value: 34.646
- type: map_at_1000
value: 34.646
- type: map_at_3
value: 31.456
- type: map_at_5
value: 33.138
- type: mrr_at_1
value: 28.281
- type: mrr_at_10
value: 36.905
- type: mrr_at_100
value: 36.905
- type: mrr_at_1000
value: 36.905
- type: mrr_at_3
value: 34.011
- type: mrr_at_5
value: 35.638
- type: ndcg_at_1
value: 28.281
- type: ndcg_at_10
value: 40.159
- type: ndcg_at_100
value: 40.159
- type: ndcg_at_1000
value: 40.159
- type: ndcg_at_3
value: 33.995
- type: ndcg_at_5
value: 36.836999999999996
- type: precision_at_1
value: 28.281
- type: precision_at_10
value: 6.358999999999999
- type: precision_at_100
value: 0.636
- type: precision_at_1000
value: 0.064
- type: precision_at_3
value: 14.233
- type: precision_at_5
value: 10.314
- type: recall_at_1
value: 26.029000000000003
- type: recall_at_10
value: 55.08
- type: recall_at_100
value: 55.08
- type: recall_at_1000
value: 55.08
- type: recall_at_3
value: 38.487
- type: recall_at_5
value: 45.308
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.842999999999998
- type: map_at_10
value: 22.101000000000003
- type: map_at_100
value: 24.319
- type: map_at_1000
value: 24.51
- type: map_at_3
value: 18.372
- type: map_at_5
value: 20.323
- type: mrr_at_1
value: 27.948
- type: mrr_at_10
value: 40.321
- type: mrr_at_100
value: 41.262
- type: mrr_at_1000
value: 41.297
- type: mrr_at_3
value: 36.558
- type: mrr_at_5
value: 38.824999999999996
- type: ndcg_at_1
value: 27.948
- type: ndcg_at_10
value: 30.906
- type: ndcg_at_100
value: 38.986
- type: ndcg_at_1000
value: 42.136
- type: ndcg_at_3
value: 24.911
- type: ndcg_at_5
value: 27.168999999999997
- type: precision_at_1
value: 27.948
- type: precision_at_10
value: 9.798
- type: precision_at_100
value: 1.8399999999999999
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 18.328
- type: precision_at_5
value: 14.502
- type: recall_at_1
value: 12.842999999999998
- type: recall_at_10
value: 37.245
- type: recall_at_100
value: 64.769
- type: recall_at_1000
value: 82.055
- type: recall_at_3
value: 23.159
- type: recall_at_5
value: 29.113
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.934000000000001
- type: map_at_10
value: 21.915000000000003
- type: map_at_100
value: 21.915000000000003
- type: map_at_1000
value: 21.915000000000003
- type: map_at_3
value: 14.623
- type: map_at_5
value: 17.841
- type: mrr_at_1
value: 71.25
- type: mrr_at_10
value: 78.994
- type: mrr_at_100
value: 78.994
- type: mrr_at_1000
value: 78.994
- type: mrr_at_3
value: 77.208
- type: mrr_at_5
value: 78.55799999999999
- type: ndcg_at_1
value: 60.62499999999999
- type: ndcg_at_10
value: 46.604
- type: ndcg_at_100
value: 35.653
- type: ndcg_at_1000
value: 35.531
- type: ndcg_at_3
value: 50.605
- type: ndcg_at_5
value: 48.730000000000004
- type: precision_at_1
value: 71.25
- type: precision_at_10
value: 37.75
- type: precision_at_100
value: 3.775
- type: precision_at_1000
value: 0.377
- type: precision_at_3
value: 54.417
- type: precision_at_5
value: 48.15
- type: recall_at_1
value: 8.934000000000001
- type: recall_at_10
value: 28.471000000000004
- type: recall_at_100
value: 28.471000000000004
- type: recall_at_1000
value: 28.471000000000004
- type: recall_at_3
value: 16.019
- type: recall_at_5
value: 21.410999999999998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.81
- type: f1
value: 47.987573380720114
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.81899999999999
- type: map_at_10
value: 78.034
- type: map_at_100
value: 78.034
- type: map_at_1000
value: 78.034
- type: map_at_3
value: 76.43100000000001
- type: map_at_5
value: 77.515
- type: mrr_at_1
value: 71.542
- type: mrr_at_10
value: 81.638
- type: mrr_at_100
value: 81.638
- type: mrr_at_1000
value: 81.638
- type: mrr_at_3
value: 80.403
- type: mrr_at_5
value: 81.256
- type: ndcg_at_1
value: 71.542
- type: ndcg_at_10
value: 82.742
- type: ndcg_at_100
value: 82.741
- type: ndcg_at_1000
value: 82.741
- type: ndcg_at_3
value: 80.039
- type: ndcg_at_5
value: 81.695
- type: precision_at_1
value: 71.542
- type: precision_at_10
value: 10.387
- type: precision_at_100
value: 1.039
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 31.447999999999997
- type: precision_at_5
value: 19.91
- type: recall_at_1
value: 66.81899999999999
- type: recall_at_10
value: 93.372
- type: recall_at_100
value: 93.372
- type: recall_at_1000
value: 93.372
- type: recall_at_3
value: 86.33
- type: recall_at_5
value: 90.347
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.158
- type: map_at_10
value: 52.017
- type: map_at_100
value: 54.259
- type: map_at_1000
value: 54.367
- type: map_at_3
value: 45.738
- type: map_at_5
value: 49.283
- type: mrr_at_1
value: 57.87
- type: mrr_at_10
value: 66.215
- type: mrr_at_100
value: 66.735
- type: mrr_at_1000
value: 66.75
- type: mrr_at_3
value: 64.043
- type: mrr_at_5
value: 65.116
- type: ndcg_at_1
value: 57.87
- type: ndcg_at_10
value: 59.946999999999996
- type: ndcg_at_100
value: 66.31099999999999
- type: ndcg_at_1000
value: 67.75999999999999
- type: ndcg_at_3
value: 55.483000000000004
- type: ndcg_at_5
value: 56.891000000000005
- type: precision_at_1
value: 57.87
- type: precision_at_10
value: 16.497
- type: precision_at_100
value: 2.321
- type: precision_at_1000
value: 0.258
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.067999999999998
- type: recall_at_1
value: 31.158
- type: recall_at_10
value: 67.381
- type: recall_at_100
value: 89.464
- type: recall_at_1000
value: 97.989
- type: recall_at_3
value: 50.553000000000004
- type: recall_at_5
value: 57.824
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 42.073
- type: map_at_10
value: 72.418
- type: map_at_100
value: 73.175
- type: map_at_1000
value: 73.215
- type: map_at_3
value: 68.791
- type: map_at_5
value: 71.19
- type: mrr_at_1
value: 84.146
- type: mrr_at_10
value: 88.994
- type: mrr_at_100
value: 89.116
- type: mrr_at_1000
value: 89.12
- type: mrr_at_3
value: 88.373
- type: mrr_at_5
value: 88.82
- type: ndcg_at_1
value: 84.146
- type: ndcg_at_10
value: 79.404
- type: ndcg_at_100
value: 81.83200000000001
- type: ndcg_at_1000
value: 82.524
- type: ndcg_at_3
value: 74.595
- type: ndcg_at_5
value: 77.474
- type: precision_at_1
value: 84.146
- type: precision_at_10
value: 16.753999999999998
- type: precision_at_100
value: 1.8599999999999999
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 48.854
- type: precision_at_5
value: 31.579
- type: recall_at_1
value: 42.073
- type: recall_at_10
value: 83.768
- type: recall_at_100
value: 93.018
- type: recall_at_1000
value: 97.481
- type: recall_at_3
value: 73.282
- type: recall_at_5
value: 78.947
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.9968
- type: ap
value: 92.93892195862824
- type: f1
value: 94.99327998213761
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.698
- type: map_at_10
value: 34.585
- type: map_at_100
value: 35.782000000000004
- type: map_at_1000
value: 35.825
- type: map_at_3
value: 30.397999999999996
- type: map_at_5
value: 32.72
- type: mrr_at_1
value: 22.192
- type: mrr_at_10
value: 35.085
- type: mrr_at_100
value: 36.218
- type: mrr_at_1000
value: 36.256
- type: mrr_at_3
value: 30.986000000000004
- type: mrr_at_5
value: 33.268
- type: ndcg_at_1
value: 22.192
- type: ndcg_at_10
value: 41.957
- type: ndcg_at_100
value: 47.658
- type: ndcg_at_1000
value: 48.697
- type: ndcg_at_3
value: 33.433
- type: ndcg_at_5
value: 37.551
- type: precision_at_1
value: 22.192
- type: precision_at_10
value: 6.781
- type: precision_at_100
value: 0.963
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.365
- type: precision_at_5
value: 10.713000000000001
- type: recall_at_1
value: 21.698
- type: recall_at_10
value: 64.79
- type: recall_at_100
value: 91.071
- type: recall_at_1000
value: 98.883
- type: recall_at_3
value: 41.611
- type: recall_at_5
value: 51.459999999999994
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.15823073415413
- type: f1
value: 96.00362034963248
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.12722298221614
- type: f1
value: 70.46888967516227
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.77673167451245
- type: f1
value: 77.60202561132175
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.09145931405514
- type: f1
value: 81.7701921473406
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.52153488185864
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 36.80090398444147
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.807141746058605
- type: mrr
value: 32.85025611455029
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.920999999999999
- type: map_at_10
value: 16.049
- type: map_at_100
value: 16.049
- type: map_at_1000
value: 16.049
- type: map_at_3
value: 11.865
- type: map_at_5
value: 13.657
- type: mrr_at_1
value: 53.87
- type: mrr_at_10
value: 62.291
- type: mrr_at_100
value: 62.291
- type: mrr_at_1000
value: 62.291
- type: mrr_at_3
value: 60.681
- type: mrr_at_5
value: 61.61
- type: ndcg_at_1
value: 51.23799999999999
- type: ndcg_at_10
value: 40.892
- type: ndcg_at_100
value: 26.951999999999998
- type: ndcg_at_1000
value: 26.474999999999998
- type: ndcg_at_3
value: 46.821
- type: ndcg_at_5
value: 44.333
- type: precision_at_1
value: 53.251000000000005
- type: precision_at_10
value: 30.124000000000002
- type: precision_at_100
value: 3.012
- type: precision_at_1000
value: 0.301
- type: precision_at_3
value: 43.55
- type: precision_at_5
value: 38.266
- type: recall_at_1
value: 6.920999999999999
- type: recall_at_10
value: 20.852
- type: recall_at_100
value: 20.852
- type: recall_at_1000
value: 20.852
- type: recall_at_3
value: 13.628000000000002
- type: recall_at_5
value: 16.273
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 46.827999999999996
- type: map_at_10
value: 63.434000000000005
- type: map_at_100
value: 63.434000000000005
- type: map_at_1000
value: 63.434000000000005
- type: map_at_3
value: 59.794000000000004
- type: map_at_5
value: 62.08
- type: mrr_at_1
value: 52.288999999999994
- type: mrr_at_10
value: 65.95
- type: mrr_at_100
value: 65.95
- type: mrr_at_1000
value: 65.95
- type: mrr_at_3
value: 63.413
- type: mrr_at_5
value: 65.08
- type: ndcg_at_1
value: 52.288999999999994
- type: ndcg_at_10
value: 70.301
- type: ndcg_at_100
value: 70.301
- type: ndcg_at_1000
value: 70.301
- type: ndcg_at_3
value: 63.979
- type: ndcg_at_5
value: 67.582
- type: precision_at_1
value: 52.288999999999994
- type: precision_at_10
value: 10.576
- type: precision_at_100
value: 1.058
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 28.177000000000003
- type: precision_at_5
value: 19.073
- type: recall_at_1
value: 46.827999999999996
- type: recall_at_10
value: 88.236
- type: recall_at_100
value: 88.236
- type: recall_at_1000
value: 88.236
- type: recall_at_3
value: 72.371
- type: recall_at_5
value: 80.56
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.652
- type: map_at_10
value: 85.953
- type: map_at_100
value: 85.953
- type: map_at_1000
value: 85.953
- type: map_at_3
value: 83.05399999999999
- type: map_at_5
value: 84.89
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.473
- type: mrr_at_100
value: 88.473
- type: mrr_at_1000
value: 88.473
- type: mrr_at_3
value: 87.592
- type: mrr_at_5
value: 88.211
- type: ndcg_at_1
value: 82.44
- type: ndcg_at_10
value: 89.467
- type: ndcg_at_100
value: 89.33
- type: ndcg_at_1000
value: 89.33
- type: ndcg_at_3
value: 86.822
- type: ndcg_at_5
value: 88.307
- type: precision_at_1
value: 82.44
- type: precision_at_10
value: 13.616
- type: precision_at_100
value: 1.362
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 38.117000000000004
- type: precision_at_5
value: 25.05
- type: recall_at_1
value: 71.652
- type: recall_at_10
value: 96.224
- type: recall_at_100
value: 96.224
- type: recall_at_1000
value: 96.224
- type: recall_at_3
value: 88.571
- type: recall_at_5
value: 92.812
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 61.295010338050474
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 67.26380819328142
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.683
- type: map_at_10
value: 14.924999999999999
- type: map_at_100
value: 17.532
- type: map_at_1000
value: 17.875
- type: map_at_3
value: 10.392
- type: map_at_5
value: 12.592
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 39.951
- type: mrr_at_100
value: 41.025
- type: mrr_at_1000
value: 41.056
- type: mrr_at_3
value: 36.317
- type: mrr_at_5
value: 38.412
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.410999999999998
- type: ndcg_at_100
value: 33.79
- type: ndcg_at_1000
value: 39.035
- type: ndcg_at_3
value: 22.845
- type: ndcg_at_5
value: 20.080000000000002
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 12.790000000000001
- type: precision_at_100
value: 2.633
- type: precision_at_1000
value: 0.388
- type: precision_at_3
value: 21.367
- type: precision_at_5
value: 17.7
- type: recall_at_1
value: 5.683
- type: recall_at_10
value: 25.91
- type: recall_at_100
value: 53.443
- type: recall_at_1000
value: 78.73
- type: recall_at_3
value: 13.003
- type: recall_at_5
value: 17.932000000000002
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.677978681023
- type: cos_sim_spearman
value: 83.13093441058189
- type: euclidean_pearson
value: 83.35535759341572
- type: euclidean_spearman
value: 83.42583744219611
- type: manhattan_pearson
value: 83.2243124045889
- type: manhattan_spearman
value: 83.39801618652632
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.68960206569666
- type: cos_sim_spearman
value: 77.3368966488535
- type: euclidean_pearson
value: 77.62828980560303
- type: euclidean_spearman
value: 76.77951481444651
- type: manhattan_pearson
value: 77.88637240839041
- type: manhattan_spearman
value: 77.22157841466188
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.18745821650724
- type: cos_sim_spearman
value: 85.04423285574542
- type: euclidean_pearson
value: 85.46604816931023
- type: euclidean_spearman
value: 85.5230593932974
- type: manhattan_pearson
value: 85.57912805986261
- type: manhattan_spearman
value: 85.65955905111873
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.6715333300355
- type: cos_sim_spearman
value: 82.9058522514908
- type: euclidean_pearson
value: 83.9640357424214
- type: euclidean_spearman
value: 83.60415457472637
- type: manhattan_pearson
value: 84.05621005853469
- type: manhattan_spearman
value: 83.87077724707746
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.82422928098886
- type: cos_sim_spearman
value: 88.12660311894628
- type: euclidean_pearson
value: 87.50974805056555
- type: euclidean_spearman
value: 87.91957275596677
- type: manhattan_pearson
value: 87.74119404878883
- type: manhattan_spearman
value: 88.2808922165719
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.80605838552093
- type: cos_sim_spearman
value: 86.24123388765678
- type: euclidean_pearson
value: 85.32648347339814
- type: euclidean_spearman
value: 85.60046671950158
- type: manhattan_pearson
value: 85.53800168487811
- type: manhattan_spearman
value: 85.89542420480763
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.87540978988132
- type: cos_sim_spearman
value: 90.12715295099461
- type: euclidean_pearson
value: 91.61085993525275
- type: euclidean_spearman
value: 91.31835942311758
- type: manhattan_pearson
value: 91.57500202032934
- type: manhattan_spearman
value: 91.1790925526635
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.87136205329556
- type: cos_sim_spearman
value: 68.6253154635078
- type: euclidean_pearson
value: 68.91536015034222
- type: euclidean_spearman
value: 67.63744649352542
- type: manhattan_pearson
value: 69.2000713045275
- type: manhattan_spearman
value: 68.16002901587316
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.21849551039082
- type: cos_sim_spearman
value: 85.6392959372461
- type: euclidean_pearson
value: 85.92050852609488
- type: euclidean_spearman
value: 85.97205649009734
- type: manhattan_pearson
value: 86.1031154802254
- type: manhattan_spearman
value: 86.26791155517466
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.83953958636627
- type: mrr
value: 96.71167612344082
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 64.994
- type: map_at_10
value: 74.763
- type: map_at_100
value: 75.127
- type: map_at_1000
value: 75.143
- type: map_at_3
value: 71.824
- type: map_at_5
value: 73.71
- type: mrr_at_1
value: 68.333
- type: mrr_at_10
value: 75.749
- type: mrr_at_100
value: 75.922
- type: mrr_at_1000
value: 75.938
- type: mrr_at_3
value: 73.556
- type: mrr_at_5
value: 74.739
- type: ndcg_at_1
value: 68.333
- type: ndcg_at_10
value: 79.174
- type: ndcg_at_100
value: 80.41
- type: ndcg_at_1000
value: 80.804
- type: ndcg_at_3
value: 74.361
- type: ndcg_at_5
value: 76.861
- type: precision_at_1
value: 68.333
- type: precision_at_10
value: 10.333
- type: precision_at_100
value: 1.0999999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.778
- type: precision_at_5
value: 19.067
- type: recall_at_1
value: 64.994
- type: recall_at_10
value: 91.822
- type: recall_at_100
value: 97.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.878
- type: recall_at_5
value: 85.172
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72079207920792
- type: cos_sim_ap
value: 93.00265215525152
- type: cos_sim_f1
value: 85.06596306068602
- type: cos_sim_precision
value: 90.05586592178771
- type: cos_sim_recall
value: 80.60000000000001
- type: dot_accuracy
value: 99.66039603960397
- type: dot_ap
value: 91.22371407479089
- type: dot_f1
value: 82.34693877551021
- type: dot_precision
value: 84.0625
- type: dot_recall
value: 80.7
- type: euclidean_accuracy
value: 99.71881188118812
- type: euclidean_ap
value: 92.88449963304728
- type: euclidean_f1
value: 85.19480519480518
- type: euclidean_precision
value: 88.64864864864866
- type: euclidean_recall
value: 82.0
- type: manhattan_accuracy
value: 99.73267326732673
- type: manhattan_ap
value: 93.23055393056883
- type: manhattan_f1
value: 85.88957055214725
- type: manhattan_precision
value: 87.86610878661088
- type: manhattan_recall
value: 84.0
- type: max_accuracy
value: 99.73267326732673
- type: max_ap
value: 93.23055393056883
- type: max_f1
value: 85.88957055214725
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 77.3305735900358
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 41.32967136540674
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.95514866379359
- type: mrr
value: 56.95423245055598
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.783007208997144
- type: cos_sim_spearman
value: 30.373444721540533
- type: dot_pearson
value: 29.210604111143905
- type: dot_spearman
value: 29.98809758085659
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.234
- type: map_at_10
value: 1.894
- type: map_at_100
value: 1.894
- type: map_at_1000
value: 1.894
- type: map_at_3
value: 0.636
- type: map_at_5
value: 1.0
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.667
- type: mrr_at_100
value: 93.667
- type: mrr_at_1000
value: 93.667
- type: mrr_at_3
value: 93.667
- type: mrr_at_5
value: 93.667
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 74.798
- type: ndcg_at_100
value: 16.462
- type: ndcg_at_1000
value: 7.0889999999999995
- type: ndcg_at_3
value: 80.754
- type: ndcg_at_5
value: 77.319
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 78.0
- type: precision_at_100
value: 7.8
- type: precision_at_1000
value: 0.7799999999999999
- type: precision_at_3
value: 83.333
- type: precision_at_5
value: 80.80000000000001
- type: recall_at_1
value: 0.234
- type: recall_at_10
value: 2.093
- type: recall_at_100
value: 2.093
- type: recall_at_1000
value: 2.093
- type: recall_at_3
value: 0.662
- type: recall_at_5
value: 1.0739999999999998
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.703
- type: map_at_10
value: 10.866000000000001
- type: map_at_100
value: 10.866000000000001
- type: map_at_1000
value: 10.866000000000001
- type: map_at_3
value: 5.909
- type: map_at_5
value: 7.35
- type: mrr_at_1
value: 36.735
- type: mrr_at_10
value: 53.583000000000006
- type: mrr_at_100
value: 53.583000000000006
- type: mrr_at_1000
value: 53.583000000000006
- type: mrr_at_3
value: 49.32
- type: mrr_at_5
value: 51.769
- type: ndcg_at_1
value: 34.694
- type: ndcg_at_10
value: 27.926000000000002
- type: ndcg_at_100
value: 22.701
- type: ndcg_at_1000
value: 22.701
- type: ndcg_at_3
value: 32.073
- type: ndcg_at_5
value: 28.327999999999996
- type: precision_at_1
value: 36.735
- type: precision_at_10
value: 24.694
- type: precision_at_100
value: 2.469
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 31.973000000000003
- type: precision_at_5
value: 26.939
- type: recall_at_1
value: 2.703
- type: recall_at_10
value: 17.702
- type: recall_at_100
value: 17.702
- type: recall_at_1000
value: 17.702
- type: recall_at_3
value: 7.208
- type: recall_at_5
value: 9.748999999999999
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.79960000000001
- type: ap
value: 15.467565415565815
- type: f1
value: 55.28639823443618
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.7792869269949
- type: f1
value: 65.08597154774318
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 55.70352297774293
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.27561542588067
- type: cos_sim_ap
value: 81.08262141256193
- type: cos_sim_f1
value: 73.82341501361338
- type: cos_sim_precision
value: 72.5720112159062
- type: cos_sim_recall
value: 75.11873350923483
- type: dot_accuracy
value: 86.66030875603504
- type: dot_ap
value: 76.6052349228621
- type: dot_f1
value: 70.13897280966768
- type: dot_precision
value: 64.70457079152732
- type: dot_recall
value: 76.56992084432717
- type: euclidean_accuracy
value: 88.37098408535495
- type: euclidean_ap
value: 81.12515230092113
- type: euclidean_f1
value: 74.10338225909379
- type: euclidean_precision
value: 71.76761433868974
- type: euclidean_recall
value: 76.59630606860158
- type: manhattan_accuracy
value: 88.34118137926924
- type: manhattan_ap
value: 80.95751834536561
- type: manhattan_f1
value: 73.9119496855346
- type: manhattan_precision
value: 70.625
- type: manhattan_recall
value: 77.5197889182058
- type: max_accuracy
value: 88.37098408535495
- type: max_ap
value: 81.12515230092113
- type: max_f1
value: 74.10338225909379
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.79896767182831
- type: cos_sim_ap
value: 87.40071784061065
- type: cos_sim_f1
value: 79.87753144712087
- type: cos_sim_precision
value: 76.67304015296367
- type: cos_sim_recall
value: 83.3615645210964
- type: dot_accuracy
value: 88.95486474948578
- type: dot_ap
value: 86.00227979119943
- type: dot_f1
value: 78.54601474525914
- type: dot_precision
value: 75.00525394045535
- type: dot_recall
value: 82.43763473975977
- type: euclidean_accuracy
value: 89.7892653393876
- type: euclidean_ap
value: 87.42174706480819
- type: euclidean_f1
value: 80.07283321194465
- type: euclidean_precision
value: 75.96738529574351
- type: euclidean_recall
value: 84.6473668001232
- type: manhattan_accuracy
value: 89.8474793340319
- type: manhattan_ap
value: 87.47814292587448
- type: manhattan_f1
value: 80.15461150280949
- type: manhattan_precision
value: 74.88798234468
- type: manhattan_recall
value: 86.21804742839544
- type: max_accuracy
value: 89.8474793340319
- type: max_ap
value: 87.47814292587448
- type: max_f1
value: 80.15461150280949
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## GritLM/GritLM-7B - GGUF
This repo contains GGUF format model files for [GritLM/GritLM-7B](https://huggingface.co/GritLM/GritLM-7B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<s><|user|>
{prompt}
<|assistant|>
```
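
For reference, below is a minimal sketch of applying this template with the `llama-cpp-python` bindings. This is an assumption on my part: the card itself only states llama.cpp compatibility, and the file path is a placeholder for whichever quant you downloaded.
```python
# Minimal sketch (assumption): run the GGUF with llama-cpp-python using the prompt template above.
from llama_cpp import Llama

# Hypothetical local path; point it at the quant you actually downloaded.
llm = Llama(model_path="MY_LOCAL_DIR/GritLM-7B-Q4_K_M.gguf", n_ctx=4096)

# Format the request following the card's prompt template.
prompt = "<s><|user|>\nWhat is gradient descent?\n<|assistant|>\n"
output = llm(prompt, max_tokens=256, stop=["<|user|>"])
print(output["choices"][0]["text"])
```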
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [GritLM-7B-Q2_K.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q2_K.gguf) | Q2_K | 2.532 GB | smallest, significant quality loss - not recommended for most purposes |
| [GritLM-7B-Q3_K_S.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q3_K_S.gguf) | Q3_K_S | 2.947 GB | very small, high quality loss |
| [GritLM-7B-Q3_K_M.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q3_K_M.gguf) | Q3_K_M | 3.277 GB | very small, high quality loss |
| [GritLM-7B-Q3_K_L.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q3_K_L.gguf) | Q3_K_L | 3.560 GB | small, substantial quality loss |
| [GritLM-7B-Q4_0.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [GritLM-7B-Q4_K_S.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q4_K_S.gguf) | Q4_K_S | 3.856 GB | small, greater quality loss |
| [GritLM-7B-Q4_K_M.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q4_K_M.gguf) | Q4_K_M | 4.068 GB | medium, balanced quality - recommended |
| [GritLM-7B-Q5_0.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q5_0.gguf) | Q5_0 | 4.654 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [GritLM-7B-Q5_K_S.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q5_K_S.gguf) | Q5_K_S | 4.654 GB | large, low quality loss - recommended |
| [GritLM-7B-Q5_K_M.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q5_K_M.gguf) | Q5_K_M | 4.779 GB | large, very low quality loss - recommended |
| [GritLM-7B-Q6_K.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q6_K.gguf) | Q6_K | 5.534 GB | very large, extremely low quality loss |
| [GritLM-7B-Q8_0.gguf](https://huggingface.co/tensorblock/GritLM-7B-GGUF/blob/main/GritLM-7B-Q8_0.gguf) | Q8_0 | 7.167 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/GritLM-7B-GGUF --include "GritLM-7B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/GritLM-7B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
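
As an illustrative alternative to the CLI commands above, the same files can be fetched from Python with `huggingface_hub`; the filename below is just one entry from the table and can be swapped for any other quant.
```python
# Programmatic alternative to the CLI download above, using huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/GritLM-7B-GGUF",
    filename="GritLM-7B-Q2_K.gguf",  # any filename from the specification table
    local_dir="MY_LOCAL_DIR",
)
print(local_path)
```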
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
tomaarsen/mxbai-embed-large-v1-test | tomaarsen | feature-extraction | [
"sentence-transformers",
"onnx",
"safetensors",
"openvino",
"gguf",
"bert",
"feature-extraction",
"mteb",
"transformers.js",
"transformers",
"en",
"arxiv:2309.12871",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-11T16:14:21 | 2024-11-11T17:03:14 | 59 | 0 | ---
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- mteb
- transformers.js
- transformers
model-index:
- name: mxbai-angle-large-v1
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.044776119403
- type: ap
value: 37.7362433623053
- type: f1
value: 68.92736573359774
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.84025000000001
- type: ap
value: 90.93190875404055
- type: f1
value: 93.8297833897293
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.184
- type: f1
value: 48.74163227751588
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 41.252
- type: map_at_10
value: 57.778
- type: map_at_100
value: 58.233000000000004
- type: map_at_1000
value: 58.23700000000001
- type: map_at_3
value: 53.449999999999996
- type: map_at_5
value: 56.376000000000005
- type: mrr_at_1
value: 41.679
- type: mrr_at_10
value: 57.92699999999999
- type: mrr_at_100
value: 58.389
- type: mrr_at_1000
value: 58.391999999999996
- type: mrr_at_3
value: 53.651
- type: mrr_at_5
value: 56.521
- type: ndcg_at_1
value: 41.252
- type: ndcg_at_10
value: 66.018
- type: ndcg_at_100
value: 67.774
- type: ndcg_at_1000
value: 67.84400000000001
- type: ndcg_at_3
value: 57.372
- type: ndcg_at_5
value: 62.646
- type: precision_at_1
value: 41.252
- type: precision_at_10
value: 9.189
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.902
- type: precision_at_5
value: 16.302
- type: recall_at_1
value: 41.252
- type: recall_at_10
value: 91.892
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 68.706
- type: recall_at_5
value: 81.50800000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.97294504317859
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.98071077674629
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 65.16477858490782
- type: mrr
value: 78.23583080508287
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.6277629421789
- type: cos_sim_spearman
value: 88.4056288400568
- type: euclidean_pearson
value: 87.94871847578163
- type: euclidean_spearman
value: 88.4056288400568
- type: manhattan_pearson
value: 87.73271254229648
- type: manhattan_spearman
value: 87.91826833762677
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.81818181818181
- type: f1
value: 87.79879337316918
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.91773608582761
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.73059477462478
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.745999999999995
- type: map_at_10
value: 43.632
- type: map_at_100
value: 45.206
- type: map_at_1000
value: 45.341
- type: map_at_3
value: 39.956
- type: map_at_5
value: 42.031
- type: mrr_at_1
value: 39.485
- type: mrr_at_10
value: 49.537
- type: mrr_at_100
value: 50.249
- type: mrr_at_1000
value: 50.294000000000004
- type: mrr_at_3
value: 46.757
- type: mrr_at_5
value: 48.481
- type: ndcg_at_1
value: 39.485
- type: ndcg_at_10
value: 50.058
- type: ndcg_at_100
value: 55.586
- type: ndcg_at_1000
value: 57.511
- type: ndcg_at_3
value: 44.786
- type: ndcg_at_5
value: 47.339999999999996
- type: precision_at_1
value: 39.485
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.552
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 21.412
- type: precision_at_5
value: 15.479000000000001
- type: recall_at_1
value: 32.745999999999995
- type: recall_at_10
value: 62.056
- type: recall_at_100
value: 85.088
- type: recall_at_1000
value: 96.952
- type: recall_at_3
value: 46.959
- type: recall_at_5
value: 54.06999999999999
- type: map_at_1
value: 31.898
- type: map_at_10
value: 42.142
- type: map_at_100
value: 43.349
- type: map_at_1000
value: 43.483
- type: map_at_3
value: 39.18
- type: map_at_5
value: 40.733000000000004
- type: mrr_at_1
value: 39.617999999999995
- type: mrr_at_10
value: 47.922
- type: mrr_at_100
value: 48.547000000000004
- type: mrr_at_1000
value: 48.597
- type: mrr_at_3
value: 45.86
- type: mrr_at_5
value: 46.949000000000005
- type: ndcg_at_1
value: 39.617999999999995
- type: ndcg_at_10
value: 47.739
- type: ndcg_at_100
value: 51.934999999999995
- type: ndcg_at_1000
value: 54.007000000000005
- type: ndcg_at_3
value: 43.748
- type: ndcg_at_5
value: 45.345
- type: precision_at_1
value: 39.617999999999995
- type: precision_at_10
value: 8.962
- type: precision_at_100
value: 1.436
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 21.083
- type: precision_at_5
value: 14.752
- type: recall_at_1
value: 31.898
- type: recall_at_10
value: 57.587999999999994
- type: recall_at_100
value: 75.323
- type: recall_at_1000
value: 88.304
- type: recall_at_3
value: 45.275
- type: recall_at_5
value: 49.99
- type: map_at_1
value: 40.458
- type: map_at_10
value: 52.942
- type: map_at_100
value: 53.974
- type: map_at_1000
value: 54.031
- type: map_at_3
value: 49.559999999999995
- type: map_at_5
value: 51.408
- type: mrr_at_1
value: 46.27
- type: mrr_at_10
value: 56.31699999999999
- type: mrr_at_100
value: 56.95099999999999
- type: mrr_at_1000
value: 56.98
- type: mrr_at_3
value: 53.835
- type: mrr_at_5
value: 55.252
- type: ndcg_at_1
value: 46.27
- type: ndcg_at_10
value: 58.964000000000006
- type: ndcg_at_100
value: 62.875
- type: ndcg_at_1000
value: 63.969
- type: ndcg_at_3
value: 53.297000000000004
- type: ndcg_at_5
value: 55.938
- type: precision_at_1
value: 46.27
- type: precision_at_10
value: 9.549000000000001
- type: precision_at_100
value: 1.2409999999999999
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 23.762
- type: precision_at_5
value: 16.262999999999998
- type: recall_at_1
value: 40.458
- type: recall_at_10
value: 73.446
- type: recall_at_100
value: 90.12400000000001
- type: recall_at_1000
value: 97.795
- type: recall_at_3
value: 58.123000000000005
- type: recall_at_5
value: 64.68
- type: map_at_1
value: 27.443
- type: map_at_10
value: 36.081
- type: map_at_100
value: 37.163000000000004
- type: map_at_1000
value: 37.232
- type: map_at_3
value: 33.308
- type: map_at_5
value: 34.724
- type: mrr_at_1
value: 29.492
- type: mrr_at_10
value: 38.138
- type: mrr_at_100
value: 39.065
- type: mrr_at_1000
value: 39.119
- type: mrr_at_3
value: 35.593
- type: mrr_at_5
value: 36.785000000000004
- type: ndcg_at_1
value: 29.492
- type: ndcg_at_10
value: 41.134
- type: ndcg_at_100
value: 46.300999999999995
- type: ndcg_at_1000
value: 48.106
- type: ndcg_at_3
value: 35.77
- type: ndcg_at_5
value: 38.032
- type: precision_at_1
value: 29.492
- type: precision_at_10
value: 6.249
- type: precision_at_100
value: 0.9299999999999999
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 15.065999999999999
- type: precision_at_5
value: 10.373000000000001
- type: recall_at_1
value: 27.443
- type: recall_at_10
value: 54.80199999999999
- type: recall_at_100
value: 78.21900000000001
- type: recall_at_1000
value: 91.751
- type: recall_at_3
value: 40.211000000000006
- type: recall_at_5
value: 45.599000000000004
- type: map_at_1
value: 18.731
- type: map_at_10
value: 26.717999999999996
- type: map_at_100
value: 27.897
- type: map_at_1000
value: 28.029
- type: map_at_3
value: 23.91
- type: map_at_5
value: 25.455
- type: mrr_at_1
value: 23.134
- type: mrr_at_10
value: 31.769
- type: mrr_at_100
value: 32.634
- type: mrr_at_1000
value: 32.707
- type: mrr_at_3
value: 28.938999999999997
- type: mrr_at_5
value: 30.531000000000002
- type: ndcg_at_1
value: 23.134
- type: ndcg_at_10
value: 32.249
- type: ndcg_at_100
value: 37.678
- type: ndcg_at_1000
value: 40.589999999999996
- type: ndcg_at_3
value: 26.985999999999997
- type: ndcg_at_5
value: 29.457
- type: precision_at_1
value: 23.134
- type: precision_at_10
value: 5.8709999999999996
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.852
- type: precision_at_5
value: 9.428
- type: recall_at_1
value: 18.731
- type: recall_at_10
value: 44.419
- type: recall_at_100
value: 67.851
- type: recall_at_1000
value: 88.103
- type: recall_at_3
value: 29.919
- type: recall_at_5
value: 36.230000000000004
- type: map_at_1
value: 30.324
- type: map_at_10
value: 41.265
- type: map_at_100
value: 42.559000000000005
- type: map_at_1000
value: 42.669000000000004
- type: map_at_3
value: 38.138
- type: map_at_5
value: 39.881
- type: mrr_at_1
value: 36.67
- type: mrr_at_10
value: 46.774
- type: mrr_at_100
value: 47.554
- type: mrr_at_1000
value: 47.593
- type: mrr_at_3
value: 44.338
- type: mrr_at_5
value: 45.723
- type: ndcg_at_1
value: 36.67
- type: ndcg_at_10
value: 47.367
- type: ndcg_at_100
value: 52.623
- type: ndcg_at_1000
value: 54.59
- type: ndcg_at_3
value: 42.323
- type: ndcg_at_5
value: 44.727
- type: precision_at_1
value: 36.67
- type: precision_at_10
value: 8.518
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 19.955000000000002
- type: precision_at_5
value: 14.11
- type: recall_at_1
value: 30.324
- type: recall_at_10
value: 59.845000000000006
- type: recall_at_100
value: 81.77499999999999
- type: recall_at_1000
value: 94.463
- type: recall_at_3
value: 46.019
- type: recall_at_5
value: 52.163000000000004
- type: map_at_1
value: 24.229
- type: map_at_10
value: 35.004000000000005
- type: map_at_100
value: 36.409000000000006
- type: map_at_1000
value: 36.521
- type: map_at_3
value: 31.793
- type: map_at_5
value: 33.432
- type: mrr_at_1
value: 30.365
- type: mrr_at_10
value: 40.502
- type: mrr_at_100
value: 41.372
- type: mrr_at_1000
value: 41.435
- type: mrr_at_3
value: 37.804
- type: mrr_at_5
value: 39.226
- type: ndcg_at_1
value: 30.365
- type: ndcg_at_10
value: 41.305
- type: ndcg_at_100
value: 47.028999999999996
- type: ndcg_at_1000
value: 49.375
- type: ndcg_at_3
value: 35.85
- type: ndcg_at_5
value: 38.12
- type: precision_at_1
value: 30.365
- type: precision_at_10
value: 7.808
- type: precision_at_100
value: 1.228
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 17.352
- type: precision_at_5
value: 12.42
- type: recall_at_1
value: 24.229
- type: recall_at_10
value: 54.673
- type: recall_at_100
value: 78.766
- type: recall_at_1000
value: 94.625
- type: recall_at_3
value: 39.602
- type: recall_at_5
value: 45.558
- type: map_at_1
value: 26.695
- type: map_at_10
value: 36.0895
- type: map_at_100
value: 37.309416666666664
- type: map_at_1000
value: 37.42558333333334
- type: map_at_3
value: 33.19616666666666
- type: map_at_5
value: 34.78641666666667
- type: mrr_at_1
value: 31.486083333333337
- type: mrr_at_10
value: 40.34774999999999
- type: mrr_at_100
value: 41.17533333333333
- type: mrr_at_1000
value: 41.231583333333326
- type: mrr_at_3
value: 37.90075
- type: mrr_at_5
value: 39.266999999999996
- type: ndcg_at_1
value: 31.486083333333337
- type: ndcg_at_10
value: 41.60433333333334
- type: ndcg_at_100
value: 46.74525
- type: ndcg_at_1000
value: 48.96166666666667
- type: ndcg_at_3
value: 36.68825
- type: ndcg_at_5
value: 38.966499999999996
- type: precision_at_1
value: 31.486083333333337
- type: precision_at_10
value: 7.29675
- type: precision_at_100
value: 1.1621666666666666
- type: precision_at_1000
value: 0.1545
- type: precision_at_3
value: 16.8815
- type: precision_at_5
value: 11.974583333333333
- type: recall_at_1
value: 26.695
- type: recall_at_10
value: 53.651916666666665
- type: recall_at_100
value: 76.12083333333332
- type: recall_at_1000
value: 91.31191666666668
- type: recall_at_3
value: 40.03575
- type: recall_at_5
value: 45.876666666666665
- type: map_at_1
value: 25.668000000000003
- type: map_at_10
value: 32.486
- type: map_at_100
value: 33.371
- type: map_at_1000
value: 33.458
- type: map_at_3
value: 30.261
- type: map_at_5
value: 31.418000000000003
- type: mrr_at_1
value: 28.988000000000003
- type: mrr_at_10
value: 35.414
- type: mrr_at_100
value: 36.149
- type: mrr_at_1000
value: 36.215
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 34.43
- type: ndcg_at_1
value: 28.988000000000003
- type: ndcg_at_10
value: 36.732
- type: ndcg_at_100
value: 41.331
- type: ndcg_at_1000
value: 43.575
- type: ndcg_at_3
value: 32.413
- type: ndcg_at_5
value: 34.316
- type: precision_at_1
value: 28.988000000000003
- type: precision_at_10
value: 5.7059999999999995
- type: precision_at_100
value: 0.882
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 13.65
- type: precision_at_5
value: 9.417
- type: recall_at_1
value: 25.668000000000003
- type: recall_at_10
value: 47.147
- type: recall_at_100
value: 68.504
- type: recall_at_1000
value: 85.272
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 39.925
- type: map_at_1
value: 17.256
- type: map_at_10
value: 24.58
- type: map_at_100
value: 25.773000000000003
- type: map_at_1000
value: 25.899
- type: map_at_3
value: 22.236
- type: map_at_5
value: 23.507
- type: mrr_at_1
value: 20.957
- type: mrr_at_10
value: 28.416000000000004
- type: mrr_at_100
value: 29.447000000000003
- type: mrr_at_1000
value: 29.524
- type: mrr_at_3
value: 26.245
- type: mrr_at_5
value: 27.451999999999998
- type: ndcg_at_1
value: 20.957
- type: ndcg_at_10
value: 29.285
- type: ndcg_at_100
value: 35.003
- type: ndcg_at_1000
value: 37.881
- type: ndcg_at_3
value: 25.063000000000002
- type: ndcg_at_5
value: 26.983
- type: precision_at_1
value: 20.957
- type: precision_at_10
value: 5.344
- type: precision_at_100
value: 0.958
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 11.918
- type: precision_at_5
value: 8.596
- type: recall_at_1
value: 17.256
- type: recall_at_10
value: 39.644
- type: recall_at_100
value: 65.279
- type: recall_at_1000
value: 85.693
- type: recall_at_3
value: 27.825
- type: recall_at_5
value: 32.792
- type: map_at_1
value: 26.700000000000003
- type: map_at_10
value: 36.205999999999996
- type: map_at_100
value: 37.316
- type: map_at_1000
value: 37.425000000000004
- type: map_at_3
value: 33.166000000000004
- type: map_at_5
value: 35.032999999999994
- type: mrr_at_1
value: 31.436999999999998
- type: mrr_at_10
value: 40.61
- type: mrr_at_100
value: 41.415
- type: mrr_at_1000
value: 41.48
- type: mrr_at_3
value: 37.966
- type: mrr_at_5
value: 39.599000000000004
- type: ndcg_at_1
value: 31.436999999999998
- type: ndcg_at_10
value: 41.771
- type: ndcg_at_100
value: 46.784
- type: ndcg_at_1000
value: 49.183
- type: ndcg_at_3
value: 36.437000000000005
- type: ndcg_at_5
value: 39.291
- type: precision_at_1
value: 31.436999999999998
- type: precision_at_10
value: 6.987
- type: precision_at_100
value: 1.072
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.448999999999998
- type: precision_at_5
value: 11.866
- type: recall_at_1
value: 26.700000000000003
- type: recall_at_10
value: 54.301
- type: recall_at_100
value: 75.871
- type: recall_at_1000
value: 92.529
- type: recall_at_3
value: 40.201
- type: recall_at_5
value: 47.208
- type: map_at_1
value: 24.296
- type: map_at_10
value: 33.116
- type: map_at_100
value: 34.81
- type: map_at_1000
value: 35.032000000000004
- type: map_at_3
value: 30.105999999999998
- type: map_at_5
value: 31.839000000000002
- type: mrr_at_1
value: 29.051
- type: mrr_at_10
value: 37.803
- type: mrr_at_100
value: 38.856
- type: mrr_at_1000
value: 38.903999999999996
- type: mrr_at_3
value: 35.211
- type: mrr_at_5
value: 36.545
- type: ndcg_at_1
value: 29.051
- type: ndcg_at_10
value: 39.007
- type: ndcg_at_100
value: 45.321
- type: ndcg_at_1000
value: 47.665
- type: ndcg_at_3
value: 34.1
- type: ndcg_at_5
value: 36.437000000000005
- type: precision_at_1
value: 29.051
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.542
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.14
- type: precision_at_5
value: 11.897
- type: recall_at_1
value: 24.296
- type: recall_at_10
value: 49.85
- type: recall_at_100
value: 78.457
- type: recall_at_1000
value: 92.618
- type: recall_at_3
value: 36.138999999999996
- type: recall_at_5
value: 42.223
- type: map_at_1
value: 20.591
- type: map_at_10
value: 28.902
- type: map_at_100
value: 29.886000000000003
- type: map_at_1000
value: 29.987000000000002
- type: map_at_3
value: 26.740000000000002
- type: map_at_5
value: 27.976
- type: mrr_at_1
value: 22.366
- type: mrr_at_10
value: 30.971
- type: mrr_at_100
value: 31.865
- type: mrr_at_1000
value: 31.930999999999997
- type: mrr_at_3
value: 28.927999999999997
- type: mrr_at_5
value: 30.231
- type: ndcg_at_1
value: 22.366
- type: ndcg_at_10
value: 33.641
- type: ndcg_at_100
value: 38.477
- type: ndcg_at_1000
value: 41.088
- type: ndcg_at_3
value: 29.486
- type: ndcg_at_5
value: 31.612000000000002
- type: precision_at_1
value: 22.366
- type: precision_at_10
value: 5.3420000000000005
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 12.939
- type: precision_at_5
value: 9.094
- type: recall_at_1
value: 20.591
- type: recall_at_10
value: 46.052
- type: recall_at_100
value: 68.193
- type: recall_at_1000
value: 87.638
- type: recall_at_3
value: 34.966
- type: recall_at_5
value: 40.082
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.091
- type: map_at_10
value: 26.38
- type: map_at_100
value: 28.421999999999997
- type: map_at_1000
value: 28.621999999999996
- type: map_at_3
value: 21.597
- type: map_at_5
value: 24.12
- type: mrr_at_1
value: 34.266999999999996
- type: mrr_at_10
value: 46.864
- type: mrr_at_100
value: 47.617
- type: mrr_at_1000
value: 47.644
- type: mrr_at_3
value: 43.312
- type: mrr_at_5
value: 45.501000000000005
- type: ndcg_at_1
value: 34.266999999999996
- type: ndcg_at_10
value: 36.095
- type: ndcg_at_100
value: 43.447
- type: ndcg_at_1000
value: 46.661
- type: ndcg_at_3
value: 29.337999999999997
- type: ndcg_at_5
value: 31.824
- type: precision_at_1
value: 34.266999999999996
- type: precision_at_10
value: 11.472
- type: precision_at_100
value: 1.944
- type: precision_at_1000
value: 0.255
- type: precision_at_3
value: 21.933
- type: precision_at_5
value: 17.224999999999998
- type: recall_at_1
value: 15.091
- type: recall_at_10
value: 43.022
- type: recall_at_100
value: 68.075
- type: recall_at_1000
value: 85.76
- type: recall_at_3
value: 26.564
- type: recall_at_5
value: 33.594
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.252
- type: map_at_10
value: 20.923
- type: map_at_100
value: 30.741000000000003
- type: map_at_1000
value: 32.542
- type: map_at_3
value: 14.442
- type: map_at_5
value: 17.399
- type: mrr_at_1
value: 70.25
- type: mrr_at_10
value: 78.17
- type: mrr_at_100
value: 78.444
- type: mrr_at_1000
value: 78.45100000000001
- type: mrr_at_3
value: 76.958
- type: mrr_at_5
value: 77.571
- type: ndcg_at_1
value: 58.375
- type: ndcg_at_10
value: 44.509
- type: ndcg_at_100
value: 49.897999999999996
- type: ndcg_at_1000
value: 57.269999999999996
- type: ndcg_at_3
value: 48.64
- type: ndcg_at_5
value: 46.697
- type: precision_at_1
value: 70.25
- type: precision_at_10
value: 36.05
- type: precision_at_100
value: 11.848
- type: precision_at_1000
value: 2.213
- type: precision_at_3
value: 52.917
- type: precision_at_5
value: 45.7
- type: recall_at_1
value: 9.252
- type: recall_at_10
value: 27.006999999999998
- type: recall_at_100
value: 57.008
- type: recall_at_1000
value: 80.697
- type: recall_at_3
value: 15.798000000000002
- type: recall_at_5
value: 20.4
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 50.88
- type: f1
value: 45.545495028653384
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 75.424
- type: map_at_10
value: 83.435
- type: map_at_100
value: 83.66900000000001
- type: map_at_1000
value: 83.685
- type: map_at_3
value: 82.39800000000001
- type: map_at_5
value: 83.07
- type: mrr_at_1
value: 81.113
- type: mrr_at_10
value: 87.77199999999999
- type: mrr_at_100
value: 87.862
- type: mrr_at_1000
value: 87.86500000000001
- type: mrr_at_3
value: 87.17099999999999
- type: mrr_at_5
value: 87.616
- type: ndcg_at_1
value: 81.113
- type: ndcg_at_10
value: 86.909
- type: ndcg_at_100
value: 87.746
- type: ndcg_at_1000
value: 88.017
- type: ndcg_at_3
value: 85.368
- type: ndcg_at_5
value: 86.28099999999999
- type: precision_at_1
value: 81.113
- type: precision_at_10
value: 10.363
- type: precision_at_100
value: 1.102
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 32.507999999999996
- type: precision_at_5
value: 20.138
- type: recall_at_1
value: 75.424
- type: recall_at_10
value: 93.258
- type: recall_at_100
value: 96.545
- type: recall_at_1000
value: 98.284
- type: recall_at_3
value: 89.083
- type: recall_at_5
value: 91.445
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.532
- type: map_at_10
value: 37.141999999999996
- type: map_at_100
value: 39.162
- type: map_at_1000
value: 39.322
- type: map_at_3
value: 32.885
- type: map_at_5
value: 35.093999999999994
- type: mrr_at_1
value: 44.29
- type: mrr_at_10
value: 53.516
- type: mrr_at_100
value: 54.24
- type: mrr_at_1000
value: 54.273
- type: mrr_at_3
value: 51.286
- type: mrr_at_5
value: 52.413
- type: ndcg_at_1
value: 44.29
- type: ndcg_at_10
value: 45.268
- type: ndcg_at_100
value: 52.125
- type: ndcg_at_1000
value: 54.778000000000006
- type: ndcg_at_3
value: 41.829
- type: ndcg_at_5
value: 42.525
- type: precision_at_1
value: 44.29
- type: precision_at_10
value: 12.5
- type: precision_at_100
value: 1.9720000000000002
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 28.035
- type: precision_at_5
value: 20.093
- type: recall_at_1
value: 22.532
- type: recall_at_10
value: 52.419000000000004
- type: recall_at_100
value: 77.43299999999999
- type: recall_at_1000
value: 93.379
- type: recall_at_3
value: 38.629000000000005
- type: recall_at_5
value: 43.858000000000004
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.359
- type: map_at_10
value: 63.966
- type: map_at_100
value: 64.87
- type: map_at_1000
value: 64.92599999999999
- type: map_at_3
value: 60.409
- type: map_at_5
value: 62.627
- type: mrr_at_1
value: 78.717
- type: mrr_at_10
value: 84.468
- type: mrr_at_100
value: 84.655
- type: mrr_at_1000
value: 84.661
- type: mrr_at_3
value: 83.554
- type: mrr_at_5
value: 84.133
- type: ndcg_at_1
value: 78.717
- type: ndcg_at_10
value: 72.03399999999999
- type: ndcg_at_100
value: 75.158
- type: ndcg_at_1000
value: 76.197
- type: ndcg_at_3
value: 67.049
- type: ndcg_at_5
value: 69.808
- type: precision_at_1
value: 78.717
- type: precision_at_10
value: 15.201
- type: precision_at_100
value: 1.764
- type: precision_at_1000
value: 0.19
- type: precision_at_3
value: 43.313
- type: precision_at_5
value: 28.165000000000003
- type: recall_at_1
value: 39.359
- type: recall_at_10
value: 76.003
- type: recall_at_100
value: 88.197
- type: recall_at_1000
value: 95.003
- type: recall_at_3
value: 64.97
- type: recall_at_5
value: 70.41199999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 92.83200000000001
- type: ap
value: 89.33560571859861
- type: f1
value: 92.82322915005167
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.983
- type: map_at_10
value: 34.259
- type: map_at_100
value: 35.432
- type: map_at_1000
value: 35.482
- type: map_at_3
value: 30.275999999999996
- type: map_at_5
value: 32.566
- type: mrr_at_1
value: 22.579
- type: mrr_at_10
value: 34.882999999999996
- type: mrr_at_100
value: 35.984
- type: mrr_at_1000
value: 36.028
- type: mrr_at_3
value: 30.964999999999996
- type: mrr_at_5
value: 33.245000000000005
- type: ndcg_at_1
value: 22.564
- type: ndcg_at_10
value: 41.258
- type: ndcg_at_100
value: 46.824
- type: ndcg_at_1000
value: 48.037
- type: ndcg_at_3
value: 33.17
- type: ndcg_at_5
value: 37.263000000000005
- type: precision_at_1
value: 22.564
- type: precision_at_10
value: 6.572
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.130999999999998
- type: precision_at_5
value: 10.544
- type: recall_at_1
value: 21.983
- type: recall_at_10
value: 62.775000000000006
- type: recall_at_100
value: 88.389
- type: recall_at_1000
value: 97.603
- type: recall_at_3
value: 40.878
- type: recall_at_5
value: 50.690000000000005
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.95120839033288
- type: f1
value: 93.73824125055208
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.78978568171455
- type: f1
value: 57.50180552858304
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.24411566913248
- type: f1
value: 74.37851403532832
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.94620040349699
- type: f1
value: 80.21293397970435
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.44403096245675
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.659594631336812
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.53833075108798
- type: mrr
value: 33.78840823218308
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.185999999999999
- type: map_at_10
value: 15.193999999999999
- type: map_at_100
value: 19.538
- type: map_at_1000
value: 21.178
- type: map_at_3
value: 11.208
- type: map_at_5
value: 12.745999999999999
- type: mrr_at_1
value: 48.916
- type: mrr_at_10
value: 58.141
- type: mrr_at_100
value: 58.656
- type: mrr_at_1000
value: 58.684999999999995
- type: mrr_at_3
value: 55.521
- type: mrr_at_5
value: 57.239
- type: ndcg_at_1
value: 47.059
- type: ndcg_at_10
value: 38.644
- type: ndcg_at_100
value: 36.272999999999996
- type: ndcg_at_1000
value: 44.996
- type: ndcg_at_3
value: 43.293
- type: ndcg_at_5
value: 40.819
- type: precision_at_1
value: 48.916
- type: precision_at_10
value: 28.607
- type: precision_at_100
value: 9.195
- type: precision_at_1000
value: 2.225
- type: precision_at_3
value: 40.454
- type: precision_at_5
value: 34.985
- type: recall_at_1
value: 7.185999999999999
- type: recall_at_10
value: 19.654
- type: recall_at_100
value: 37.224000000000004
- type: recall_at_1000
value: 68.663
- type: recall_at_3
value: 12.158
- type: recall_at_5
value: 14.674999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.552000000000003
- type: map_at_10
value: 47.75
- type: map_at_100
value: 48.728
- type: map_at_1000
value: 48.754
- type: map_at_3
value: 43.156
- type: map_at_5
value: 45.883
- type: mrr_at_1
value: 35.66
- type: mrr_at_10
value: 50.269
- type: mrr_at_100
value: 50.974
- type: mrr_at_1000
value: 50.991
- type: mrr_at_3
value: 46.519
- type: mrr_at_5
value: 48.764
- type: ndcg_at_1
value: 35.632000000000005
- type: ndcg_at_10
value: 55.786
- type: ndcg_at_100
value: 59.748999999999995
- type: ndcg_at_1000
value: 60.339
- type: ndcg_at_3
value: 47.292
- type: ndcg_at_5
value: 51.766999999999996
- type: precision_at_1
value: 35.632000000000005
- type: precision_at_10
value: 9.267
- type: precision_at_100
value: 1.149
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 21.601
- type: precision_at_5
value: 15.539
- type: recall_at_1
value: 31.552000000000003
- type: recall_at_10
value: 77.62400000000001
- type: recall_at_100
value: 94.527
- type: recall_at_1000
value: 98.919
- type: recall_at_3
value: 55.898
- type: recall_at_5
value: 66.121
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.414
- type: map_at_10
value: 85.37400000000001
- type: map_at_100
value: 86.01100000000001
- type: map_at_1000
value: 86.027
- type: map_at_3
value: 82.562
- type: map_at_5
value: 84.284
- type: mrr_at_1
value: 82.24000000000001
- type: mrr_at_10
value: 88.225
- type: mrr_at_100
value: 88.324
- type: mrr_at_1000
value: 88.325
- type: mrr_at_3
value: 87.348
- type: mrr_at_5
value: 87.938
- type: ndcg_at_1
value: 82.24000000000001
- type: ndcg_at_10
value: 88.97699999999999
- type: ndcg_at_100
value: 90.16
- type: ndcg_at_1000
value: 90.236
- type: ndcg_at_3
value: 86.371
- type: ndcg_at_5
value: 87.746
- type: precision_at_1
value: 82.24000000000001
- type: precision_at_10
value: 13.481000000000002
- type: precision_at_100
value: 1.534
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.86
- type: precision_at_5
value: 24.738
- type: recall_at_1
value: 71.414
- type: recall_at_10
value: 95.735
- type: recall_at_100
value: 99.696
- type: recall_at_1000
value: 99.979
- type: recall_at_3
value: 88.105
- type: recall_at_5
value: 92.17999999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 60.22146692057259
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 65.29273320614578
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.023
- type: map_at_10
value: 14.161000000000001
- type: map_at_100
value: 16.68
- type: map_at_1000
value: 17.072000000000003
- type: map_at_3
value: 9.763
- type: map_at_5
value: 11.977
- type: mrr_at_1
value: 24.8
- type: mrr_at_10
value: 37.602999999999994
- type: mrr_at_100
value: 38.618
- type: mrr_at_1000
value: 38.659
- type: mrr_at_3
value: 34.117
- type: mrr_at_5
value: 36.082
- type: ndcg_at_1
value: 24.8
- type: ndcg_at_10
value: 23.316
- type: ndcg_at_100
value: 32.613
- type: ndcg_at_1000
value: 38.609
- type: ndcg_at_3
value: 21.697
- type: ndcg_at_5
value: 19.241
- type: precision_at_1
value: 24.8
- type: precision_at_10
value: 12.36
- type: precision_at_100
value: 2.593
- type: precision_at_1000
value: 0.402
- type: precision_at_3
value: 20.767
- type: precision_at_5
value: 17.34
- type: recall_at_1
value: 5.023
- type: recall_at_10
value: 25.069999999999997
- type: recall_at_100
value: 52.563
- type: recall_at_1000
value: 81.525
- type: recall_at_3
value: 12.613
- type: recall_at_5
value: 17.583
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 87.71506247604255
- type: cos_sim_spearman
value: 82.91813463738802
- type: euclidean_pearson
value: 85.5154616194479
- type: euclidean_spearman
value: 82.91815254466314
- type: manhattan_pearson
value: 85.5280917850374
- type: manhattan_spearman
value: 82.92276537286398
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.43772054228462
- type: cos_sim_spearman
value: 78.75750601716682
- type: euclidean_pearson
value: 85.76074482955764
- type: euclidean_spearman
value: 78.75651057223058
- type: manhattan_pearson
value: 85.73390291701668
- type: manhattan_spearman
value: 78.72699385957797
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 89.58144067172472
- type: cos_sim_spearman
value: 90.3524512966946
- type: euclidean_pearson
value: 89.71365391594237
- type: euclidean_spearman
value: 90.35239632843408
- type: manhattan_pearson
value: 89.66905421746478
- type: manhattan_spearman
value: 90.31508211683513
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 87.77692637102102
- type: cos_sim_spearman
value: 85.45710562643485
- type: euclidean_pearson
value: 87.42456979928723
- type: euclidean_spearman
value: 85.45709386240908
- type: manhattan_pearson
value: 87.40754529526272
- type: manhattan_spearman
value: 85.44834854173303
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.28491331695997
- type: cos_sim_spearman
value: 89.62037029566964
- type: euclidean_pearson
value: 89.02479391362826
- type: euclidean_spearman
value: 89.62036733618466
- type: manhattan_pearson
value: 89.00394756040342
- type: manhattan_spearman
value: 89.60867744215236
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.08911381280191
- type: cos_sim_spearman
value: 86.5791780765767
- type: euclidean_pearson
value: 86.16063473577861
- type: euclidean_spearman
value: 86.57917745378766
- type: manhattan_pearson
value: 86.13677924604175
- type: manhattan_spearman
value: 86.56115615768685
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.58029496205235
- type: cos_sim_spearman
value: 89.49551253826998
- type: euclidean_pearson
value: 90.13714840963748
- type: euclidean_spearman
value: 89.49551253826998
- type: manhattan_pearson
value: 90.13039633601363
- type: manhattan_spearman
value: 89.4513453745516
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 69.01546399666435
- type: cos_sim_spearman
value: 69.33824484595624
- type: euclidean_pearson
value: 70.76511642998874
- type: euclidean_spearman
value: 69.33824484595624
- type: manhattan_pearson
value: 70.84320785047453
- type: manhattan_spearman
value: 69.54233632223537
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.26389196390119
- type: cos_sim_spearman
value: 89.09721478341385
- type: euclidean_pearson
value: 88.97208685922517
- type: euclidean_spearman
value: 89.09720927308881
- type: manhattan_pearson
value: 88.97513670502573
- type: manhattan_spearman
value: 89.07647853984004
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.53075025771936
- type: mrr
value: 96.24327651288436
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.428000000000004
- type: map_at_10
value: 70.088
- type: map_at_100
value: 70.589
- type: map_at_1000
value: 70.614
- type: map_at_3
value: 67.191
- type: map_at_5
value: 68.515
- type: mrr_at_1
value: 63.333
- type: mrr_at_10
value: 71.13000000000001
- type: mrr_at_100
value: 71.545
- type: mrr_at_1000
value: 71.569
- type: mrr_at_3
value: 68.944
- type: mrr_at_5
value: 70.078
- type: ndcg_at_1
value: 63.333
- type: ndcg_at_10
value: 74.72800000000001
- type: ndcg_at_100
value: 76.64999999999999
- type: ndcg_at_1000
value: 77.176
- type: ndcg_at_3
value: 69.659
- type: ndcg_at_5
value: 71.626
- type: precision_at_1
value: 63.333
- type: precision_at_10
value: 10
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.111
- type: precision_at_5
value: 17.666999999999998
- type: recall_at_1
value: 60.428000000000004
- type: recall_at_10
value: 87.98899999999999
- type: recall_at_100
value: 96.167
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 74.006
- type: recall_at_5
value: 79.05
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.87326732673267
- type: cos_sim_ap
value: 96.81770773701805
- type: cos_sim_f1
value: 93.6318407960199
- type: cos_sim_precision
value: 93.16831683168317
- type: cos_sim_recall
value: 94.1
- type: dot_accuracy
value: 99.87326732673267
- type: dot_ap
value: 96.8174218946665
- type: dot_f1
value: 93.6318407960199
- type: dot_precision
value: 93.16831683168317
- type: dot_recall
value: 94.1
- type: euclidean_accuracy
value: 99.87326732673267
- type: euclidean_ap
value: 96.81770773701807
- type: euclidean_f1
value: 93.6318407960199
- type: euclidean_precision
value: 93.16831683168317
- type: euclidean_recall
value: 94.1
- type: manhattan_accuracy
value: 99.87227722772278
- type: manhattan_ap
value: 96.83164126821747
- type: manhattan_f1
value: 93.54677338669335
- type: manhattan_precision
value: 93.5935935935936
- type: manhattan_recall
value: 93.5
- type: max_accuracy
value: 99.87326732673267
- type: max_ap
value: 96.83164126821747
- type: max_f1
value: 93.6318407960199
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.6212042420246
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.779230635982564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.217701909036286
- type: mrr
value: 56.17658995416349
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.954206018888453
- type: cos_sim_spearman
value: 32.71062599450096
- type: dot_pearson
value: 30.95420929056943
- type: dot_spearman
value: 32.71062599450096
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22699999999999998
- type: map_at_10
value: 1.924
- type: map_at_100
value: 10.525
- type: map_at_1000
value: 24.973
- type: map_at_3
value: 0.638
- type: map_at_5
value: 1.0659999999999998
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 91.067
- type: mrr_at_100
value: 91.067
- type: mrr_at_1000
value: 91.067
- type: mrr_at_3
value: 90.667
- type: mrr_at_5
value: 91.067
- type: ndcg_at_1
value: 81
- type: ndcg_at_10
value: 75.566
- type: ndcg_at_100
value: 56.387
- type: ndcg_at_1000
value: 49.834
- type: ndcg_at_3
value: 80.899
- type: ndcg_at_5
value: 80.75099999999999
- type: precision_at_1
value: 84
- type: precision_at_10
value: 79
- type: precision_at_100
value: 57.56
- type: precision_at_1000
value: 21.8
- type: precision_at_3
value: 84.667
- type: precision_at_5
value: 85.2
- type: recall_at_1
value: 0.22699999999999998
- type: recall_at_10
value: 2.136
- type: recall_at_100
value: 13.861
- type: recall_at_1000
value: 46.299
- type: recall_at_3
value: 0.6649999999999999
- type: recall_at_5
value: 1.145
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.752
- type: map_at_10
value: 9.951
- type: map_at_100
value: 16.794999999999998
- type: map_at_1000
value: 18.251
- type: map_at_3
value: 5.288
- type: map_at_5
value: 6.954000000000001
- type: mrr_at_1
value: 38.775999999999996
- type: mrr_at_10
value: 50.458000000000006
- type: mrr_at_100
value: 51.324999999999996
- type: mrr_at_1000
value: 51.339999999999996
- type: mrr_at_3
value: 46.939
- type: mrr_at_5
value: 47.857
- type: ndcg_at_1
value: 36.735
- type: ndcg_at_10
value: 25.198999999999998
- type: ndcg_at_100
value: 37.938
- type: ndcg_at_1000
value: 49.145
- type: ndcg_at_3
value: 29.348000000000003
- type: ndcg_at_5
value: 25.804
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_10
value: 22.041
- type: precision_at_100
value: 7.939
- type: precision_at_1000
value: 1.555
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.752
- type: recall_at_10
value: 16.197
- type: recall_at_100
value: 49.166
- type: recall_at_1000
value: 84.18900000000001
- type: recall_at_3
value: 6.438000000000001
- type: recall_at_5
value: 9.093
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.47980000000001
- type: ap
value: 14.605194452178754
- type: f1
value: 55.07362924988948
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.708545557441994
- type: f1
value: 60.04751270975683
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.21105960597211
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.58419264469214
- type: cos_sim_ap
value: 78.55300004517404
- type: cos_sim_f1
value: 71.49673530889001
- type: cos_sim_precision
value: 68.20795400095831
- type: cos_sim_recall
value: 75.11873350923483
- type: dot_accuracy
value: 87.58419264469214
- type: dot_ap
value: 78.55297659559511
- type: dot_f1
value: 71.49673530889001
- type: dot_precision
value: 68.20795400095831
- type: dot_recall
value: 75.11873350923483
- type: euclidean_accuracy
value: 87.58419264469214
- type: euclidean_ap
value: 78.55300477331477
- type: euclidean_f1
value: 71.49673530889001
- type: euclidean_precision
value: 68.20795400095831
- type: euclidean_recall
value: 75.11873350923483
- type: manhattan_accuracy
value: 87.5663110210407
- type: manhattan_ap
value: 78.49982050876562
- type: manhattan_f1
value: 71.35488740722104
- type: manhattan_precision
value: 68.18946862226497
- type: manhattan_recall
value: 74.82849604221636
- type: max_accuracy
value: 87.58419264469214
- type: max_ap
value: 78.55300477331477
- type: max_f1
value: 71.49673530889001
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.09069740365584
- type: cos_sim_ap
value: 86.22749303724757
- type: cos_sim_f1
value: 78.36863452005407
- type: cos_sim_precision
value: 76.49560117302053
- type: cos_sim_recall
value: 80.33569448721897
- type: dot_accuracy
value: 89.09069740365584
- type: dot_ap
value: 86.22750233655673
- type: dot_f1
value: 78.36863452005407
- type: dot_precision
value: 76.49560117302053
- type: dot_recall
value: 80.33569448721897
- type: euclidean_accuracy
value: 89.09069740365584
- type: euclidean_ap
value: 86.22749355597347
- type: euclidean_f1
value: 78.36863452005407
- type: euclidean_precision
value: 76.49560117302053
- type: euclidean_recall
value: 80.33569448721897
- type: manhattan_accuracy
value: 89.08293553770326
- type: manhattan_ap
value: 86.21913616084771
- type: manhattan_f1
value: 78.3907031479847
- type: manhattan_precision
value: 75.0352013517319
- type: manhattan_recall
value: 82.06036341238065
- type: max_accuracy
value: 89.09069740365584
- type: max_ap
value: 86.22750233655673
- type: max_f1
value: 78.3907031479847
---
<br><br>
<p align="center">
<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" viewBox="0 0 2020 1130" width="150" height="150" aria-hidden="true"><path fill="#e95a0f" d="M398.167 621.992c-1.387-20.362-4.092-40.739-3.851-61.081.355-30.085 6.873-59.139 21.253-85.976 10.487-19.573 24.09-36.822 40.662-51.515 16.394-14.535 34.338-27.046 54.336-36.182 15.224-6.955 31.006-12.609 47.829-14.168 11.809-1.094 23.753-2.514 35.524-1.836 23.033 1.327 45.131 7.255 66.255 16.75 16.24 7.3 31.497 16.165 45.651 26.969 12.997 9.921 24.412 21.37 34.158 34.509 11.733 15.817 20.849 33.037 25.987 52.018 3.468 12.81 6.438 25.928 7.779 39.097 1.722 16.908 1.642 34.003 2.235 51.021.427 12.253.224 24.547 1.117 36.762 1.677 22.93 4.062 45.764 11.8 67.7 5.376 15.239 12.499 29.55 20.846 43.681l-18.282 20.328c-1.536 1.71-2.795 3.665-4.254 5.448l-19.323 23.533c-13.859-5.449-27.446-11.803-41.657-16.086-13.622-4.106-27.793-6.765-41.905-8.775-15.256-2.173-30.701-3.475-46.105-4.049-23.571-.879-47.178-1.056-70.769-1.029-10.858.013-21.723 1.116-32.57 1.926-5.362.4-10.69 1.255-16.464 1.477-2.758-7.675-5.284-14.865-7.367-22.181-3.108-10.92-4.325-22.554-13.16-31.095-2.598-2.512-5.069-5.341-6.883-8.443-6.366-10.884-12.48-21.917-18.571-32.959-4.178-7.573-8.411-14.375-17.016-18.559-10.34-5.028-19.538-12.387-29.311-18.611-3.173-2.021-6.414-4.312-9.952-5.297-5.857-1.63-11.98-2.301-17.991-3.376z"></path><path fill="#ed6d7b" d="M1478.998 758.842c-12.025.042-24.05.085-36.537-.373-.14-8.536.231-16.569.453-24.607.033-1.179-.315-2.986-1.081-3.4-.805-.434-2.376.338-3.518.81-.856.354-1.562 1.069-3.589 2.521-.239-3.308-.664-5.586-.519-7.827.488-7.544 2.212-15.166 1.554-22.589-1.016-11.451 1.397-14.592-12.332-14.419-3.793.048-3.617-2.803-3.332-5.331.499-4.422 1.45-8.803 1.77-13.233.311-4.316.068-8.672.068-12.861-2.554-.464-4.326-.86-6.12-1.098-4.415-.586-6.051-2.251-5.065-7.31 1.224-6.279.848-12.862 1.276-19.306.19-2.86-.971-4.473-3.794-4.753-4.113-.407-8.242-1.057-12.352-.975-4.663.093-5.192-2.272-4.751-6.012.733-6.229 1.252-12.483 1.875-18.726l1.102-10.495c-5.905-.309-11.146-.805-16.385-.778-3.32.017-5.174-1.4-5.566-4.4-1.172-8.968-2.479-17.944-3.001-26.96-.26-4.484-1.936-5.705-6.005-5.774-9.284-.158-18.563-.594-27.843-.953-7.241-.28-10.137-2.764-11.3-9.899-.746-4.576-2.715-7.801-7.777-8.207-7.739-.621-15.511-.992-23.207-1.961-7.327-.923-14.587-2.415-21.853-3.777-5.021-.941-10.003-2.086-15.003-3.14 4.515-22.952 13.122-44.382 26.284-63.587 18.054-26.344 41.439-47.239 69.102-63.294 15.847-9.197 32.541-16.277 50.376-20.599 16.655-4.036 33.617-5.715 50.622-4.385 33.334 2.606 63.836 13.955 92.415 31.15 15.864 9.545 30.241 20.86 42.269 34.758 8.113 9.374 15.201 19.78 21.718 30.359 10.772 17.484 16.846 36.922 20.611 56.991 1.783 9.503 2.815 19.214 3.318 28.876.758 14.578.755 29.196.65 44.311l-51.545 20.013c-7.779 3.059-15.847 5.376-21.753 12.365-4.73 5.598-10.658 10.316-16.547 14.774-9.9 7.496-18.437 15.988-25.083 26.631-3.333 5.337-7.901 10.381-12.999 14.038-11.355 8.144-17.397 18.973-19.615 32.423l-6.988 41.011z"></path><path fill="#ec663e" d="M318.11 923.047c-.702 17.693-.832 35.433-2.255 53.068-1.699 21.052-6.293 41.512-14.793 61.072-9.001 20.711-21.692 38.693-38.496 53.583-16.077 14.245-34.602 24.163-55.333 30.438-21.691 6.565-43.814 8.127-66.013 6.532-22.771-1.636-43.88-9.318-62.74-22.705-20.223-14.355-35.542-32.917-48.075-54.096-9.588-16.203-16.104-33.55-19.201-52.015-2.339-13.944-2.307-28.011-.403-42.182 2.627-19.545 9.021-37.699 17.963-55.067 11.617-22.564 27.317-41.817 48.382-56.118 15.819-10.74 33.452-17.679 52.444-20.455 8.77-1.282 17.696-1.646 
26.568-2.055 11.755-.542 23.534-.562 35.289-1.11 8.545-.399 17.067-1.291 26.193-1.675 1.349 1.77 2.24 3.199 2.835 4.742 4.727 12.261 10.575 23.865 18.636 34.358 7.747 10.084 14.83 20.684 22.699 30.666 3.919 4.972 8.37 9.96 13.609 13.352 7.711 4.994 16.238 8.792 24.617 12.668 5.852 2.707 12.037 4.691 18.074 6.998z"></path><path fill="#ea580e" d="M1285.167 162.995c3.796-29.75 13.825-56.841 32.74-80.577 16.339-20.505 36.013-36.502 59.696-47.614 14.666-6.881 29.971-11.669 46.208-12.749 10.068-.669 20.239-1.582 30.255-.863 16.6 1.191 32.646 5.412 47.9 12.273 19.39 8.722 36.44 20.771 50.582 36.655 15.281 17.162 25.313 37.179 31.49 59.286 5.405 19.343 6.31 39.161 4.705 58.825-2.37 29.045-11.836 55.923-30.451 78.885-10.511 12.965-22.483 24.486-37.181 33.649-5.272-5.613-10.008-11.148-14.539-16.846-5.661-7.118-10.958-14.533-16.78-21.513-4.569-5.478-9.548-10.639-14.624-15.658-3.589-3.549-7.411-6.963-11.551-9.827-5.038-3.485-10.565-6.254-15.798-9.468-8.459-5.195-17.011-9.669-26.988-11.898-12.173-2.72-24.838-4.579-35.622-11.834-1.437-.967-3.433-1.192-5.213-1.542-12.871-2.529-25.454-5.639-36.968-12.471-5.21-3.091-11.564-4.195-17.011-6.965-4.808-2.445-8.775-6.605-13.646-8.851-8.859-4.085-18.114-7.311-27.204-10.896z"></path><path fill="#f8ab00" d="M524.963 311.12c-9.461-5.684-19.513-10.592-28.243-17.236-12.877-9.801-24.031-21.578-32.711-35.412-11.272-17.965-19.605-37.147-21.902-58.403-1.291-11.951-2.434-24.073-1.87-36.034.823-17.452 4.909-34.363 11.581-50.703 8.82-21.603 22.25-39.792 39.568-55.065 18.022-15.894 39.162-26.07 62.351-32.332 19.22-5.19 38.842-6.177 58.37-4.674 23.803 1.831 45.56 10.663 65.062 24.496 17.193 12.195 31.688 27.086 42.894 45.622-11.403 8.296-22.633 16.117-34.092 23.586-17.094 11.142-34.262 22.106-48.036 37.528-8.796 9.848-17.201 20.246-27.131 28.837-16.859 14.585-27.745 33.801-41.054 51.019-11.865 15.349-20.663 33.117-30.354 50.08-5.303 9.283-9.654 19.11-14.434 28.692z"></path><path fill="#ea5227" d="M1060.11 1122.049c-7.377 1.649-14.683 4.093-22.147 4.763-11.519 1.033-23.166 1.441-34.723 1.054-19.343-.647-38.002-4.7-55.839-12.65-15.078-6.72-28.606-15.471-40.571-26.836-24.013-22.81-42.053-49.217-49.518-81.936-1.446-6.337-1.958-12.958-2.235-19.477-.591-13.926-.219-27.909-1.237-41.795-.916-12.5-3.16-24.904-4.408-37.805 1.555-1.381 3.134-2.074 3.778-3.27 4.729-8.79 12.141-15.159 19.083-22.03 5.879-5.818 10.688-12.76 16.796-18.293 6.993-6.335 11.86-13.596 14.364-22.612l8.542-29.993c8.015 1.785 15.984 3.821 24.057 5.286 8.145 1.478 16.371 2.59 24.602 3.493 8.453.927 16.956 1.408 25.891 2.609 1.119 16.09 1.569 31.667 2.521 47.214.676 11.045 1.396 22.154 3.234 33.043 2.418 14.329 5.708 28.527 9.075 42.674 3.499 14.705 4.028 29.929 10.415 44.188 10.157 22.674 18.29 46.25 28.281 69.004 7.175 16.341 12.491 32.973 15.078 50.615.645 4.4 3.256 8.511 4.963 12.755z"></path><path fill="#ea5330" d="M1060.512 1122.031c-2.109-4.226-4.72-8.337-5.365-12.737-2.587-17.642-7.904-34.274-15.078-50.615-9.991-22.755-18.124-46.33-28.281-69.004-6.387-14.259-6.916-29.482-10.415-44.188-3.366-14.147-6.656-28.346-9.075-42.674-1.838-10.889-2.558-21.999-3.234-33.043-.951-15.547-1.401-31.124-2.068-47.146 8.568-.18 17.146.487 25.704.286l41.868-1.4c.907 3.746 1.245 7.04 1.881 10.276l8.651 42.704c.903 4.108 2.334 8.422 4.696 11.829 7.165 10.338 14.809 20.351 22.456 30.345 4.218 5.512 8.291 11.304 13.361 15.955 8.641 7.927 18.065 14.995 27.071 22.532 12.011 10.052 24.452 19.302 40.151 22.854-1.656 11.102-2.391 22.44-5.172 33.253-4.792 18.637-12.38 36.209-23.412 52.216-13.053 18.94-29.086 34.662-49.627 45.055-10.757 
5.443-22.443 9.048-34.111 13.501z"></path><path fill="#f8aa05" d="M1989.106 883.951c5.198 8.794 11.46 17.148 15.337 26.491 5.325 12.833 9.744 26.207 12.873 39.737 2.95 12.757 3.224 25.908 1.987 39.219-1.391 14.973-4.643 29.268-10.349 43.034-5.775 13.932-13.477 26.707-23.149 38.405-14.141 17.104-31.215 30.458-50.807 40.488-14.361 7.352-29.574 12.797-45.741 14.594-10.297 1.144-20.732 2.361-31.031 1.894-24.275-1.1-47.248-7.445-68.132-20.263-6.096-3.741-11.925-7.917-17.731-12.342 5.319-5.579 10.361-10.852 15.694-15.811l37.072-34.009c.975-.892 2.113-1.606 3.08-2.505 6.936-6.448 14.765-12.2 20.553-19.556 8.88-11.285 20.064-19.639 31.144-28.292 4.306-3.363 9.06-6.353 12.673-10.358 5.868-6.504 10.832-13.814 16.422-20.582 6.826-8.264 13.727-16.481 20.943-24.401 4.065-4.461 8.995-8.121 13.249-12.424 14.802-14.975 28.77-30.825 45.913-43.317z"></path><path fill="#ed6876" d="M1256.099 523.419c5.065.642 10.047 1.787 15.068 2.728 7.267 1.362 14.526 2.854 21.853 3.777 7.696.97 15.468 1.34 23.207 1.961 5.062.406 7.031 3.631 7.777 8.207 1.163 7.135 4.059 9.62 11.3 9.899l27.843.953c4.069.069 5.745 1.291 6.005 5.774.522 9.016 1.829 17.992 3.001 26.96.392 3 2.246 4.417 5.566 4.4 5.239-.026 10.48.469 16.385.778l-1.102 10.495-1.875 18.726c-.44 3.74.088 6.105 4.751 6.012 4.11-.082 8.239.568 12.352.975 2.823.28 3.984 1.892 3.794 4.753-.428 6.444-.052 13.028-1.276 19.306-.986 5.059.651 6.724 5.065 7.31 1.793.238 3.566.634 6.12 1.098 0 4.189.243 8.545-.068 12.861-.319 4.43-1.27 8.811-1.77 13.233-.285 2.528-.461 5.379 3.332 5.331 13.729-.173 11.316 2.968 12.332 14.419.658 7.423-1.066 15.045-1.554 22.589-.145 2.241.28 4.519.519 7.827 2.026-1.452 2.733-2.167 3.589-2.521 1.142-.472 2.713-1.244 3.518-.81.767.414 1.114 2.221 1.081 3.4l-.917 24.539c-11.215.82-22.45.899-33.636 1.674l-43.952 3.436c-1.086-3.01-2.319-5.571-2.296-8.121.084-9.297-4.468-16.583-9.091-24.116-3.872-6.308-8.764-13.052-9.479-19.987-1.071-10.392-5.716-15.936-14.889-18.979-1.097-.364-2.16-.844-3.214-1.327-7.478-3.428-15.548-5.918-19.059-14.735-.904-2.27-3.657-3.775-5.461-5.723-2.437-2.632-4.615-5.525-7.207-7.987-2.648-2.515-5.352-5.346-8.589-6.777-4.799-2.121-10.074-3.185-15.175-4.596l-15.785-4.155c.274-12.896 1.722-25.901.54-38.662-1.647-17.783-3.457-35.526-2.554-53.352.528-10.426 2.539-20.777 3.948-31.574z"></path><path fill="#f6a200" d="M525.146 311.436c4.597-9.898 8.947-19.725 14.251-29.008 9.691-16.963 18.49-34.73 30.354-50.08 13.309-17.218 24.195-36.434 41.054-51.019 9.93-8.591 18.335-18.989 27.131-28.837 13.774-15.422 30.943-26.386 48.036-37.528 11.459-7.469 22.688-15.29 34.243-23.286 11.705 16.744 19.716 35.424 22.534 55.717 2.231 16.066 2.236 32.441 2.753 49.143-4.756 1.62-9.284 2.234-13.259 4.056-6.43 2.948-12.193 7.513-18.774 9.942-19.863 7.331-33.806 22.349-47.926 36.784-7.86 8.035-13.511 18.275-19.886 27.705-4.434 6.558-9.345 13.037-12.358 20.254-4.249 10.177-6.94 21.004-10.296 31.553-12.33.053-24.741 1.027-36.971-.049-20.259-1.783-40.227-5.567-58.755-14.69-.568-.28-1.295-.235-2.132-.658z"></path><path fill="#f7a80d" d="M1989.057 883.598c-17.093 12.845-31.061 28.695-45.863 43.67-4.254 4.304-9.184 7.963-13.249 12.424-7.216 7.92-14.117 16.137-20.943 24.401-5.59 6.768-10.554 14.078-16.422 20.582-3.614 4.005-8.367 6.995-12.673 10.358-11.08 8.653-22.264 17.007-31.144 28.292-5.788 7.356-13.617 13.108-20.553 19.556-.967.899-2.105 1.614-3.08 2.505l-37.072 34.009c-5.333 4.96-10.375 10.232-15.859 15.505-21.401-17.218-37.461-38.439-48.623-63.592 3.503-1.781 7.117-2.604 9.823-4.637 8.696-6.536 20.392-8.406 27.297-17.714.933-1.258 2.646-1.973 
4.065-2.828 17.878-10.784 36.338-20.728 53.441-32.624 10.304-7.167 18.637-17.23 27.583-26.261 3.819-3.855 7.436-8.091 10.3-12.681 12.283-19.68 24.43-39.446 40.382-56.471 12.224-13.047 17.258-29.524 22.539-45.927 15.85 4.193 29.819 12.129 42.632 22.08 10.583 8.219 19.782 17.883 27.42 29.351z"></path><path fill="#ef7a72" d="M1479.461 758.907c1.872-13.734 4.268-27.394 6.525-41.076 2.218-13.45 8.26-24.279 19.615-32.423 5.099-3.657 9.667-8.701 12.999-14.038 6.646-10.643 15.183-19.135 25.083-26.631 5.888-4.459 11.817-9.176 16.547-14.774 5.906-6.99 13.974-9.306 21.753-12.365l51.48-19.549c.753 11.848.658 23.787 1.641 35.637 1.771 21.353 4.075 42.672 11.748 62.955.17.449.107.985-.019 2.158-6.945 4.134-13.865 7.337-20.437 11.143-3.935 2.279-7.752 5.096-10.869 8.384-6.011 6.343-11.063 13.624-17.286 19.727-9.096 8.92-12.791 20.684-18.181 31.587-.202.409-.072.984-.096 1.481-8.488-1.72-16.937-3.682-25.476-5.094-9.689-1.602-19.426-3.084-29.201-3.949-15.095-1.335-30.241-2.1-45.828-3.172z"></path><path fill="#e94e3b" d="M957.995 766.838c-20.337-5.467-38.791-14.947-55.703-27.254-8.2-5.967-15.451-13.238-22.958-20.37 2.969-3.504 5.564-6.772 8.598-9.563 7.085-6.518 11.283-14.914 15.8-23.153 4.933-8.996 10.345-17.743 14.966-26.892 2.642-5.231 5.547-11.01 5.691-16.611.12-4.651.194-8.932 2.577-12.742 8.52-13.621 15.483-28.026 18.775-43.704 2.11-10.049 7.888-18.774 7.81-29.825-.064-9.089 4.291-18.215 6.73-27.313 3.212-11.983 7.369-23.797 9.492-35.968 3.202-18.358 5.133-36.945 7.346-55.466l4.879-45.8c6.693.288 13.386.575 20.54 1.365.13 3.458-.41 6.407-.496 9.37l-1.136 42.595c-.597 11.552-2.067 23.058-3.084 34.59l-3.845 44.478c-.939 10.202-1.779 20.432-3.283 30.557-.96 6.464-4.46 12.646-1.136 19.383.348.706-.426 1.894-.448 2.864-.224 9.918-5.99 19.428-2.196 29.646.103.279-.033.657-.092.983l-8.446 46.205c-1.231 6.469-2.936 12.846-4.364 19.279-1.5 6.757-2.602 13.621-4.456 20.277-3.601 12.93-10.657 25.3-5.627 39.47.368 1.036.234 2.352.017 3.476l-5.949 30.123z"></path><path fill="#ea5043" d="M958.343 767.017c1.645-10.218 3.659-20.253 5.602-30.302.217-1.124.351-2.44-.017-3.476-5.03-14.17 2.026-26.539 5.627-39.47 1.854-6.656 2.956-13.52 4.456-20.277 1.428-6.433 3.133-12.81 4.364-19.279l8.446-46.205c.059-.326.196-.705.092-.983-3.794-10.218 1.972-19.728 2.196-29.646.022-.97.796-2.158.448-2.864-3.324-6.737.176-12.919 1.136-19.383 1.504-10.125 2.344-20.355 3.283-30.557l3.845-44.478c1.017-11.532 2.488-23.038 3.084-34.59.733-14.18.722-28.397 1.136-42.595.086-2.963.626-5.912.956-9.301 5.356-.48 10.714-.527 16.536-.081 2.224 15.098 1.855 29.734 1.625 44.408-.157 10.064 1.439 20.142 1.768 30.23.334 10.235-.035 20.49.116 30.733.084 5.713.789 11.418.861 17.13.054 4.289-.469 8.585-.702 12.879-.072 1.323-.138 2.659-.031 3.975l2.534 34.405-1.707 36.293-1.908 48.69c-.182 8.103.993 16.237.811 24.34-.271 12.076-1.275 24.133-1.787 36.207-.102 2.414-.101 5.283 1.06 7.219 4.327 7.22 4.463 15.215 4.736 23.103.365 10.553.088 21.128.086 31.693-11.44 2.602-22.84.688-34.106-.916-11.486-1.635-22.806-4.434-34.546-6.903z"></path><path fill="#eb5d19" d="M398.091 622.45c6.086.617 12.21 1.288 18.067 2.918 3.539.985 6.779 3.277 9.952 5.297 9.773 6.224 18.971 13.583 29.311 18.611 8.606 4.184 12.839 10.986 17.016 18.559l18.571 32.959c1.814 3.102 4.285 5.931 6.883 8.443 8.835 8.542 10.052 20.175 13.16 31.095 2.082 7.317 4.609 14.507 6.946 22.127-29.472 3.021-58.969 5.582-87.584 15.222-1.185-2.302-1.795-4.362-2.769-6.233-4.398-8.449-6.703-18.174-14.942-24.299-2.511-1.866-5.103-3.814-7.047-6.218-8.358-10.332-17.028-20.276-28.772-26.973 4.423-11.478 
9.299-22.806 13.151-34.473 4.406-13.348 6.724-27.18 6.998-41.313.098-5.093.643-10.176 1.06-15.722z"></path><path fill="#e94c32" d="M981.557 392.109c-1.172 15.337-2.617 30.625-4.438 45.869-2.213 18.521-4.144 37.108-7.346 55.466-2.123 12.171-6.28 23.985-9.492 35.968-2.439 9.098-6.794 18.224-6.73 27.313.078 11.051-5.7 19.776-7.81 29.825-3.292 15.677-10.255 30.082-18.775 43.704-2.383 3.81-2.458 8.091-2.577 12.742-.144 5.6-3.049 11.38-5.691 16.611-4.621 9.149-10.033 17.896-14.966 26.892-4.517 8.239-8.715 16.635-15.8 23.153-3.034 2.791-5.629 6.06-8.735 9.255-12.197-10.595-21.071-23.644-29.301-37.24-7.608-12.569-13.282-25.962-17.637-40.37 13.303-6.889 25.873-13.878 35.311-25.315.717-.869 1.934-1.312 2.71-2.147 5.025-5.405 10.515-10.481 14.854-16.397 6.141-8.374 10.861-17.813 17.206-26.008 8.22-10.618 13.657-22.643 20.024-34.466 4.448-.626 6.729-3.21 8.114-6.89 1.455-3.866 2.644-7.895 4.609-11.492 4.397-8.05 9.641-15.659 13.708-23.86 3.354-6.761 5.511-14.116 8.203-21.206 5.727-15.082 7.277-31.248 12.521-46.578 3.704-10.828 3.138-23.116 4.478-34.753l7.56-.073z"></path><path fill="#f7a617" d="M1918.661 831.99c-4.937 16.58-9.971 33.057-22.196 46.104-15.952 17.025-28.099 36.791-40.382 56.471-2.864 4.59-6.481 8.825-10.3 12.681-8.947 9.031-17.279 19.094-27.583 26.261-17.103 11.896-35.564 21.84-53.441 32.624-1.419.856-3.132 1.571-4.065 2.828-6.904 9.308-18.6 11.178-27.297 17.714-2.705 2.033-6.319 2.856-9.874 4.281-3.413-9.821-6.916-19.583-9.36-29.602-1.533-6.284-1.474-12.957-1.665-19.913 1.913-.78 3.374-1.057 4.81-1.431 15.822-4.121 31.491-8.029 43.818-20.323 9.452-9.426 20.371-17.372 30.534-26.097 6.146-5.277 13.024-10.052 17.954-16.326 14.812-18.848 28.876-38.285 43.112-57.581 2.624-3.557 5.506-7.264 6.83-11.367 2.681-8.311 4.375-16.94 6.476-25.438 17.89.279 35.333 3.179 52.629 9.113z"></path><path fill="#ea553a" d="M1172.91 977.582c-15.775-3.127-28.215-12.377-40.227-22.43-9.005-7.537-18.43-14.605-27.071-22.532-5.07-4.651-9.143-10.443-13.361-15.955-7.647-9.994-15.291-20.007-22.456-30.345-2.361-3.407-3.792-7.72-4.696-11.829-3.119-14.183-5.848-28.453-8.651-42.704-.636-3.236-.974-6.53-1.452-10.209 15.234-2.19 30.471-3.969 46.408-5.622 2.692 5.705 4.882 11.222 6.63 16.876 2.9 9.381 7.776 17.194 15.035 24.049 7.056 6.662 13.305 14.311 19.146 22.099 9.509 12.677 23.01 19.061 36.907 25.054-1.048 7.441-2.425 14.854-3.066 22.33-.956 11.162-1.393 22.369-2.052 33.557l-1.096 17.661z"></path><path fill="#ea5453" d="M1163.123 704.036c-4.005 5.116-7.685 10.531-12.075 15.293-12.842 13.933-27.653 25.447-44.902 34.538-3.166-5.708-5.656-11.287-8.189-17.251-3.321-12.857-6.259-25.431-9.963-37.775-4.6-15.329-10.6-30.188-11.349-46.562-.314-6.871-1.275-14.287-7.114-19.644-1.047-.961-1.292-3.053-1.465-4.67l-4.092-39.927c-.554-5.245-.383-10.829-2.21-15.623-3.622-9.503-4.546-19.253-4.688-29.163-.088-6.111 1.068-12.256.782-18.344-.67-14.281-1.76-28.546-2.9-42.8-.657-8.222-1.951-16.395-2.564-24.62-.458-6.137-.285-12.322-.104-18.21.959 5.831 1.076 11.525 2.429 16.909 2.007 7.986 5.225 15.664 7.324 23.632 3.222 12.23 1.547 25.219 6.728 37.355 4.311 10.099 6.389 21.136 9.732 31.669 2.228 7.02 6.167 13.722 7.121 20.863 1.119 8.376 6.1 13.974 10.376 20.716l2.026 10.576c1.711 9.216 3.149 18.283 8.494 26.599 6.393 9.946 11.348 20.815 16.943 31.276 4.021 7.519 6.199 16.075 12.925 22.065l24.462 22.26c.556.503 1.507.571 2.274.841z"></path><path fill="#ea5b15" d="M1285.092 163.432c9.165 3.148 18.419 6.374 27.279 10.459 4.871 2.246 8.838 6.406 13.646 8.851 5.446 2.77 11.801 3.874 17.011 6.965 11.514 6.831 24.097 9.942 36.968 12.471 1.78.35 
3.777.576 5.213 1.542 10.784 7.255 23.448 9.114 35.622 11.834 9.977 2.23 18.529 6.703 26.988 11.898 5.233 3.214 10.76 5.983 15.798 9.468 4.14 2.864 7.962 6.279 11.551 9.827 5.076 5.02 10.056 10.181 14.624 15.658 5.822 6.98 11.119 14.395 16.78 21.513 4.531 5.698 9.267 11.233 14.222 16.987-10.005 5.806-20.07 12.004-30.719 16.943-7.694 3.569-16.163 5.464-24.688 7.669-2.878-7.088-5.352-13.741-7.833-20.392-.802-2.15-1.244-4.55-2.498-6.396-4.548-6.7-9.712-12.999-14.011-19.847-6.672-10.627-15.34-18.93-26.063-25.376-9.357-5.625-18.367-11.824-27.644-17.587-6.436-3.997-12.902-8.006-19.659-11.405-5.123-2.577-11.107-3.536-16.046-6.37-17.187-9.863-35.13-17.887-54.031-23.767-4.403-1.37-8.953-2.267-13.436-3.382l.926-27.565z"></path><path fill="#ea504b" d="M1098 737l7.789 16.893c-15.04 9.272-31.679 15.004-49.184 17.995-9.464 1.617-19.122 2.097-29.151 3.019-.457-10.636-.18-21.211-.544-31.764-.273-7.888-.409-15.883-4.736-23.103-1.16-1.936-1.162-4.805-1.06-7.219l1.787-36.207c.182-8.103-.993-16.237-.811-24.34.365-16.236 1.253-32.461 1.908-48.69.484-12 .942-24.001 1.98-36.069 5.57 10.19 10.632 20.42 15.528 30.728 1.122 2.362 2.587 5.09 2.339 7.488-1.536 14.819 5.881 26.839 12.962 38.33 10.008 16.241 16.417 33.54 20.331 51.964 2.285 10.756 4.729 21.394 11.958 30.165L1098 737z"></path><path fill="#f6a320" d="M1865.78 822.529c-1.849 8.846-3.544 17.475-6.224 25.786-1.323 4.102-4.206 7.81-6.83 11.367l-43.112 57.581c-4.93 6.273-11.808 11.049-17.954 16.326-10.162 8.725-21.082 16.671-30.534 26.097-12.327 12.294-27.997 16.202-43.818 20.323-1.436.374-2.897.651-4.744.986-1.107-17.032-1.816-34.076-2.079-51.556 1.265-.535 2.183-.428 2.888-.766 10.596-5.072 20.8-11.059 32.586-13.273 1.69-.317 3.307-1.558 4.732-2.662l26.908-21.114c4.992-4.003 11.214-7.393 14.381-12.585 11.286-18.5 22.363-37.263 27.027-58.87l36.046 1.811c3.487.165 6.983.14 10.727.549z"></path><path fill="#ec6333" d="M318.448 922.814c-6.374-2.074-12.56-4.058-18.412-6.765-8.379-3.876-16.906-7.675-24.617-12.668-5.239-3.392-9.69-8.381-13.609-13.352-7.87-9.983-14.953-20.582-22.699-30.666-8.061-10.493-13.909-22.097-18.636-34.358-.595-1.543-1.486-2.972-2.382-4.783 6.84-1.598 13.797-3.023 20.807-4.106 18.852-2.912 36.433-9.493 53.737-17.819.697.888.889 1.555 1.292 2.051l17.921 21.896c4.14 4.939 8.06 10.191 12.862 14.412 5.67 4.984 12.185 9.007 18.334 13.447-8.937 16.282-16.422 33.178-20.696 51.31-1.638 6.951-2.402 14.107-3.903 21.403z"></path><path fill="#f49700" d="M623.467 326.903c2.893-10.618 5.584-21.446 9.833-31.623 3.013-7.217 7.924-13.696 12.358-20.254 6.375-9.43 12.026-19.67 19.886-27.705 14.12-14.434 28.063-29.453 47.926-36.784 6.581-2.429 12.344-6.994 18.774-9.942 3.975-1.822 8.503-2.436 13.186-3.592 1.947 18.557 3.248 37.15 8.307 55.686-15.453 7.931-28.853 18.092-40.46 29.996-10.417 10.683-19.109 23.111-28.013 35.175-3.238 4.388-4.888 9.948-7.262 14.973-17.803-3.987-35.767-6.498-54.535-5.931z"></path><path fill="#ea544c" d="M1097.956 736.615c-2.925-3.218-5.893-6.822-8.862-10.425-7.229-8.771-9.672-19.409-11.958-30.165-3.914-18.424-10.323-35.722-20.331-51.964-7.081-11.491-14.498-23.511-12.962-38.33.249-2.398-1.217-5.126-2.339-7.488l-15.232-31.019-3.103-34.338c-.107-1.316-.041-2.653.031-3.975.233-4.294.756-8.59.702-12.879-.072-5.713-.776-11.417-.861-17.13l-.116-30.733c-.329-10.088-1.926-20.166-1.768-30.23.23-14.674.599-29.31-1.162-44.341 9.369-.803 18.741-1.179 28.558-1.074 1.446 15.814 2.446 31.146 3.446 46.478.108 6.163-.064 12.348.393 18.485.613 8.225 1.907 16.397 2.564 24.62l2.9 42.8c.286 6.088-.869 12.234-.782 18.344.142 9.91 1.066 19.661 4.688 29.163 
1.827 4.794 1.657 10.377 2.21 15.623l4.092 39.927c.172 1.617.417 3.71 1.465 4.67 5.839 5.357 6.8 12.773 7.114 19.644.749 16.374 6.749 31.233 11.349 46.562 3.704 12.344 6.642 24.918 9.963 37.775z"></path><path fill="#ec5c61" d="M1204.835 568.008c1.254 25.351-1.675 50.16-10.168 74.61-8.598-4.883-18.177-8.709-24.354-15.59-7.44-8.289-13.929-17.442-21.675-25.711-8.498-9.072-16.731-18.928-21.084-31.113-.54-1.513-1.691-2.807-2.594-4.564-4.605-9.247-7.706-18.544-7.96-29.09-.835-7.149-1.214-13.944-2.609-20.523-2.215-10.454-5.626-20.496-7.101-31.302-2.513-18.419-7.207-36.512-5.347-55.352.24-2.43-.17-4.949-.477-7.402l-4.468-34.792c2.723-.379 5.446-.757 8.585-.667 1.749 8.781 2.952 17.116 4.448 25.399 1.813 10.037 3.64 20.084 5.934 30.017 1.036 4.482 3.953 8.573 4.73 13.064 1.794 10.377 4.73 20.253 9.272 29.771 2.914 6.105 4.761 12.711 7.496 18.912 2.865 6.496 6.264 12.755 9.35 19.156 3.764 7.805 7.667 15.013 16.1 19.441 7.527 3.952 13.713 10.376 20.983 14.924 6.636 4.152 13.932 7.25 20.937 10.813z"></path><path fill="#ed676f" d="M1140.75 379.231c18.38-4.858 36.222-11.21 53.979-18.971 3.222 3.368 5.693 6.744 8.719 9.512 2.333 2.134 5.451 5.07 8.067 4.923 7.623-.429 12.363 2.688 17.309 8.215 5.531 6.18 12.744 10.854 19.224 16.184-5.121 7.193-10.461 14.241-15.323 21.606-13.691 20.739-22.99 43.255-26.782 67.926-.543 3.536-1.281 7.043-2.366 10.925-14.258-6.419-26.411-14.959-32.731-29.803-1.087-2.553-2.596-4.93-3.969-7.355-1.694-2.993-3.569-5.89-5.143-8.943-1.578-3.062-2.922-6.249-4.295-9.413-1.57-3.621-3.505-7.163-4.47-10.946-1.257-4.93-.636-10.572-2.725-15.013-5.831-12.397-7.467-25.628-9.497-38.847z"></path><path fill="#ed656e" d="M1254.103 647.439c5.325.947 10.603 2.272 15.847 3.722 5.101 1.41 10.376 2.475 15.175 4.596 3.237 1.431 5.942 4.262 8.589 6.777 2.592 2.462 4.77 5.355 7.207 7.987 1.804 1.948 4.557 3.453 5.461 5.723 3.51 8.817 11.581 11.307 19.059 14.735 1.053.483 2.116.963 3.214 1.327 9.172 3.043 13.818 8.587 14.889 18.979.715 6.935 5.607 13.679 9.479 19.987 4.623 7.533 9.175 14.819 9.091 24.116-.023 2.55 1.21 5.111 1.874 8.055-19.861 2.555-39.795 4.296-59.597 9.09l-11.596-23.203c-1.107-2.169-2.526-4.353-4.307-5.975-7.349-6.694-14.863-13.209-22.373-19.723l-17.313-14.669c-2.776-2.245-5.935-4.017-8.92-6.003l11.609-38.185c1.508-5.453 1.739-11.258 2.613-17.336z"></path><path fill="#ec6168" d="M1140.315 379.223c2.464 13.227 4.101 26.459 9.931 38.856 2.089 4.441 1.468 10.083 2.725 15.013.965 3.783 2.9 7.325 4.47 10.946 1.372 3.164 2.716 6.351 4.295 9.413 1.574 3.053 3.449 5.95 5.143 8.943 1.372 2.425 2.882 4.803 3.969 7.355 6.319 14.844 18.473 23.384 32.641 30.212.067 5.121-.501 10.201-.435 15.271l.985 38.117c.151 4.586.616 9.162.868 14.201-7.075-3.104-14.371-6.202-21.007-10.354-7.269-4.548-13.456-10.972-20.983-14.924-8.434-4.428-12.337-11.637-16.1-19.441-3.087-6.401-6.485-12.66-9.35-19.156-2.735-6.201-4.583-12.807-7.496-18.912-4.542-9.518-7.477-19.394-9.272-29.771-.777-4.491-3.694-8.581-4.73-13.064-2.294-9.933-4.121-19.98-5.934-30.017-1.496-8.283-2.699-16.618-4.036-25.335 10.349-2.461 20.704-4.511 31.054-6.582.957-.191 1.887-.515 3.264-.769z"></path><path fill="#e94c28" d="M922 537c-6.003 11.784-11.44 23.81-19.66 34.428-6.345 8.196-11.065 17.635-17.206 26.008-4.339 5.916-9.828 10.992-14.854 16.397-.776.835-1.993 1.279-2.71 2.147-9.439 11.437-22.008 18.427-35.357 24.929-4.219-10.885-6.942-22.155-7.205-33.905l-.514-49.542c7.441-2.893 14.452-5.197 21.334-7.841 1.749-.672 3.101-2.401 4.604-3.681 6.749-5.745 12.845-12.627 20.407-16.944 7.719-4.406 14.391-9.101 18.741-16.889.626-1.122 1.689-2.077 
2.729-2.877 7.197-5.533 12.583-12.51 16.906-20.439.68-1.247 2.495-1.876 4.105-2.651 2.835 1.408 5.267 2.892 7.884 3.892 3.904 1.491 4.392 3.922 2.833 7.439-1.47 3.318-2.668 6.756-4.069 10.106-1.247 2.981-.435 5.242 2.413 6.544 2.805 1.282 3.125 3.14 1.813 5.601l-6.907 12.799L922 537z"></path><path fill="#eb5659" d="M1124.995 566c.868 1.396 2.018 2.691 2.559 4.203 4.353 12.185 12.586 22.041 21.084 31.113 7.746 8.269 14.235 17.422 21.675 25.711 6.176 6.881 15.756 10.707 24.174 15.932-6.073 22.316-16.675 42.446-31.058 60.937-1.074-.131-2.025-.199-2.581-.702l-24.462-22.26c-6.726-5.99-8.904-14.546-12.925-22.065-5.594-10.461-10.55-21.33-16.943-31.276-5.345-8.315-6.783-17.383-8.494-26.599-.63-3.394-1.348-6.772-1.738-10.848-.371-6.313-1.029-11.934-1.745-18.052l6.34 4.04 1.288-.675-2.143-15.385 9.454 1.208v-8.545L1124.995 566z"></path><path fill="#f5a02d" d="M1818.568 820.096c-4.224 21.679-15.302 40.442-26.587 58.942-3.167 5.192-9.389 8.582-14.381 12.585l-26.908 21.114c-1.425 1.104-3.042 2.345-4.732 2.662-11.786 2.214-21.99 8.201-32.586 13.273-.705.338-1.624.231-2.824.334a824.35 824.35 0 0 1-8.262-42.708c4.646-2.14 9.353-3.139 13.269-5.47 5.582-3.323 11.318-6.942 15.671-11.652 7.949-8.6 14.423-18.572 22.456-27.081 8.539-9.046 13.867-19.641 18.325-30.922l46.559 8.922z"></path><path fill="#eb5a57" d="M1124.96 565.639c-5.086-4.017-10.208-8.395-15.478-12.901v8.545l-9.454-1.208 2.143 15.385-1.288.675-6.34-4.04c.716 6.118 1.375 11.74 1.745 17.633-4.564-6.051-9.544-11.649-10.663-20.025-.954-7.141-4.892-13.843-7.121-20.863-3.344-10.533-5.421-21.57-9.732-31.669-5.181-12.135-3.506-25.125-6.728-37.355-2.099-7.968-5.317-15.646-7.324-23.632-1.353-5.384-1.47-11.078-2.429-16.909l-3.294-46.689a278.63 278.63 0 0 1 27.57-2.084c2.114 12.378 3.647 24.309 5.479 36.195 1.25 8.111 2.832 16.175 4.422 24.23 1.402 7.103 2.991 14.169 4.55 21.241 1.478 6.706.273 14.002 4.6 20.088 5.401 7.597 7.176 16.518 9.467 25.337 1.953 7.515 5.804 14.253 11.917 19.406.254 10.095 3.355 19.392 7.96 28.639z"></path><path fill="#ea541c" d="M911.651 810.999c-2.511 10.165-5.419 20.146-8.2 30.162-2.503 9.015-7.37 16.277-14.364 22.612-6.108 5.533-10.917 12.475-16.796 18.293-6.942 6.871-14.354 13.24-19.083 22.03-.644 1.196-2.222 1.889-3.705 2.857-2.39-7.921-4.101-15.991-6.566-23.823-5.451-17.323-12.404-33.976-23.414-48.835l21.627-21.095c3.182-3.29 5.532-7.382 8.295-11.083l10.663-14.163c9.528 4.78 18.925 9.848 28.625 14.247 7.324 3.321 15.036 5.785 22.917 8.799z"></path><path fill="#eb5d19" d="M1284.092 191.421c4.557.69 9.107 1.587 13.51 2.957 18.901 5.881 36.844 13.904 54.031 23.767 4.938 2.834 10.923 3.792 16.046 6.37 6.757 3.399 13.224 7.408 19.659 11.405l27.644 17.587c10.723 6.446 19.392 14.748 26.063 25.376 4.299 6.848 9.463 13.147 14.011 19.847 1.254 1.847 1.696 4.246 2.498 6.396l7.441 20.332c-11.685 1.754-23.379 3.133-35.533 4.037-.737-2.093-.995-3.716-1.294-5.33-3.157-17.057-14.048-30.161-23.034-44.146-3.027-4.71-7.786-8.529-12.334-11.993-9.346-7.116-19.004-13.834-28.688-20.491-6.653-4.573-13.311-9.251-20.431-13.002-8.048-4.24-16.479-7.85-24.989-11.091-11.722-4.465-23.673-8.328-35.527-12.449l.927-19.572z"></path><path fill="#eb5e24" d="M1283.09 211.415c11.928 3.699 23.88 7.562 35.602 12.027 8.509 3.241 16.941 6.852 24.989 11.091 7.12 3.751 13.778 8.429 20.431 13.002 9.684 6.657 19.342 13.375 28.688 20.491 4.548 3.463 9.307 7.283 12.334 11.993 8.986 13.985 19.877 27.089 23.034 44.146.299 1.615.557 3.237.836 
5.263-13.373-.216-26.749-.839-40.564-1.923-2.935-9.681-4.597-18.92-12.286-26.152-15.577-14.651-30.4-30.102-45.564-45.193-.686-.683-1.626-1.156-2.516-1.584l-47.187-22.615 2.203-20.546z"></path><path fill="#e9511f" d="M913 486.001c-1.29.915-3.105 1.543-3.785 2.791-4.323 7.929-9.709 14.906-16.906 20.439-1.04.8-2.103 1.755-2.729 2.877-4.35 7.788-11.022 12.482-18.741 16.889-7.562 4.317-13.658 11.199-20.407 16.944-1.503 1.28-2.856 3.009-4.604 3.681-6.881 2.643-13.893 4.948-21.262 7.377-.128-11.151.202-22.302.378-33.454.03-1.892-.6-3.795-.456-6.12 13.727-1.755 23.588-9.527 33.278-17.663 2.784-2.337 6.074-4.161 8.529-6.784l29.057-31.86c1.545-1.71 3.418-3.401 4.221-5.459 5.665-14.509 11.49-28.977 16.436-43.736 2.817-8.407 4.074-17.338 6.033-26.032 5.039.714 10.078 1.427 15.536 2.629-.909 8.969-2.31 17.438-3.546 25.931-2.41 16.551-5.84 32.839-11.991 48.461L913 486.001z"></path><path fill="#ea5741" d="M1179.451 903.828c-14.224-5.787-27.726-12.171-37.235-24.849-5.841-7.787-12.09-15.436-19.146-22.099-7.259-6.854-12.136-14.667-15.035-24.049-1.748-5.654-3.938-11.171-6.254-17.033 15.099-4.009 30.213-8.629 44.958-15.533l28.367 36.36c6.09 8.015 13.124 14.75 22.72 18.375-7.404 14.472-13.599 29.412-17.48 45.244-.271 1.106-.382 2.25-.895 3.583z"></path><path fill="#ea522a" d="M913.32 486.141c2.693-7.837 5.694-15.539 8.722-23.231 6.151-15.622 9.581-31.91 11.991-48.461l3.963-25.861c7.582.317 15.168 1.031 22.748 1.797 4.171.421 8.333.928 12.877 1.596-.963 11.836-.398 24.125-4.102 34.953-5.244 15.33-6.794 31.496-12.521 46.578-2.692 7.09-4.849 14.445-8.203 21.206-4.068 8.201-9.311 15.81-13.708 23.86-1.965 3.597-3.154 7.627-4.609 11.492-1.385 3.68-3.666 6.265-8.114 6.89-1.994-1.511-3.624-3.059-5.077-4.44l6.907-12.799c1.313-2.461.993-4.318-1.813-5.601-2.849-1.302-3.66-3.563-2.413-6.544 1.401-3.35 2.599-6.788 4.069-10.106 1.558-3.517 1.071-5.948-2.833-7.439-2.617-1-5.049-2.484-7.884-3.892z"></path><path fill="#eb5e24" d="M376.574 714.118c12.053 6.538 20.723 16.481 29.081 26.814 1.945 2.404 4.537 4.352 7.047 6.218 8.24 6.125 10.544 15.85 14.942 24.299.974 1.871 1.584 3.931 2.376 6.29-7.145 3.719-14.633 6.501-21.386 10.517-9.606 5.713-18.673 12.334-28.425 18.399-3.407-3.73-6.231-7.409-9.335-10.834l-30.989-33.862c11.858-11.593 22.368-24.28 31.055-38.431 1.86-3.031 3.553-6.164 5.632-9.409z"></path><path fill="#e95514" d="M859.962 787.636c-3.409 5.037-6.981 9.745-10.516 14.481-2.763 3.701-5.113 7.792-8.295 11.083-6.885 7.118-14.186 13.834-21.65 20.755-13.222-17.677-29.417-31.711-48.178-42.878-.969-.576-2.068-.934-3.27-1.709 6.28-8.159 12.733-15.993 19.16-23.849 1.459-1.783 2.718-3.738 4.254-5.448l18.336-19.969c4.909 5.34 9.619 10.738 14.081 16.333 9.72 12.19 21.813 21.566 34.847 29.867.411.262.725.674 1.231 1.334z"></path><path fill="#eb5f2d" d="M339.582 762.088l31.293 33.733c3.104 3.425 5.928 7.104 9.024 10.979-12.885 11.619-24.548 24.139-33.899 38.704-.872 1.359-1.56 2.837-2.644 4.428-6.459-4.271-12.974-8.294-18.644-13.278-4.802-4.221-8.722-9.473-12.862-14.412l-17.921-21.896c-.403-.496-.595-1.163-.926-2.105 16.738-10.504 32.58-21.87 46.578-36.154z"></path><path fill="#f28d00" d="M678.388 332.912c1.989-5.104 3.638-10.664 6.876-15.051 8.903-12.064 17.596-24.492 28.013-35.175 11.607-11.904 25.007-22.064 40.507-29.592 4.873 11.636 9.419 23.412 13.67 35.592-5.759 4.084-11.517 7.403-16.594 11.553-4.413 3.607-8.124 8.092-12.023 12.301-5.346 5.772-10.82 11.454-15.782 17.547-3.929 4.824-7.17 10.208-10.716 15.344l-33.95-12.518z"></path><path fill="#f08369" d="M1580.181 771.427c-.191-.803-.322-1.377-.119-1.786 5.389-10.903 
9.084-22.666 18.181-31.587 6.223-6.103 11.276-13.385 17.286-19.727 3.117-3.289 6.933-6.105 10.869-8.384 6.572-3.806 13.492-7.009 20.461-10.752 1.773 3.23 3.236 6.803 4.951 10.251l12.234 24.993c-1.367 1.966-2.596 3.293-3.935 4.499-7.845 7.07-16.315 13.564-23.407 21.32-6.971 7.623-12.552 16.517-18.743 24.854l-37.777-13.68z"></path><path fill="#f18b5e" d="M1618.142 785.4c6.007-8.63 11.588-17.524 18.559-25.147 7.092-7.755 15.562-14.249 23.407-21.32 1.338-1.206 2.568-2.534 3.997-4.162l28.996 33.733c1.896 2.205 4.424 3.867 6.66 6.394-6.471 7.492-12.967 14.346-19.403 21.255l-18.407 19.953c-12.958-12.409-27.485-22.567-43.809-30.706z"></path><path fill="#f49c3a" d="M1771.617 811.1c-4.066 11.354-9.394 21.949-17.933 30.995-8.032 8.509-14.507 18.481-22.456 27.081-4.353 4.71-10.089 8.329-15.671 11.652-3.915 2.331-8.623 3.331-13.318 5.069-4.298-9.927-8.255-19.998-12.1-30.743 4.741-4.381 9.924-7.582 13.882-11.904 7.345-8.021 14.094-16.603 20.864-25.131 4.897-6.168 9.428-12.626 14.123-18.955l32.61 11.936z"></path><path fill="#f08000" d="M712.601 345.675c3.283-5.381 6.524-10.765 10.453-15.589 4.962-6.093 10.435-11.774 15.782-17.547 3.899-4.21 7.61-8.695 12.023-12.301 5.078-4.15 10.836-7.469 16.636-11.19a934.12 934.12 0 0 1 23.286 35.848c-4.873 6.234-9.676 11.895-14.63 17.421l-25.195 27.801c-11.713-9.615-24.433-17.645-38.355-24.443z"></path><path fill="#ed6e04" d="M751.11 370.42c8.249-9.565 16.693-18.791 25.041-28.103 4.954-5.526 9.757-11.187 14.765-17.106 7.129 6.226 13.892 13.041 21.189 19.225 5.389 4.567 11.475 8.312 17.53 12.92-5.51 7.863-10.622 15.919-17.254 22.427-8.881 8.716-18.938 16.233-28.49 24.264-5.703-6.587-11.146-13.427-17.193-19.682-4.758-4.921-10.261-9.121-15.587-13.944z"></path><path fill="#ea541c" d="M921.823 385.544c-1.739 9.04-2.995 17.971-5.813 26.378-4.946 14.759-10.771 29.227-16.436 43.736-.804 2.058-2.676 3.749-4.221 5.459l-29.057 31.86c-2.455 2.623-5.745 4.447-8.529 6.784-9.69 8.135-19.551 15.908-33.208 17.237-1.773-9.728-3.147-19.457-4.091-29.6l36.13-16.763c.581-.267 1.046-.812 1.525-1.269 8.033-7.688 16.258-15.19 24.011-23.152 4.35-4.467 9.202-9.144 11.588-14.69 6.638-15.425 15.047-30.299 17.274-47.358 3.536.344 7.072.688 10.829 1.377z"></path><path fill="#f3944d" d="M1738.688 798.998c-4.375 6.495-8.906 12.953-13.803 19.121-6.771 8.528-13.519 17.11-20.864 25.131-3.958 4.322-9.141 7.523-13.925 11.54-8.036-13.464-16.465-26.844-27.999-38.387 5.988-6.951 12.094-13.629 18.261-20.25l19.547-20.95 38.783 23.794z"></path><path fill="#ec6168" d="M1239.583 703.142c3.282 1.805 6.441 3.576 9.217 5.821 5.88 4.755 11.599 9.713 17.313 14.669l22.373 19.723c1.781 1.622 3.2 3.806 4.307 5.975 3.843 7.532 7.477 15.171 11.194 23.136-10.764 4.67-21.532 8.973-32.69 12.982l-22.733-27.366c-2.003-2.416-4.096-4.758-6.194-7.093-3.539-3.94-6.927-8.044-10.74-11.701-2.57-2.465-5.762-4.283-8.675-6.39l16.627-29.755z"></path><path fill="#ec663e" d="M1351.006 332.839l-28.499 10.33c-.294.107-.533.367-1.194.264-11.067-19.018-27.026-32.559-44.225-44.855-4.267-3.051-8.753-5.796-13.138-8.682l9.505-24.505c10.055 4.069 19.821 8.227 29.211 13.108 3.998 2.078 7.299 5.565 10.753 8.598 3.077 2.701 5.743 5.891 8.926 8.447 4.116 3.304 9.787 5.345 12.62 9.432 6.083 8.777 10.778 18.517 16.041 27.863z"></path><path fill="#eb5e5b" d="M1222.647 733.051c3.223 1.954 6.415 3.771 8.985 6.237 3.813 3.658 7.201 7.761 10.74 11.701l6.194 7.093 22.384 27.409c-13.056 6.836-25.309 14.613-36.736 24.161l-39.323-44.7 24.494-27.846c1.072-1.224 1.974-2.598 3.264-4.056z"></path><path fill="#ea580e" d="M876.001 376.171c5.874 1.347 11.748 2.694 17.812 
4.789-.81 5.265-2.687 9.791-2.639 14.296.124 11.469-4.458 20.383-12.73 27.863-2.075 1.877-3.659 4.286-5.668 6.248l-22.808 21.967c-.442.422-1.212.488-1.813.757l-23.113 10.389-9.875 4.514c-2.305-6.09-4.609-12.181-6.614-18.676 7.64-4.837 15.567-8.54 22.18-13.873 9.697-7.821 18.931-16.361 27.443-25.455 5.613-5.998 12.679-11.331 14.201-20.475.699-4.2 2.384-8.235 3.623-12.345z"></path><path fill="#e95514" d="M815.103 467.384c3.356-1.894 6.641-3.415 9.94-4.903l23.113-10.389c.6-.269 1.371-.335 1.813-.757l22.808-21.967c2.008-1.962 3.593-4.371 5.668-6.248 8.272-7.48 12.854-16.394 12.73-27.863-.049-4.505 1.828-9.031 2.847-13.956 5.427.559 10.836 1.526 16.609 2.68-1.863 17.245-10.272 32.119-16.91 47.544-2.387 5.546-7.239 10.223-11.588 14.69-7.753 7.962-15.978 15.464-24.011 23.152-.478.458-.944 1.002-1.525 1.269l-36.069 16.355c-2.076-6.402-3.783-12.81-5.425-19.607z"></path><path fill="#eb620b" d="M783.944 404.402c9.499-8.388 19.556-15.905 28.437-24.621 6.631-6.508 11.744-14.564 17.575-22.273 9.271 4.016 18.501 8.375 27.893 13.43-4.134 7.07-8.017 13.778-12.833 19.731-5.785 7.15-12.109 13.917-18.666 20.376-7.99 7.869-16.466 15.244-24.731 22.832l-17.674-29.475z"></path><path fill="#ea544c" d="M1197.986 854.686c-9.756-3.309-16.79-10.044-22.88-18.059l-28.001-36.417c8.601-5.939 17.348-11.563 26.758-17.075 1.615 1.026 2.639 1.876 3.505 2.865l26.664 30.44c3.723 4.139 7.995 7.785 12.017 11.656l-18.064 26.591z"></path><path fill="#ec6333" d="M1351.41 332.903c-5.667-9.409-10.361-19.149-16.445-27.926-2.833-4.087-8.504-6.128-12.62-9.432-3.184-2.555-5.849-5.745-8.926-8.447-3.454-3.033-6.756-6.52-10.753-8.598-9.391-4.88-19.157-9.039-29.138-13.499 1.18-5.441 2.727-10.873 4.81-16.607 11.918 4.674 24.209 8.261 34.464 14.962 14.239 9.304 29.011 18.453 39.595 32.464 2.386 3.159 5.121 6.077 7.884 8.923 6.564 6.764 10.148 14.927 11.723 24.093l-20.594 4.067z"></path><path fill="#eb5e5b" d="M1117 536.549c-6.113-4.702-9.965-11.44-11.917-18.955-2.292-8.819-4.066-17.74-9.467-25.337-4.327-6.085-3.122-13.382-4.6-20.088l-4.55-21.241c-1.59-8.054-3.172-16.118-4.422-24.23l-5.037-36.129c6.382-1.43 12.777-2.462 19.582-3.443 1.906 11.646 3.426 23.24 4.878 34.842.307 2.453.717 4.973.477 7.402-1.86 18.84 2.834 36.934 5.347 55.352 1.474 10.806 4.885 20.848 7.101 31.302 1.394 6.579 1.774 13.374 2.609 20.523z"></path><path fill="#ec644b" d="M1263.638 290.071c4.697 2.713 9.183 5.458 13.45 8.509 17.199 12.295 33.158 25.836 43.873 44.907-8.026 4.725-16.095 9.106-24.83 13.372-11.633-15.937-25.648-28.515-41.888-38.689-1.609-1.008-3.555-1.48-5.344-2.2 2.329-3.852 4.766-7.645 6.959-11.573l7.78-14.326z"></path><path fill="#eb5f2d" d="M1372.453 328.903c-2.025-9.233-5.608-17.396-12.172-24.16-2.762-2.846-5.498-5.764-7.884-8.923-10.584-14.01-25.356-23.16-39.595-32.464-10.256-6.701-22.546-10.289-34.284-15.312.325-5.246 1.005-10.444 2.027-15.863l47.529 22.394c.89.428 1.83.901 2.516 1.584l45.564 45.193c7.69 7.233 9.352 16.472 11.849 26.084-5.032.773-10.066 1.154-15.55 1.466z"></path><path fill="#e95a0f" d="M801.776 434.171c8.108-7.882 16.584-15.257 24.573-23.126 6.558-6.459 12.881-13.226 18.666-20.376 4.817-5.953 8.7-12.661 13.011-19.409 5.739 1.338 11.463 3.051 17.581 4.838-.845 4.183-2.53 8.219-3.229 12.418-1.522 9.144-8.588 14.477-14.201 20.475-8.512 9.094-17.745 17.635-27.443 25.455-6.613 5.333-14.54 9.036-22.223 13.51-2.422-4.469-4.499-8.98-6.735-13.786z"></path><path fill="#eb5e5b" d="M1248.533 316.002c2.155.688 4.101 1.159 5.71 2.168 16.24 10.174 30.255 22.752 41.532 38.727-7.166 5.736-14.641 11.319-22.562 
16.731-1.16-1.277-1.684-2.585-2.615-3.46l-38.694-36.2 14.203-15.029c.803-.86 1.38-1.93 2.427-2.936z"></path><path fill="#eb5a57" d="M1216.359 827.958c-4.331-3.733-8.603-7.379-12.326-11.518l-26.664-30.44c-.866-.989-1.89-1.839-3.152-2.902 6.483-6.054 13.276-11.959 20.371-18.005l39.315 44.704c-5.648 6.216-11.441 12.12-17.544 18.161z"></path><path fill="#ec6168" d="M1231.598 334.101l38.999 36.066c.931.876 1.456 2.183 2.303 3.608-4.283 4.279-8.7 8.24-13.769 12.091-4.2-3.051-7.512-6.349-11.338-8.867-12.36-8.136-22.893-18.27-32.841-29.093l16.646-13.805z"></path><path fill="#ed656e" d="M1214.597 347.955c10.303 10.775 20.836 20.908 33.196 29.044 3.825 2.518 7.137 5.816 10.992 8.903-3.171 4.397-6.65 8.648-10.432 13.046-6.785-5.184-13.998-9.858-19.529-16.038-4.946-5.527-9.687-8.644-17.309-8.215-2.616.147-5.734-2.788-8.067-4.923-3.026-2.769-5.497-6.144-8.35-9.568 6.286-4.273 12.715-8.237 19.499-12.25z"></path></svg>
</p>
<p align="center">
<b>The crispy sentence embedding family from <a href="https://mixedbread.ai"><b>Mixedbread</b></a>.</b>
</p>
# mixedbread-ai/mxbai-embed-large-v1
Here, we provide several ways to produce sentence embeddings. Please note that you have to prepend the prompt `Represent this sentence for searching relevant passages: ` to the query if you want to use the model for retrieval; besides that, no prompt is needed. Our model also supports [Matryoshka Representation Learning and binary quantization](https://www.mixedbread.ai/blog/binary-mrl).
## Quickstart
### sentence-transformers
```
python -m pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
from sentence_transformers.quantization import quantize_embeddings
# 1. Specify preferred dimensions
dimensions = 512
# 2. Load the model
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1", truncate_dim=dimensions)
# The prompt used for query retrieval tasks:
# query_prompt = 'Represent this sentence for searching relevant passages: '
query = "A man is eating a piece of bread"
docs = [
"A man is eating food.",
"A man is eating pasta.",
"The girl is carrying a baby.",
"A man is riding a horse.",
]
# 3. Encode the query and documents
query_embedding = model.encode(query, prompt_name="query")
# Equivalent Alternatives:
# query_embedding = model.encode(query_prompt + query)
# query_embedding = model.encode(query, prompt=query_prompt)
docs_embeddings = model.encode(docs)
# Optional: Quantize the embeddings
binary_query_embedding = quantize_embeddings(query_embedding, precision="ubinary")
binary_docs_embeddings = quantize_embeddings(docs_embeddings, precision="ubinary")
similarities = cos_sim(query_embedding, docs_embeddings)
print('similarities:', similarities)
```
### Transformers

```python
from typing import Dict
import torch
import numpy as np
from transformers import AutoModel, AutoTokenizer
from sentence_transformers.util import cos_sim
# For retrieval you need to pass this prompt. Please find out more in our blog post.
def transform_query(query: str) -> str:
""" For retrieval, add the prompt for query (not for documents).
"""
return f'Represent this sentence for searching relevant passages: {query}'
# The model works really well with cls pooling (default) but also with mean pooling.
def pooling(outputs: torch.Tensor, inputs: Dict, strategy: str = 'cls') -> np.ndarray:
if strategy == 'cls':
outputs = outputs[:, 0]
elif strategy == 'mean':
outputs = torch.sum(
outputs * inputs["attention_mask"][:, :, None], dim=1) / torch.sum(inputs["attention_mask"], dim=1, keepdim=True)
else:
raise NotImplementedError
return outputs.detach().cpu().numpy()
# 1. load model
model_id = 'mixedbread-ai/mxbai-embed-large-v1'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).cuda()
docs = [
transform_query('A man is eating a piece of bread'),
"A man is eating food.",
"A man is eating pasta.",
"The girl is carrying a baby.",
"A man is riding a horse.",
]
# 2. encode
inputs = tokenizer(docs, padding=True, return_tensors='pt')
for k, v in inputs.items():
inputs[k] = v.cuda()
outputs = model(**inputs).last_hidden_state
embeddings = pooling(outputs, inputs, 'cls')
similarities = cos_sim(embeddings[0], embeddings[1:])
print('similarities:', similarities)
```
### Transformers.js
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```
npm i @xenova/transformers
```
You can then use the model to compute embeddings like this:
```javascript
import { pipeline, cos_sim } from '@xenova/transformers';
// Create a feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'mixedbread-ai/mxbai-embed-large-v1', {
quantized: false, // Comment out this line to use the quantized version
});
// Generate sentence embeddings
const docs = [
'Represent this sentence for searching relevant passages: A man is eating a piece of bread',
'A man is eating food.',
'A man is eating pasta.',
'The girl is carrying a baby.',
'A man is riding a horse.',
]
const output = await extractor(docs, { pooling: 'cls' });
// Compute similarity scores
const [source_embeddings, ...document_embeddings ] = output.tolist();
const similarities = document_embeddings.map(x => cos_sim(source_embeddings, x));
console.log(similarities); // [0.7919578577247139, 0.6369278664248345, 0.16512018371357193, 0.3620778366720027]
```
### Using API
You can use the model via our API as follows:
```python
from mixedbread_ai.client import MixedbreadAI, EncodingFormat
from sklearn.metrics.pairwise import cosine_similarity
import os
mxbai = MixedbreadAI(api_key="{MIXEDBREAD_API_KEY}")
english_sentences = [
'What is the capital of Australia?',
'Canberra is the capital of Australia.'
]
res = mxbai.embeddings(
input=english_sentences,
model="mixedbread-ai/mxbai-embed-large-v1",
normalized=True,
encoding_format=[EncodingFormat.FLOAT, EncodingFormat.UBINARY, EncodingFormat.INT_8],
dimensions=512
)
encoded_embeddings = res.data[0].embedding
print(res.dimensions, encoded_embeddings.ubinary, encoded_embeddings.float_, encoded_embeddings.int_8)
```
The API comes with native int8 and binary quantization support! Check out the [docs](https://mixedbread.ai/docs) for more information.
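As a small follow-up to the snippet above, here is a hedged sketch (not part of the original example) showing how the `cosine_similarity` import could be applied to the float embeddings returned by the API; the attribute access follows the `res.data[i].embedding.float_` pattern used above.

```python
# Hypothetical continuation of the API example above (a sketch, assuming the same `res` object):
# compare the two float embeddings with scikit-learn's cosine_similarity.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

float_embeddings = np.asarray([item.embedding.float_ for item in res.data])
# Similarity between the question and the answer sentence.
print(cosine_similarity(float_embeddings[:1], float_embeddings[1:]))
```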
## Evaluation
As of March 2024, our model achieves SOTA performance for BERT-large sized models on the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard). It outperforms commercial models like OpenAI's text-embedding-3-large and matches the performance of models 20x its size, such as [echo-mistral-7b](https://huggingface.co/jspringer/echo-mistral-7b-instruct-lasttoken). Our model was trained with no overlap with the MTEB data, which indicates that it generalizes well across several domains, tasks, and text lengths. We are aware of some limitations with this model, which will be addressed in v2.
| Model | Avg (56 datasets) | Classification (12 datasets) | Clustering (11 datasets) | PairClassification (3 datasets) | Reranking (4 datasets) | Retrieval (15 datasets) | STS (10 datasets) | Summarization (1 dataset) |
| --------------------------------------------------------------------------------------------- | ----------------- | ---------------------------- | ------------------------ | ------------------------------- | ---------------------- | ----------------------- | ----------------- | ------------------------- |
| **mxbai-embed-large-v1** | **64.68** | 75.64 | 46.71 | 87.2 | 60.11 | 54.39 | 85.00 | 32.71 |
| [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 64.23 | 75.97 | 46.08 | 87.12 | 60.03 | 54.29 | 83.11 | 31.61 |
| [mxbai-embed-2d-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-2d-large-v1) | 63.25 | 74.14 | 46.07 | 85.89 | 58.94 | 51.42 | 84.9 | 31.55 |
| [nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1) | 62.39 | 74.12 | 43.91 | 85.15 | 55.69 | 52.81 | 82.06 | 30.08 |
| [jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) | 60.38 | 73.45 | 41.73 | 85.38 | 56.98 | 47.87 | 80.7 | 31.6 |
| *Proprietary Models* | | | | | | | | |
| [OpenAI text-embedding-3-large](https://openai.com/blog/new-embedding-models-and-api-updates) | 64.58 | 75.45 | 49.01 | 85.72 | 59.16 | 55.44 | 81.73 | 29.92 |
| [Cohere embed-english-v3.0](https://txt.cohere.com/introducing-embed-v3/) | 64.47 | 76.49 | 47.43 | 85.84 | 58.01 | 55.00 | 82.62 | 30.18 |
| [OpenAI text-embedding-ada-002](https://openai.com/blog/new-and-improved-embedding-model) | 60.99 | 70.93 | 45.90 | 84.89 | 56.32 | 49.25 | 80.97 | 30.80 |
Please find more information in our [blog post](https://mixedbread.ai/blog/mxbai-embed-large-v1).
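If you want to reproduce individual numbers yourself, the snippet below is a minimal sketch (not from the original card) using the open-source `mteb` package; the chosen task is only an illustrative example.

```python
# A minimal sketch for running a single MTEB task against the model.
# Assumes `pip install mteb sentence-transformers`; the task name is just an example.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")
evaluation = MTEB(tasks=["STSBenchmark"])
results = evaluation.run(model, output_folder="mteb_results")
print(results)
```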
## Matryoshka and Binary Quantization
Embeddings in their commonly used form (float arrays) have a high memory footprint when used at scale. Two approaches to solve this problem are Matryoshka Representation Learning (MRL) and (Binary) Quantization. While MRL reduces the number of dimensions of an embedding, binary quantization transforms the value of each dimension from a float32 into a lower precision (int8 or even binary). <b> The model supports both approaches! </b>
You can also take it one step further, and combine both MRL and quantization. This combination of binary quantization and MRL allows you to reduce the memory usage of your embeddings significantly. This leads to much lower costs when using a vector database in particular. You can read more about the technology and its advantages in our [blog post](https://www.mixedbread.ai/blog/binary-mrl).
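To make the combination concrete, here is a hedged sketch (assuming the `truncate_dim` argument and the `quantize_embeddings` helper shown earlier in this card) of applying MRL truncation and binary quantization together:

```python
# A sketch combining MRL (dimension truncation) with binary quantization.
from sentence_transformers import SentenceTransformer
from sentence_transformers.quantization import quantize_embeddings

# 1. MRL: keep only the first 512 of the 1024 dimensions.
model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1", truncate_dim=512)
embeddings = model.encode([
    "A man is eating a piece of bread",
    "A man is riding a horse.",
])

# 2. Binary quantization: pack each remaining dimension into a single bit (uint8-packed).
binary_embeddings = quantize_embeddings(embeddings, precision="ubinary")
print(embeddings.shape, binary_embeddings.shape)  # e.g. (2, 512) and (2, 64)
```

Packed this way, each embedding occupies 64 bytes instead of the roughly 4 KB needed for the full 1024-dimensional float32 vector, which is where the large storage and vector-database savings come from.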
## Community
Please join our [Discord Community](https://discord.gg/jDfMHzAVfU) and share your feedback and thoughts! We are here to help and also always happy to chat.
## License
Apache 2.0
## Citation
```bibtex
@online{emb2024mxbai,
title={Open Source Strikes Bread - New Fluffy Embeddings Model},
author={Sean Lee and Aamir Shakir and Darius Koenig and Julius Lipp},
year={2024},
url={https://www.mixedbread.ai/blog/mxbai-embed-large-v1},
}
@article{li2023angle,
title={AnglE-optimized Text Embeddings},
author={Li, Xianming and Li, Jing},
journal={arXiv preprint arXiv:2309.12871},
year={2023}
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
automated-analytics/setfit-paraphrase-mpnet | automated-analytics | text-classification | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | 2025-03-04T10:21:20 | 2025-03-04T10:21:40 | 59 | 0 | ---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- metric
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'client: Thank you for [NAME], partners with [ORGANIZATION]. Please listen
to the options carefully, secure the right team. We record calls for quality and
training purposes. Please enter your account number using your telephone keypad,
ignoring any letters, then press the hash key. Just use your telephone keypad
to enter your account number, then press the hash key.
customer: I''m trying.
client: This may start with it. Okay, just a moment whilst I look that up. I''m
afraid that account number doesn''t match our records. Please try again. Okay,
just a moment whilst I put that up. Please enter your phone number using the telephone
teapad.
customer: Thank you.
client: I''m sorry, I didn''t understand. Try entering your phone number. I''m
sorry, I still didn''t understand. Please by entering it again.'
- text: 'client: Thank you for calling Dino Plumbing. Please note all our calls are
recorded for monitoring and training passes. Press one for an update on parts.
Press two for an update on your appointment. Press three to discuss an ongoing
complaint. Press four for sales. Thank you for calling Dino Plumbing. Please note
all our calls are recorded for monitoring and training purposes. Press one for
an update on parts. Press two for an update on your appointment. Press three to
discuss an ongoing complaint and press four for sales. Thank you for calling dino
plumbing.
customer: and get some rolls.
client: Your call is very important to us. Your call is very important to us.
All of our agents are currently busy assisting other customers. Please continue
to hold and the next available agent will take your call. Alternatively, you can
email. Good afternoon if you''re starting and I can help.
customer: Oh, hello. I called yesterday regarding a plumbing issue, a drainage
issue at one of our properties. He took all of my details and said that we were
going to call us back, but I never got a call back.
client: I''m not sure postcode or have a little look for you to apologize.
customer: The post code of the property, NW6, NW6, 1 UJ.
client: Yes, please. No problem. And it''s the door number 46? 48, okay.
customer: 48, which in [LOCATION]. But maybe, I can''t remember if it was a charge
of [LOCATION]. Yes, yes, yes, yes, yes.
client: It was for a chargeable job, wasn''t it?
customer: All right, lovely, thank you. Yes, it''s a charge call job, yes, yes.
client: No problem. Let me just see if a gentleman''s available. Give me two seconds.
We''ll go straight through if it does connect. Thank you.
customer: I can''t do everything? It''s [LOCATION], hi [NAME]. [NAME]. Can I get
to keep your cautious on the other line? He has [NAME]. Yes, no problem. Okay.
Haven''t a plan for what? Okay, but I need it over to me. There''s no point...
Yeah, and one about the electric, because the bills... Yes, it''s a gas and the
electric payment. Just a gas, all about the electric. Because the bill was not
correct. [NAME], okay. I''ve emailed it to her. All right, fine, yeah, perfect.
Once you''ve got all that just emailed across the street. Okay, thank you. They''re
going to get such a shock. Oh dear. All that broken furniture, they threw it in
the bushes. They bought me the cotton. I swear to God, when the landlord was going
through it, yeah? She found it. I wasn''t, I''m not going through the bushes.
Not at all. Sort of that. Wouldn''t even go through my own bushes. Let''s learn
someone else''s. Please speak companies, I''m telling you, they''re just about
the place. They just put me on the whole now. I didn''t even know. I''m using
your butter, is that all right? Can''t you? Do you be joint tenants. Timeer, you''d
be joint tenants. I just need to eat something for me. You''d be joint tenants.
You''d be joint tenants not living together unless you are a couple.
client: I do apologize about the way I''m just trying to get through to them I
just speak to somebody who''s in the main office and they said they''ve just popped
out for their lunch and so they''ll be back within the next hour I can give him
a call personally if you want and just let him know that you know you didn''t
receive a call yesterday so it has to be today as soon as possible.
customer: Yeah, I mean the trouble is they told me that yesterday and they didn''t
call back. This is my issue is that, this is my issue that you know, are they
going to call back today?
client: Yeah, no, I can''t understand. I will have a word for him myself, and
I have them just let the people in the office know if they could just let me know
when he''s in there and I can give him a call personally and just let him know.
I know it has been very busy, but I will make sure I''ll just say if you can give
her a call up right now, because she has been waiting.
customer: Yeah, it really is urgent.
client: And he should have, you know, been able to investigate, whatever he had
to do.
customer: Yeah. Yeah, it really is urgent. Yeah. Okay, all right then.
client: I''m really sorry about the way.
customer: Okay, all right then. Okay, all right then, okay. All right then I will
wait.
client: Thank you so much.
customer: All right, okay. All right, then I will wait. All right, okay. Should
I, if I don''t hear anything, what, within an hour, should I give a callback or
not?
client: Yeah that''s completely fine and they said he will be back within the
hour so if it isn''t just after at all then you can always call back but it would
definitely be before five I''ll definitely make sure he calls you myself.
customer: Yeah, okay.
client: It''s no worries.
customer: All right, lovely. Appreciate it.
client: Thank you so much.
customer: Thank you. All right. All right, thanks.'
- text: 'client: Thank you for calling [NAME], partners with [ORGANIZATION]. Please
listen to the options carefully so we can get you to the right team. If you need
any help with a drainage problem, press one. If you require our plumbing services,
press two. Or if you have an existing home care policy, it''s fee. Did you know
that you can now book plumbing services online in a few easy clicks? Visit dino.com
for details. Or heavily please hold and your call will be transferred to a customer
service advisor. We record calls for quality and training purposes.
customer: I did. I spoke to one of your engineers recently due to and being a
family member and I was just saying he said to bring off and see if he''s with
him taking anyone else on. A promen.
client: Is it plumbing or drainage? I''m going to say the postcode where you are.
customer: 8th, [LOCATION], 5JG.
client: Yep. What I''ll do is I''ll put you through to the plumbers that cover
that postcode and they can, you know, you
customer: Yeah, that''s great. Thank you so much. Oh.
client: [NAME], your local drains and plumbing experts. Drainage inquiries, press
1. For plumbing inquiries, press 2. If you are a nosly business owner, press 3,
and for finance, press 4. Thank you for calling [NAME], your local drains and
plumbing experts, for drain inquiries, press 1, for [ORGANIZATION], press 2. If
you are a nosly business owner, press 3, and for finance, press 4. Your call is
very important to us. Please reach the next available agent. Hello, you''re through
the next available agent.
customer: I''ve recently spoke to a family member who works for yourselves and
he said to bring up to you to take him anyone else on. It''s [NAME]. It''s [NAME].
He said to bring up to you to take him anyone else on. It''s [NAME].
client: and who''s your family member?
customer: It''s [NAME].
client: Oh no! No, we''re not having another one of him here. Right, okay, hang
on.
customer: Yeah, I have I have.
client: Have you got a C visa, though? Can you email it to [NAME]? Yeah? Yeah,
[NAME].
customer: Yeah, of course, have you got it, please.
client: Yeah. Yeah, [NAME].
customer: Thank you very much. [NAME] [LOCATION].
client: Yeah.
customer: I just felt sorry.
client: G-I-L-L-I-E-S-P-I-E.
customer: G-I-L-I-S-G-I-L-I-E-S. G-G-I-L-I-E-S.
client: Why?
customer: Yeah. Yeah.
client: What? Yeah, that''s what I''m supposed for? Right. I-N-G-I-L-L.
customer: Yeah.
client: Yeah, E-S-P-I-E-P-I-E-E-P-I-T, right if you got that? Yes, P-I-E, yeah.
Can have my job if you want?
customer: Right, I''ve got that now.
client: And it''s at [LOCATION]. Yeah, M-A-C-E. M-E. Doco. [LOCATION]. What''s
your name? Sorry? Elias, all right, no bother. Bye, darling. Bye, bye, bye.'
- text: 'client: Good morning, [NAME]. How can I help?
customer: Oh, hi yeah, is that [NAME]?
client: It is indeed. Right, potentially.
customer: Yeah, I had an engineer out yesterday who kindly put a new central heating
pump in for me. But he''s left a tool here. Looks like I''m speaking as a non-tall
person. Look like mole grips, big pair of yellow mole grips.
client: Right, potentially what the postcode there?
customer: I just realized you''d left them on the table after he''s gone, so.
client: That''s fine. 22 orchard close.
customer: That''s the one, yes.
client: Excellent, and that would be you.
customer: I just thought if someone''s passing at any point, you could pick them
up, but I, yeah.
client: That would be you in. Now that''s fine. What I do is I''ll speak to my
plumbing department and let them know, and I will get them to get somebody back
there to pick them up for you.
customer: Yeah, whenever, yeah, I''ll be around, but I just thought I''ve let
you know.
client: I''ll get them to get somebody back there to pick them up for you. I''ll
get them to get somebody back there to pick them up for you. I''ll get them to
customer: Lovely. Okay. Thank you. Bye.
client: No worries to talk. Thanks now. Thanks. Bye bye.'
- text: 'client: Thank you for calling [NAME], partners with [ORGANIZATION]. Please
listen to the options carefully, so we can get you to the right team. If you need
any help with a drainage problem, press one.
customer: You''re
client: If you require our plumbing service, press two. Did you know that you
can now book drainage services online in a few easy clicks, visit Dino. For details.
Alternatively, please hold and your call will be transferred to a customer service
advisor. We record calls for quality and training purposes.
customer: Hi, good morning. I''ve just had a survey done on a property that I
want to buy and they recommend that I have a [ORGANIZATION] drainage report and
I was just wondering do you cover [LOCATION]?
client: Yes, we do cover everywhere with [ORGANIZATION] surveys, yeah.
customer: Okay and how much is the cost of a [ORGANIZATION] camera drainage report?
client: So with the [ORGANIZATION] surveys, we don''t actually book them here
at the core centre, they''re booked directly with our local offices because it''s
the equipment that needs booking, not the person.
customer: Okay.
client: We don''t get visibility of the equipment, I''m afraid. And then, so I''ve
never booked one, so I don''t know the price. I do believe, I''m pretty sure a
customer told me they''d had one done and it was around 250 pounds.
customer: Okay, all right then. So I mean what do we do we book it with a local
office then?
client: Yeah so I could certainly give you their phone number but just to let
you know they''re not open at the weekends they won''t be open until tomorrow
now. Yeah, it''s [LOCATION] you say.
customer: I think so long. Yes it is, yeah.
client: Yeah, let me find the number for you.
customer: But then group. Okay. How quick is it done? Okay.
client: Again I couldn''t tell you I''m afraid because I''ve never booked one
like I say and the actual process of doing it again I''ve never been out with
an engineer so I''ve never seen it done I know that each office only has basically
the equipment uses it''s like millions of pounds so each office only has one basically
that''s why it''s the equipment that needs booking you see but yeah let me give
you their phone number have you got a pen
customer: Okay. Have you. Have you? Yeah. Yeah. Yeah. Yeah. Great. Great. Great.
Great.
client: Yeah, it''s 0151 545 0913. Yeah, that''s all right, no problem.
customer: Great. Great. Great. Great.
client: Take care. Bye.
customer: Great. Great. Great. Great. Lovely. Thank you very much for help. Thanks
a lot.'
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: metric
value: 0.5307449553507265
name: Metric
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
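For reference, a minimal inference sketch (assuming the `setfit` package is installed; the example input is hypothetical and not taken from the training data):

```python
# A minimal sketch for loading this SetFit model and classifying a transcript.
from setfit import SetFitModel

model = SetFitModel.from_pretrained("automated-analytics/setfit-paraphrase-mpnet")

# Hypothetical input: a short client/customer transcript in the same format as the widget examples.
preds = model.predict([
    "client: Good morning, how can I help?\ncustomer: I'd like to cancel tomorrow's appointment, please.",
])
print(preds)  # one of the 11 class labels
```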
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 11 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 1 | <ul><li>"client: Thank you for calling [NAME], partners with [ORGANIZATION].\ncustomer: I can't find it. I didn't mind where I'm a little bit better than what I'm going to get there.\nclient: Please listen to the options carefully so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing services, press two. Or if you have an existing home care policy, it's three. Did you know that you can now book plumbing services online in a few easy clicks? Visit dino.com for details. Alternatively, please hold and your call will be transferred to a customer service advisor.\ncustomer: Hi, this is Mr. [NAME].\nclient: We record calls for quality and training purposes.\ncustomer: I just want to cancel for tomorrow's appointment. I'm sorry. I'm sorry. I just want to cancel for tomorrow's appointment.\nclient: Yep, have you made it directly with all the through British-class home care?\ncustomer: I'm sorry.\nclient: Have you made the appointment directly with also through British-class home care?\ncustomer: Direct with you guys.\nclient: Yep. What's the postcode?\ncustomer: 8-A-J.\nclient: Yep. [NAME] What's the first line of the address?\ncustomer: Hello? Yeah. Number 11 Austin Avenue. Ah, Mr. [NAME]. Yeah. Yeah, that's right, yeah. Yeah.\nclient: And what's your name please?\ncustomer: Yeah, that's right, yeah. Yeah, yes.\nclient: Okay, so, is the appointment for tomorrow?\ncustomer: Yes, yes, yes, yes, please.\nclient: Is that right? Between 8 and 6?\ncustomer: Okay. Thank you very much. Bye.\nclient: Yeah, and that's your point we want to cancel, yep? Yeah, okay, no problem, I'll get that canceled for you now. Thank you for letting us know. Thanks, bye."</li><li>"client: Thank you calling [NAME], partners with [ORGANIZATION].\ncustomer: Yeah.\nclient: Please listen to the options carefully, so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing service. Did you know that you can now book drainage services online in a few easy clicks, visit [ORGANIZATION] for details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes.\ncustomer: Hi, I've got an appointment book tomorrow. I'd like to cancel, please. Chargeable. B-A-2. B-A-2. B-A-2. Yeah. 4-QG.\nclient: Sorry, the line's not too clear. Yep. Any address, please? And what names it? No problem, I get it cancelled for you.\ncustomer: under [NAME]. Great, thanks very much.\nclient: You're welcome. I know.\ncustomer: Right, bye."</li><li>"client: Thank you for calling [NAME].\ncustomer: Yet it's [ORGANIZATION], and 3-1-E-X.\nclient: You're about to be put through to our emergency line. If this is not an emergency, please call back between our usual opening hours of 8 a.m. and 6 p.m. Monday to Friday. Thank you. Thank you for calling [NAME]. Partners with [ORGANIZATION]. Please listen to the options carefully so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing services, press two. 
Or if you have an existing home care policy, it's three.\ncustomer: I'll see.\nclient: I'm sorry I didn't get anything if you need any help with a drainage problem press one if you require our plumbing services press did you know that you can now book drainage services online in a few easy clicks visit dino dot com for details alternatively please hold and your call will be transferred to a customer service advisor\ncustomer: I was a book for somebody to come out of there.\nclient: Morning, [NAME], speaking to [NAME]. How can I help? Yeah. Okay, no problem. What's the post code, please?\ncustomer: I was a book for somebody to come out to my home address but I need to cancel it because we've sorted it.\nclient: Yeah. Okay. No problem. What's the post code, please?\ncustomer: It's [POSTCODE].\nclient: Yeah. First line of the address. And your name, please.\ncustomer: It's [NAME].\nclient: Right, I'll get it cancelled, thanks for letting us know, though.\ncustomer: All right, thank you.\nclient: You have a good day.\ncustomer: Bye.\nclient: Okay, no worries. But now, right way."</li></ul> |
| 3 | <ul><li>"client: Thanks for calling Dynaflamey. Please note all calls are recorded for training and monitoring purposes. Please hold while we connect you to a member of our team. We'll be with you in just a moment. Good afternoon, [NAME] the word speaking. How can I help?\ncustomer: Thank you. This is for [ORGANIZATION] train test. The location is on is a consultant SM5, 3-2N. Is this available for this area?\nclient: What is it you require us to do?\ncustomer: Sorry.\nclient: Unfortunately, we don't undertake that kind of work.\ncustomer: Oh, for doing a [ORGANIZATION] train test.\nclient: You need Dynobod.\ncustomer: for the property number, right?\nclient: Can I give you the number for them?\ncustomer: The property number, right?\nclient: So we are Dynoplumbing, we don't deal with drainage.\ncustomer: Okay, uh, okay, uh, okay, a moment, right?\nclient: So I can give you the drainage company's number, okay? So you need to call 0.333.\ncustomer: Mm-hmm. Okay, a moment, please write down.\nclient: Yep, 242, 2 178. Yes, all right? I don't know. I don't know.\ncustomer: 0-333-333. 4-2-2-178 2-178. 2-178. Okay, 0-3-3. Okay, so may I make an appointment through the website or... Okay, so, may I, uh, make an appointment through the website or...\nclient: You'd have to call them, unfortunately. I don't know. No, you'd have to call them, unfortunately. Okay, thank you. No, no, you'd have to call them, unfortunately. Okay. Thank you. No, no, no, you'd have to call them, unfortunately. Okay. Thank you. No, no problem.\ncustomer: Thank you. Bye."</li><li>"client: Thank you for calling [NAME], partner [NAME].\ncustomer: carried out gases detection on winter and water supply pipe between internal and external stopcock.\nclient: Please listen to the options carefully, so we can get you to the right team. If you need any help with a draining problem, press one.\ncustomer: Well they detected it.\nclient: If you require our plumbing services, press two. Or if you have an existing home care policy, it's three. I'm sorry I didn't get anything if you need any help with a drainage problem press one if you require our plumbing services press two or if you have an existing home care policy it's three did you know that you can now book plus services online in a few easy clicks visit dino dot com for details alternatively please hold and your call will be transferred to a customer service advisor we record calls for quality and training purposes purposes\ncustomer: Okay.\nclient: Dino with one of the [LOCATION]'s leading plumbing and drain services. From blocked drains to burst pipes and busted boilers, we're here to help.\ncustomer: Hello [NAME]. I had one of your guys out today to look for a water leak on my drive. It was with the gas down the line and he confirmed there was a water leak that [ORGANIZATION] have told us but he couldn't find it and I'm sort of just stuck down. I've paid 552 pounds and I'm known none the wiser as to when he came out. I know there's a water leak before we got here I know there's a water leak now but I don't know where it is so I can fix it. Yes I've got a log number here 438585838\nclient: Okay and it was one of our engineers that came out you say. Yeah, can I take that from you, please? Yeah. Okay, thank you. And can I take the first one of your address and postcode, please?\ncustomer: [NAME].\nclient: Thank you. Can I just take your name as well, please? Thank you. Okay, so I'm going to bring the details on the job that was done. Lead detection. 
Okay, so the description I've got on here states that the engineer has been out and he's done what he could do and he's quoted on re-running the water supply to the property and fitting a new stockcock.\ncustomer: Yeah, but that's not telling me where the leak is because I could spend all that money and that may not be a problem. You said you were going to detect the leak and you haven't.\nclient: Okay.\ncustomer: You've told me exactly what [NAME] have already told me I have a leak, but you haven't detected it where it is. I've paid 552 pounds for what.\nclient: Okay, so can I just ask you then, what is it that you'd like us to do?\ncustomer: I find the leak you said there's a league why can't you find it I know I know it's not I understand that completely I understand it's not you personally you the company\nclient: Okay, so I'm just going off of, it's not me personally, I'm just off the engineer's note. Okay, so I'm, again, all I can do is go off the engineer. Okay, if I can explain, I'm just looking at the engineer's note. and the engineer has mentioned on it as I read through to you that he's carried out a gas leak detection on the water supply but he was not able to find the leak successfully. So he's done the tools that he had and the equipment that he had to look for that leak. Ultimately he's not been able to find it.\ncustomer: Right.\nclient: So we cannot find that leak so what he's advised is we can't find the leak so what we can do is we can offer you another service where we can quote you for rerunning the water supply but we are not able to find that leak so you paid the money for the work that we've done.\ncustomer: Yeah. Yeah, but you haven't. Yeah. Yeah, but you haven't. Yeah, but you haven't detected the leak.\nclient: No and again I reiterate that's what the engineer was mentioned we've done the new detection but we were unable to find the leak so you know you paid that money for that service that we have provided.\ncustomer: Right, okay. But running in a new supply pipe does not guarantee that that will sort out the problem. That's what I'm saying. Yeah.\nclient: I'm not an engineer so I can't advise that but I can only go off what I can tell the system I wasn't aware of the situation until you called a minute ago so I'm looking at the details and I'm seeing what the engineer is mentioned about the caller that you have today.\ncustomer: Yeah. Okay. Okay, that's fine, I knew you were going to say this. I just think it's disingenuous that you, you quote that you will find the leak and yes you told me there's a leak, but you know, I knew there was a leak, but you've not helped me in any way to find the leak. So, but I understand your company policy and thank you for explaining it to me.\nclient: No worse."</li><li>"client: Welcome to Dino. All calls are recorded for training and monitoring purposes.\ncustomer: I'm on your website looking at door handles for outside doors and try to replace a couple when I've worn out done tacky looking so they have to be sort of the same one or very close so they don't have to start destroying doors. The thing is that most of them seem to be 48 middle on your site but there's a lot of dead length but I need to be about 210 high and\nclient: Sorry to interrupt. We are dino plumbing. I'm assuming you've got the wrong number if you're talking about doors.\ncustomer: door handles, yeah.\nclient: Yeah.\ncustomer: It's coming up this handle world.\nclient: Where are dino plumbing in lead?\ncustomer: Oh, right. 
Right, I don't know how I did that, thank you much. Mr. [NAME]. Thank you for your time. Sorry. I got ourselves. Bye."</li></ul> |
| 9 | <ul><li>"client: Thank you for calling [NAME], partners with [ORGANIZATION]. Please listen to the options carefully, so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing services, press two.\ncustomer: So.\nclient: Or if you have an existing home care policy, it's three. Did you know that you can now book drainage services online in a few easy clicks? Visit dino.com for details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes.\ncustomer: Hi, I'm ringing up. We've had you out to unblock a dodgy kitchen drain in the past, and it's decided to block again.\nclient: Yeah.\ncustomer: But there's one guy that has a very long rod, and that's the only one that will unblock our sink because of the angles of our sink. We'd like him to come out again, but there's no point someone else or him coming out unless he's got the long rods on his van. CB-22, 6RQ. Okay? Yeah.\nclient: Yeah, right. What I'll do, let me give you the direct number for the office where the engineers work from in your area, and you're probably best speaking to them directly. They're not open to eight, mind. They're on 01536.\ncustomer: Oh, hang on, let me just write this down. Oops. Up. Oh, not the right bit to write it on. Oh, 1, 536, 402, 371. 1, 536, 4,000, 3,71. 1, 536, 4,000, 3,71. 1, 1, 536, 4, 02371. Okay.\nclient: Yep, 402, 371. And they're open on 8. And they're open on 8. And that's it's okay.\ncustomer: Okay, lovely. Thanks for your help.\nclient: Right, nobody's, take care of 5.\ncustomer: All right, bye."</li><li>"client: Thank you for calling [NAME] partners with [ORGANIZATION]. Please listen to the options carefully so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing services, press two. Did you know that you can now book drainage services online in a few easy clicks, visit [ORGANIZATION] for details. Alternatively, please hold and your call will be transferred to a customer service advisor. we record calls for quality and training purposes. Dino with one of the [LOCATION]'s leading plumbing and drain services. From blocked drains to burst pipes and busted boilers we're here to help. Our local teams of expert engineers are available across the [LOCATION] 24-7, 365 days a year.\ncustomer: You know I got a toilet blockage on the ground floor and you know when you flush you know I can see you know the ground floor toilet and the kitchen is nearby and if I look out through the kitchen window there is a manual cover and sometimes you know when I'm flashing I can see the waste is coming out so\nclient: Right, okay.\ncustomer: So you have a because I want to talk to someone who who says okay there is a problem you need a camera or you can jet jet pressure is and clear it so I want to somebody who can speak to me and you know give details do you have some technical person available now?\nclient: I can transfer you to the local office to speak to somebody else but what happens with Diner as much as you know there's no call-out charge basically the engineer would attend and assess the work and then he would speak to you about what the problem is and give you a price for fixing it so I could either book you in a job or I can transfer you to the local office if you want to speak to them a bit more about it for\ncustomer: Okay, what is the local office number? 
Yeah, it is a local office number.\nclient: Could I take your post code please and I'll find it out for you?\ncustomer: Yeah, it is early two.\nclient: [NAME], can you say that again, your post code? [NAME] to say that again, your post code?\ncustomer: L-E-26-S-A. Yeah. Yeah. One second, one second.\nclient: [NAME] 2, 6SA. Okay.\ncustomer: Oh, one second.\nclient: Okay, right, I've got the number here if you're ready. Yeah, it's 01536, 36, 46, 402, 402, 402, 402, 371, 371. Yep.\ncustomer: Oh, one second. Oh, one second. Oh, one, five. Yeah. Yeah. Yeah. Okay, that is 0153, 640, 2371.\nclient: Yeah, that's correct.\ncustomer: Okay, and, yeah, but this number doesn't seem to be lesser, because the number I called that is [NAME], oh, 116, the number that is [NAME], oh, 116, the number that you gave me is 0153.\nclient: If you just give them a ring, they'll be able to go through some more of the technical stuff with you if you want that information. I understand that, but it's a local office that's the closest to you where an engineer had come out.\ncustomer: Okay, okay, and okay, so, okay, you are doing all right. So I will speak to him and yeah, let me see what he says.\nclient: No worries.\ncustomer: Okay, ma, thank you.\nclient: Take care. Bye.\ncustomer: Bye bye."</li><li>"client: Thank you for calling [NAME] partners with [ORGANIZATION]. We record calls for quality and training purposes.\ncustomer: Oh, hi, my name's [NAME]. I'm calling from [ORGANIZATION], Out of Hours Repairs. I'm in desperate need of a plumber, please.\nclient: So are you a key account?\ncustomer: Yeah, we have an account.\nclient: So is it a key account?\ncustomer: Yes, it is. Key accounts at [LOCATION]. Credit [LOCATION] is the contact we've got.\nclient: Yeah, so if you email it over.\ncustomer: Yeah.\nclient: And have you made it over, I'll let them know it's on its way. What's the press code?\ncustomer: Postcode of site is Bravo 12 to [ORGANIZATION]. Is apartment, well it's an issue in the loft but it's affecting apartment 34.\nclient: Okay, yeah, be email it over then. There is someone on the key accounts and they'll pick that up straight away. Well, thank you.\ncustomer: Brilliant, thank you very much.\nclient: Bye."</li></ul> |
| 7 | <ul><li>"client: Thank you for calling [NAME], partners with [ORGANIZATION]. Please listen to the options carefully so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing services, press two. Or if you have existing home care policy, three. Alternatively please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes.\ncustomer: I'm trying to find out if you do a service we could show the camera down a loo to find out where a blockage is. I'm pretty sure it's lime scale but we want to know if it's in the pan or in the pipe work behind the pan before we decide to buy a new pan. And would you have a camera that you can put down that?\nclient: They do have cameras that they can have a look. So it's just trying to, you just want someone throw something down there at the moment of causing a blockage, is it?\ncustomer: Yeah, we think it's like a lime scale because there's a lot of a loop of lime scale in the actual pan itself.\nclient: Okay. Okay.\ncustomer: We have to keep checking that out. And the plumber said he's done quite a few around here. And he says the worst one he's had was where the lime scale bill took way to state where he couldn't even get a pencil for it.\nclient: Right, okay. So we can have somebody come out and have a look and see the issue is. Do you have any cover with [ORGANIZATION]? The right cover for plumbing and drains? Okay, and is this your judgment?\ncustomer: No.\nclient: Well, we have someone who can come out and try and look at a blockage.\ncustomer: How much would it be for, how much would it be to show the camera down the toilet?\nclient: So the way that it works is our price, we don't have any, we don't really charge for the coal house itself, however, our price to clear a blockage starts at a hundred and eighty pounds. which is inclusive of [ORGANIZATION]. So the engineer will come out. And that's the initial charge that we will make.\ncustomer: Yeah.\nclient: If there's anything more that the engineer thinks it needs to be done, then he will provide you with a quote for that. And you can decide if you want to go ahead. But if the engineer does any work, then our price, it's clear and a quote would be $180.\ncustomer: Okay, can I just leave it for now then and I'll see what I can do? Okay, thanks very much, huh? Cheers, bye."</li><li>"client: Good morning, Dino. How can I help?\ncustomer: Oh, hi there is Carrie calling from [ORGANIZATION]. I was wondering if I could arrange for a quotation to have a couple of drains unblock that run to a septic tank.\nclient: Couple comes on to run to a septic tank. Is the septic tank?\ncustomer: Yeah.\nclient: Is the septic tank empty or?\ncustomer: At the moment we're just waiting on a report back but we've got a feeling it's likely going to be full, potentially.\nclient: So what I would suggest is that tank is empty before we attend.\ncustomer: Yeah, okay.\nclient: The reason I say that obviously if all the drains are blocked and the tank is full, we're going to have nowhere to clear that blockage too.\ncustomer: Is that something you guys can do as well or?\nclient: No, unfortunately we don't empty septic tanks.\ncustomer: Okay, no that's fine. What I'll do, I'll just speak with my colleague because he might have had an update from the site anyway. 
And then if it is empty, then obviously we'll give you a call back just to kind of have\nclient: Yeah, I mean, is it a domestic property commercial property?\ncustomer: So it's one of our poultry farms that we've got down in [LOCATION]. So it's based in [LOCATION].\nclient: Where is [NAME]? Right, okay. That's fine. So the price is [ORGANIZATION]. Right, okay. That's fine. So the price is for us to do so.\ncustomer: No, I don't think it's too long distance. It's literally just the two that would just run in the run up to the septic tank. So I don't think it's a big strain, yeah.\nclient: massive. Okay, so prices for us to do so $175 for the first hour and 60 pounds per half are thereafter plus the [ORGANIZATION].\ncustomer: Yeah. Per half hour, yeah.\nclient: And then obviously we are payment on completion so as soon as it is done you will take payment from yourselves.\ncustomer: Yeah. That's not a problem at all. That's fine. Do you have like an email as well where I can request that on so we could get that as a quote on the system as well?\nclient: Yeah, if you send your inquiry to admin at [ORGANIZATION], which is B-O-W-Y-E-R, hyphen-D-R, hyphen drains, dot-com.\ncustomer: Yeah. Yeah. Yeah. Yeah. Okay, brilliant. I'll pop that over. I'll find out just the info on the tank itself and then we can go from there.\nclient: Yet no worries.\ncustomer: Great stuff.\nclient: Thanks.\ncustomer: Thank you very much.\nclient: Bye bye. Bye bye.\ncustomer: Take care. Bye."</li><li>"customer: Yeah, we're not.\nclient: Thank you for calling [NAME], partners with [ORGANIZATION]. Please listen to the options carefully, so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing. Did you know that you can now book drainage services online in a few easy clicks, visit [ORGANIZATION] for details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes.\ncustomer: It's me now. It's me now. I come in [LOCATION], [LOCATION], [LOCATION], [LOCATION]. Oh, hello, Dino. Would you be able to send somebody to clear my manhole? Oh, it's so blocked. It's insane. Actually, it's me and my neighbor have it. So, but it's overfly and I can't leave it until the morning. So I think it's time.\nclient: Overflow with sewage or just sewage.\ncustomer: It's like a sewage, yeah. I think it's my neighbors, it's not really mine, but it's on my garden.\nclient: Yeah.\ncustomer: Yeah, I've only got one toilet link to it and I think more is from his side, but it needs doing because... I don't know really who to call.\nclient: Right, okay. Is it a shared drain then? I'm assuming it is.\ncustomer: I was thinking this time I call you, come and clear it and then... will sort it out afterwards. So who should I call?\nclient: Well, the way we stand, I'll just explain. So as I say, the shared lines, the responsibility of the water board, they don't really let anyone else touch it. I mean, we can try and clear it from your side. However, it will be an attempt. It may not be successful and you'd get charged for the attempt, clear or not.\ncustomer: So who shall I call?\nclient: The water board, I can get you the number. What's the post code?\ncustomer: Is he Wessex Water? 
Ah.\nclient: Yeah, that'll be the one, yeah.\ncustomer: He isn't first.\nclient: that'll be the one if it's a shared line between multiple properties it's their responsibility yes that'll be the one they'll come out for free they'll stick some cameras down if the responsibility if the if the block is on the shared line they'll just clear it if it's on your side to all your neighbors side they'll tell whoever it is other than that we can try we can come out that's not an issue however\ncustomer: So this is Wessex Water sewage. How much is the cost?\nclient: As I say, we may or may not be successful when you get charged for the attempt, if it's a shared line. Today, on a Sunday, if we can get there before, so it depends when you get there. If we can get there before six, you'll be looking attempt 237, including that, after 6 286.\ncustomer: Oh God, okay. I'll try it with sex water then.\nclient: I mean what I'd do what I'd do is speak of the war support and then if the war support says right that's fine we'll be out within you know a time scale that you're happy with great if they say they're going to be out within 48 hours or something you don't want to wait then you might want to risk it no problems take care now\ncustomer: Okay, I'll try it with sex water then. Okay, all right, I'll do that then. Thank you."</li></ul> |
| 0 | <ul><li>"client: Thank you for calling [NAME], partners with [ORGANIZATION]. Please listen to the options carefully, so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing services, press one. Did you know that you can now book drainage services online in a few easy clicks, visit Dino. For details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes.\ncustomer: Hi [NAME], I was wondering you could help me. I'm at my mom and dad's house just now and it's their back garden is flooding with the rain and it was just to see if could have got some advice whether or not there's a blockage within their drain that's causing the water to the gathering. Okay.\nclient: Well, okay, I can book a job in to get some with the outs, that's all I can do. I can't sort of advise it.\ncustomer: Yeah.\nclient: It sounds like there is a book as if something's, is it just wastewater or is it toilet waste, it's overflowing?\ncustomer: Yeah. It's rain water. It's not disappearing. It's not disappearing down the drain.\nclient: [ORGANIZATION].\ncustomer: It's not disappearing down the drain.\nclient: Okay, no problem. So, is it, it's residential property, yes. And did they have any kind of account with Dino Road True, [ORGANIZATION]?\ncustomer: Oh, never thought about that. We'll do on a few minutes.\nclient: Well I can check it just takes a minute to check if you give me the post code yeah thank you and the first line of the address thank you and what would the name be if they have a contract on this property okay one minute right we've got\ncustomer: Okay, okay. Sure. G74, 3 years. [NAME] 1. 61. [NAME]. That's right, yeah, that's my mum.\nclient: Okay, one minute.\ncustomer: Thank you.\nclient: Now she's got home care, but she's only got home care one which doesn't cover drainage work. So I can book a new job in, but it would be payable.\ncustomer: Okay. Okay.\nclient: Now, is she aware of this that this would be a payable job?\ncustomer: How much is the call out and how is it? Okay. Okay. Okay. Okay. Can I just say that to my mom and dad.\nclient: Okay, basically there's no call-out to charge itself, but a standard unblocking of a drain is a fixed price of 180 pounds, that's 1-80, and that is inclusive of B-A-T.\ncustomer: Okay. Okay. Can I just go and say that to my mom and dad. Yeah. Okay. Okay. Can I just go and say that to my mom and dad. Yeah. Okay. Okay.\nclient: Yeah, not a problem.\ncustomer: Can I just go and say that to my mom and dad. Yeah. Okay. Thank you.\nclient: How are you yet?\ncustomer: Hello there. Yeah, they're agreeable to that, thank you.\nclient: Okay, no problem. But Diner Road have never been at this property before. And it's 61.\ncustomer: No.\nclient: So who would be paying the bill for this then? Sorry.\ncustomer: [NAME].\nclient: [NAME], right? OK. So just go for 61. Okay, I just need now a contact number that can either be, [NAME] or yourself, whatever's easiest. Yep.\ncustomer: Yep, I'll give you mine. I'm [NAME] the daughter. 0750, 668 38666. That's correct.\nclient: Yep.\ncustomer: That's correct.\nclient: Yeah. Okay, I'll repeat. O-750, double-6-838. And what's your name again, sorry? Right, is it possible to take an email address as well, [NAME]?\ncustomer: Sure, it's Muir, M-U-I-R-G with a G-7 at [ORGANIZATION].\nclient: Yep. Thank you. Right, one last question, [NAME]. 
How did you hear about [NAME] today?\ncustomer: Oh, just I've seen the vans about and obviously a way of what you do.\nclient: Right, so just to clarify now, you're saying it's the drains of overflowed outside, yeah, and it's just waste water, yeah.\ncustomer: It's just, well it looks like rain water in the backyard and it's just gathered and it won't drain under the, into the drain.\nclient: Okay, yeah, okay, but can they use the toilet and the bathroom facilities? Okay, one minute, I'll put the information in it, see what comes up. Okay, Julian, we can get some of the output. Unfortunately, the, the, um, got so many bookings on. The earliest available slot at the moment would be Thursday morning. Do we look at him for Thursday morning?\ncustomer: Okay.\nclient: So an engineer will call from the, um, his club, right, where he will call you, he'll be any time between 8 and 1 on Thursday the 5th.\ncustomer: Yeah, that's fine, that's great. Thank you.\nclient: Okay. Cheelian. Okay, thank you very much. Have a good time now.\ncustomer: Thanks very much. Thank you. Bye bye. Bye."</li><li>"customer: the rules that start to the public on the side of the house and come back into the cup.\nclient: Thank you for calling [NAME], partners with [ORGANIZATION]. Please listen to the options carefully, so we can get you to the right team. If you need any help with a drainage problem, press one.\ncustomer: I was the rest but she might only get it.\nclient: If you require our plumbing services, press two.\ncustomer: That was like that. That was the same as a problem.\nclient: Did you know that you can now book drainage services online in a few easy clicks, visit Dino. For details. Alternatively, please hold and your call will be transferred to a customer service advisor.\ncustomer: in [LOCATION]. Well, we're going to do so basically meant that he didn't leave the country until the same thing.\nclient: We record calls for quality and training purposes.\ncustomer: Well, we have two toilets. We have two toilets where the sewage from the toilet is coming up. Is this something that you could help us with, please?\nclient: Yeah, do you have a home care cover with [ORGANIZATION]?\ncustomer: A what?\nclient: A home care cover?\ncustomer: No, we don't.\nclient: Okay.\ncustomer: I thought you guys had died no roared.\nclient: Our price, yeah, we are, and we are working with [ORGANIZATION]. So the first question is, it is to find out if you are a British Gas customer. So our price for Sunday evening is $286, cash or card upon completion and the price includes the 80s.\ncustomer: Okay, that's fine.\nclient: Could you please provide me the pause call?\ncustomer: [NAME], 6 [NAME].\nclient: Yeah. He booked with us before in 2022 under the name Christian. Okay, it was last time under the billing account, property inside [LOCATION].\ncustomer: Yes, it is. Inside [LOCATION], yes it is.\nclient: Is it still the same? Okay, let bear me a second so I can extract first the date from here and transfer it to a new booking, okay?\ncustomer: It is still the same. Okay. Okay. Thank you. Okay, thank you.\nclient: This is still the same contact name with Christian.\ncustomer: You can call me, that's fine, but Christian is fine. 
Do you have his number ending with double zero?\nclient: Okay, yeah, I have here telephone number 07439300.\ncustomer: Yeah.\nclient: Okay, so it's blocked, drain, over 07439300.\ncustomer: Yes.\nclient: Okay, so it's blocked, drain overflowing inside of the property, yeah.\ncustomer: Yes.\nclient: Any COVID at the site. COVID love [ORGANIZATION].\ncustomer: COVID, no. Sorry.\nclient: No.\ncustomer: So, Christian is obviously, you know, we are the Latin agent managing this property, but to get into the flat, my tenant is there.\nclient: Yeah. Yeah. Okay. Okay.\ncustomer: Yeah? So we will make the payment to you.\nclient: Yeah, once the engineer will use a number we have in the system, the one ending with 300, unless you want to give another number.\ncustomer: Okay, no, no, that's fine.\nclient: In that number, we receive a job reference number and on that number, the engineer will call for updates and also to make the payment.\ncustomer: Okay, thank you very much.\nclient: So do I keep the same number?\ncustomer: Yes, please.\nclient: Okay. Okay.\ncustomer: Yeah.\nclient: Okay, yeah, the job has been confirmed, so you should receive that person will receive a message in their mobile number with a job reference number, followed by the call from the engineer before arriving at the property. Is there anything else to me to add?\ncustomer: No, but how long would the engineer take?\nclient: This will be tonight, a couple of hours, few hours. We don't know until there will not be a sign of, yeah.\ncustomer: Okay, okay, thank you very much. So up to two hours I can tell my tenants, yes? Okay, thank you. Okay, thank you."</li><li>"client: Thank you for calling [NAME], partners with [ORGANIZATION]. Please listen to the options carefully so we can get you to the right team.\ncustomer: Uh, that's not.\nclient: If you need any help with a drainage problem, press one. If you require our plumbing services, press two. Or if you have an existing home care policy, Did you know that you can now book drainage services online in a few easy clicks? Visit dino.com for details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes.\ncustomer: Hi, good morning. I wonder if I could get someone out to looking to a slower water cleaning issue. Well, yeah, go on, sorry.\nclient: Okay.\ncustomer: [NAME] is, yes. Sorry.\nclient: And is this your own premises?\ncustomer: So, eh? [NAME] is, yes.\nclient: Okay. And do you have landlord covered with British Gas or [NAME]?\ncustomer: Sorry. Do you have a landlord account?\nclient: Do you have a British Gas? Yeah, a landlord account.\ncustomer: No.\nclient: Okay. Can I set a postcode where the property is?\ncustomer: Yeah, E for [ORGANIZATION], 14, 14, 4 A for Alpha, A, for 30. D106, 14, Harpsmere Road.\nclient: and the first time of the address point.\ncustomer: That's right, yes.\nclient: And can it take your name, please? Okay. And they're the best phone number for you, please. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah.\ncustomer: 119. Okay. Okay.\nclient: Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah.\ncustomer: H4-A-4-Ders. [NAME].\nclient: Yeah. Yeah. Yeah. Yeah. Okay.\ncustomer: [ORGANIZATION], U4 Uniform.\nclient: Okay.\ncustomer: [NAME]. H-For Hotel, [NAME], [NAME], so that's [NAME], at [ORGANIZATION].\nclient: Okay. Okay. Okay. I'm. Okay. I can just ask how did you hear about down about it?\ncustomer: Okay. 
My plumber recommended here.\nclient: Yeah. Okay. Can I just ask how did you hear about\ncustomer: So the shower water is not draining well, so it just overflowing really quickly so it's out missing water as well.\nclient: Let me check they're available.\ncustomer: Okay, the painting in the property is it, has it just a mobile number, I don't know if it is what, just noting their number just in case.\nclient: Right.\ncustomer: But I guess we'll talk about that when you set up a permit, if there is anything I've got about fine. Just need to double chip with the tenant. How about Thursday? Is there anything available on Thursday, please?\nclient: Yeah, Thursday afternoon.\ncustomer: Roughly what time, please?\nclient: Between one and six?\ncustomer: Or is it like, one in six? Oh, okay. That should be okay.\nclient: Right.\ncustomer: I'll let him know about the timing.\nclient: Okay, so the engineers call you and they're on the way. They'll go around and do an inspection. If they're inspection on cart at three, so the guys can have a look, then they'll call you back just to tell you what the problem is, confirm the price with you and make sure that you're happy for them to do it.\ncustomer: Yeah, okay.\nclient: And if they do anything chargeable, it's just a cash-a-car payment to them when they're finished.\ncustomer: Okay, okay. Okay, okay. Would you be able to, would I be able to make it over the phone as if I won't be on the property?\nclient: Yeah. All right then.\ncustomer: All right, okay, perfect. Yeah, cool, absolutely. Sounds great.\nclient: That's all being booked now.\ncustomer: Yeah.\nclient: Yeah. All right, then. Yeah. All right, then.\ncustomer: So it's all sorted then, yeah.\nclient: That's all been booked now.\ncustomer: That's where you can't see.\nclient: Yeah. All right, then.\ncustomer: Thank you. I appreciate it.\nclient: That's a lot."</li></ul> |
| 6 | <ul><li>"Client: Thank you for calling Dino Rod partnered with British Gas. Please listen to the options carefully so we can get you to the right team. If you need any help with a drainage problem, press one. If you require further assistance, please note you can now book drainage services online in a few easy clicks. Visit dino.com for details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes. \nCustomer: Hi, can I\nClient: How may I help\nCustomer: Yeah. I'm calling from Acer Eye Clinic in [PII]. Um, I'm calling because we've got, we've had um, several issues with our public toilet that we've got here in store over the last couple of weeks. Um, but today it's completely blocked and the water is up to the rim of the toilet.\nClient: Right, ok. We should be able to have a look at that. Have we ever been out to you before or would this be the first time?\nCustomer: Not that I'm aware of.\nClient: Ok. Let's grab some details and um we'll get everything um set up. What's the postcode there?\nCustomer: it is [PII]\nClient: Yeah. Wait a moment just to set everything up in here. And also, I take it this is just something that will need to be paid for? Is it there's no like British gas business cover or anything as far as you know?\nCustomer: Yes, that's right.\nClient: Ok, no problem. That's fine. And sorry, what was the name of the business?\nCustomer: Aces Eye Clinic.\nClient: Yes, no problem. That's fine. And in terms of like, the sort of invoicing billing side of things, is it all under the same address? It doesn't need like a separate head office address or anything, as far as you know?\nCustomer: Um, let me just check that for you. Yeah, you can put the invoice through into us and then we will forward it on to the people who pay.\nClient: Right, ok. No worries at all. And Do you have a contact email at all?\nCustomer: Yes, you can put my email down if you like. Let me grab it. Will that just be for the appointment that's booked or...?\nClient: Yeah it's not marketing or anything, It's just for if any quotes are needed or anything like that. It's purely to do with just this job.\nCustomer: That's fine. The email is [PII] at Aces Eye Clinic.co.uk.\nClient: Perfect. Thanks for the confirmation. And one last thing, how did you hear about us? Just so I can complete this inquiry.\nCustomer: Yeah, I believe it was through acquaintances and such. Friends have utilized your services.\nClient: Alright. I'll note that down. Now aside from that, are there any access restrictions or specific timings we should adhere to when visiting your location? \nCustomer: We're only here from half eight until five every day. And the best contact number is this one that I'm using right now.\nClient: Great. Ok then, I'll get this straight over to our local office now. They'll give you a call back in a few minutes just to confirm everything. We'll get out there ASAP. Hopefully, we can get this sorted today for you. They will also go through costs and confirm everything with you.\nCustomer: Brilliant, thank You.\nClient: You're welcome. Bye.\nCustomer: Bye."</li><li>"customer: get some more pizza.\nclient: Thanks for calling Dino Plumbing. Please note all calls are recorded for training and monitoring purposes. Please hold while we connect you to a member of our team. We'll be with you in just a moment. Good afternoon, dino plumben.\ncustomer: Hello, afternoon. Is it Diner Road? Oh, uh, yes.\nclient: There's dino plumben. 
So, there's dino plumben.\ncustomer: I'm calling from R, uh, yes.\nclient: So there is dino, is it dino rods that you require? It is, okay. And can I ask what post-code every, the property?\ncustomer: It's, I'm calling from R, H. H. H. H. A, no, [NAME] 7, 6E. P.\nclient: I'll see if I can transfer you free to the local office, bear with me.\ncustomer: Yes, please. Okay, thank you.\nclient: Thank you. [ORGANIZATION], part of the [ORGANIZATION] franchise group, is your local plumbing partner. We've managed emergencies across [NAME], [NAME], [ORGANIZATION], and [LOCATION] since 2005, offering 24-7 attention for everything from leaky taps to bathroom installations. Need a helping hand. Welcome to [NAME] [LOCATION]. Please note all calls are recorded for training and monitoring purposes. Please choose from the following options. Press option one to book a job. It's two for accounts and option three for anything else. Welcome to [LOCATION] region is a franchise of [LOCATION], experts in [LOCATION].\ncustomer: Hello, I'm [NAME] from [LOCATION].\nclient: Good afternoon, [ORGANIZATION]. How can I help?\ncustomer: We have a blocked drain and I like to book an appointment, please.\nclient: Can I grab a post code and I have a look when I can get somebody out to you?\ncustomer: It's [ORGANIZATION]. R-H-7. R-H-7. R-H-7. So, you do?\nclient: H we don't deal with that area.\ncustomer: Or-H-7. So, here you do?\nclient: We do [LOCATION]. We don't deal with that area. We do [LOCATION]. You'll have to ring another company.\ncustomer: Oh, could? Okay.\nclient: Let me just grab the post code for you.\ncustomer: Okay.\nclient: Okay, have you got a pen on a piece of paper ready? So it's 01342. 885 36464. That's [NAME].\ncustomer: over 364 and what is the company name. I know road link field. Yeah, that's the one I want. Thank you. Okay. Thank you."</li><li>"Client: Thank you for calling Dino Rod partners with British Gas\nCustomer: Mm\nClient: Please listen to the options carefully so we can get you to the right team if you need any help with a drainage problem Press [PII] if you require our plumbing services Press [PII] did you know that you can now book drainage services online in a few easy clicks Visit dino.com for details Alternatively please hold and your call will be transferred to a customer service advisor We record calls for quality and training purposes Welcome to Dino My name's [PII] How can I help\nCustomer: hi I'm just wondering if I could get um, book a, a visit for doing a AAA drain clear and um, a a possible CCTV survey on it\nClient: Ok. We should be able to have a look at that for you. Um, Is this for a home or is it for like a workplace?\nCustomer: It's for a home\nClient: Ok. Doke. No problem at all. And do you have any British gas cover for the drains or is it just something to pay for?\nCustomer: It's just something to pay for\nClient: Ok. Let me grab some details then, um we'll see if we getting it sorted. What's the postcode for the property?\nCustomer: Uh so it's [PII]\nClient: Yep, and what's the first line on the address?\nCustomer: It's [PII]\nClient: Mhm, and is it your own home? Do you own it and live in it, or…\nCustomer: No, we're working on it. The landlords requested this.\nClient: Right, right. Ok, no worries at all. That's absolutely fine. And whose name is going on the booking? Is it going under the landlord's name or are you sorting it all?\nCustomer: Uh, yeah it's going under my name.\nClient: No worries, that's absolutely fine. 
What's your name?\nCustomer: So it's [PII]\nClient: Yup\nCustomer: and it's [PII]\nClient: Ok, no worries at all. That's absolutely brilliant. And do you have a contact email at all?\nCustomer: Yeah, so it's [PII].\nClient: Yep.\nCustomer: and then it's five letters which is [PII].\nClient: Mhm.\nCustomer: [PII] dot co.uk\nClient: Now, which drain is it or is it just sort of checking all of them?\nCustomer: Uh, well it's a drain at the rear really. But it's um, it's full of sludge and um, the the landlord seems to think that there might be an issue and that's why he wants it cleared and then a survey.\nClient: Ok. No problem. I'll just assign your details to our drains team. And in terms of the best contact number, is that the one that you're ringing from?\nCustomer: Yeah ending in [PII].\nClient: Yeah, that's fine. I'll just copy that over into the system. Go. And is it for as soon as possible, or have you got particular days?\nCustomer: Well, I mean how does it work? Cos I've never used it before. Do you have an engineer for me? Cos we've got to have somebody on site you see.\nClient: Ok, what I'll do is I'll pop the details over to the local office, and they'll give you a call back. It should only be sort of 10 or 15 minutes. They'll have a word with you to see what works best for you, when we've got CCTV availability, and we can go from there. But what I'll do is I'll just note down ASAP at the minute.\nCustomer: Ok\nClient: And they can just have a look at when the first availability is. They'll give you a call back in a few minutes and just have a word about costs and everything, and availability, and we'll get it all booked in for when it works. \nCustomer: That's great. Thank you very much.\nClient: No problem at all. We'll be in contact shortly.\nCustomer: Ok, see you later.\nClient: Thank you. Bye\nCustomer: Bye.\nClient: Bye.\nCustomer: Bye.\nClient: Bye."</li></ul> |
| 10 | <ul><li>"client: Thank you for calling [NAME] partners with [ORGANIZATION].\ncustomer: Anywhere else?\nclient: Please listen to the options carefully so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing services, press two. Did you know that you can now book drainage services online in a few easy clicks, visit [ORGANIZATION] for details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes.\ncustomer: Hello?\nclient: Dino with one of the [LOCATION]'s leading plumbing and drain services. From blocked drains to burst pipes and busted boilers, we're here to help.\ncustomer: you.\nclient: Our local teams of expert engineers are available across the [LOCATION] 24-7, 365 days a year."</li><li>"customer: Hi there, I'm just phoning up.\nclient: Thank you for calling [NAME], partners with [ORGANIZATION].\ncustomer: I've been advised over the weekend there's been a drainage problem at the property I'm managing [NAME].\nclient: Please listen to the options carefully, so we can get you to the right team.\ncustomer: This has caused a foul water to flood one of the apartments in the building.\nclient: If you need any help with a drainage problem, press one.\ncustomer: I'm seeing if you've got any engineers available today in the area of sort of SW4 to attend and clear a block is in the communal stack pipe.\nclient: If you require our plumbing services, press. Did you know that you can now book drainage services online in a few easy clicks, visit Dino.\ncustomer: Super, already, thank you.\nclient: For details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes. I'll be doing the job, they can tell straight away what their availability is. Two seconds. Oh, Oh, yeah. Oh, yeah. Oh, yeah. Bea. Oh, yeah. B."</li><li>"client: Thank you for calling Dino Plumbing. Please note all our calls are rewarded for monitoring and training purposes. First one for an update on parts. Press two for an update on your appointment. Press three to discuss an ongoing complaint and press four for sales. Thank you for calling Dino Plumbing. Please note all our calls are recorded, monitoring and training purposes. Press one for an update on parts. Press two for an update on your appointment. Press three to discuss an ongoing complaint and press four for sale. Thank you for calling Dino Plumbing. Your call is your call.\ncustomer: You're all.\nclient: Your call is Dino Plumbing. Your call is very important to us. All of our agents are currently busy assisting other customers. Please continue to hold and the next available agent will take your call. Alternatively, you can email your query to Office at Dino East London.com. Thank you're calling Dino Plumbing. Your call is very important to us. All of our agents are currently busy assisting other customers. Please continue to hold and the next available agent will take your call. Alternatively, you can email your query to Office at Dino East London.com Thank you for calling Dino Plumbing. Your call is very important to us. All of our agents are currently busy assisting other customers. Please continue to hub and the next available agent will take your call.\ncustomer: Oh.\nclient: Alternatively, you can email your query to Office at Dino East London.com. All of our agents are busy helping other customers. 
At the tone, please record your message. When you may hang up or press a hat key for more options.\ncustomer: you're Okay. You're\nclient: You have exceeded maximum voicemail duration. Your message will be deleted unless you press three to save it."</li></ul> |
| 2 | <ul><li>"client: Thank you for calling [NAME], partners with [ORGANIZATION]. Please listen to the options carefully so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing services, press two. Or if you have an existing home care policy, it's three. Did you know that you can now book plumbing services online in a few easy clicks? Visit dino.com for details. Alternatively please hold and your call will be transferred to a customer service advisor.\ncustomer: Oh.\nclient: We record calls for quality and training purposes.\ncustomer: Hello, [NAME], I hope you can. My name is [NAME]. Can you please check that your plumbers are coming out to 57 Summer Field Avenue and give me an approximate time? I've been waiting since 7 o'clock this morning. And I can't, yeah, you probably did.\nclient: Yeah, I spoke with you earlier, I believe.\ncustomer: Yeah, if you can get him to come out, yeah, if you can get him to come out because I can't even have a shower or anything, you know, and I'm running out.\nclient: Yeah, so he will be coming to you, I can promise you that. It's set as the emergency, it's the highest priority we can make it. It's just that he had that other emergency job to attend to before. Yeah, I know.\ncustomer: I had a half a kettle of water, some even running out of water to have a coffee or anything with. So can you please hurry him on?\nclient: Yeah.\ncustomer: I know I'm impatient, I know you're dealing with the public and they're bloody awful to deal with, but on this occasion I'm part of the public, please help.\nclient: Absolutely, absolutely.\ncustomer: Okay, thank you. Thank you very much indeed.\nclient: No problem at all.\ncustomer: Thank you.\nclient: We'll be with you. We'll be with you as quickly as we can promise you that. Absolutely.\ncustomer: Please, in two seconds if possible.\nclient: No problem at all. We'll be with you as quickly as we can promise you that.\ncustomer: Okay, thank you. Yeah, okay, bye for now, bye.\nclient: Bye bye."</li><li>"client: Thank you for doing [NAME], [ORGANIZATION], [ORGANIZATION]. Please listen to the options carefully, so we can get you to the right team.\ncustomer: Oh, thank you.\nclient: You need any help with a drainage problem, press one. If you require our coming services, press two, or if you have an existing home care policy. Did you know that you can now book drainage services online in a few easy clicks, visit Dino. For details. Alternatively, please hold your call will be transferred to a customer service advisor. We record calls for quality and training purposes.\ncustomer: Yes, hi there. I hope you can help. I spoke to a colleague of yours a few moments ago regarding a blocked sewer drain and a quote and a quote and a quote and said you have someone out before six because sewage is coming out. Is that okay to book that in please?\nclient: Yeah, we're going to get an appointment button.\ncustomer: So it's the sewer drain, so there's a development of eight properties, six of which are being lived in and it's the drain that runs outside the front of those properties in a private road.\nclient: Is it your home address where you live? Right, bear with me one moment. Let me put you free to that office. Lots of postcode of the property. Right, let me put you directly through to them.\ncustomer: It's C.M. 7, the beginning of a post code. [NAME] 7. 8. [NAME] 7.\nclient: And then what's the rest of the postcode? Thank you.\ncustomer: 8. 
[ORGANIZATION] to be at Bravo.\nclient: Okay, I'm just going to transfer you directly free to it off, they'll be able to advise them to get out for me one moment. Good afternoon, [NAME].\ncustomer: Yes, hi there. I've got a blocked sewer drain.\nclient: Okay, what was the postcode?\ncustomer: I'd like to book somebody to come and have a look at that, please, to clear it and give it a jet.\nclient: Yep.\ncustomer: It's [NAME] 77, 8 [NAME].\nclient: Okay, was it just a house code? Yep. Okay. Was it just a house or was it a commercial building? Yep. Right, okay, let me hope for you.\ncustomer: So it's residential.\nclient: The thing that we have at the moment be Friday, if that's any good.\ncustomer: There's six houses, the new builds, and it's the drain that runs in front of those.\nclient: The only state that we have at the moment be Friday, if that's any good. The soonest date that we have at the moment would be Friday, if that's any good.\ncustomer: Sorry? I spoke to a gentleman from your company a few minutes ago, about 10, 15 minutes ago and he said that because sewage was coming out through the drain cover, he'd be able to have somebody out by 6 o'clock this evening.\nclient: Right. Do you know who it was that you're speaking to? Right.\ncustomer: I literally spoke to about about 10, 15 minutes ago.\nclient: Okay, let me just check. You could just hold one moment. Thank you.\ncustomer: Yeah sure no problem.\nclient: Oh. Oh. So,\ncustomer: Thank you.\nclient: Hello. Hi, yeah, thanks for holding. I'm not too sure who was that you spoke with, but it doesn't matter. What was the first line of the address? Is there your number or house name at all?\ncustomer: Hi there. That's all right. It's Braintree Road, quest in.\nclient: Okay, right that's fine. There. And is there? And is their name? And is their name that I can put on to the job?\ncustomer: So I live at number 17 but I'm at work so the guy that is at home right now that can meet your guy is a guy called [NAME] that lives at number 12.\nclient: And is there a name that I can put onto the job? Lovely and the best contact number. Yeah.\ncustomer: Yeah, so the guy that's waiting is [NAME]. So it's [NAME] and their surname is He, A, T, H, O, R, N. You can mean me like just my number. So it's 07963, 598, 446.\nclient: and it was just a blocked drain. I cannot have a problem I'll pop that on for you and then the engineer will go call when he's on his way.\ncustomer: Yeah, it's a new build, so it's just a locked rain. It just needs to be cleared and jaded, really.\nclient: Within the next couple of hours it should be, depending how we get one, but yeah, yeah, it should be within the next couple of hours, yeah, but it will give you a call on the way.\ncustomer: Perfect, you know, a rough time just let the guy know.\nclient: All right then, that's the case, no problem. Thank you, you too. Bye.\ncustomer: Okay, that's right. No, just so he knows to wait in, that's all. That's fine, so a couple of hours is fine. Brilliant. Excellent. That's fantastic. Thank you so much for your help. I really appreciate it. Cheers. Take care. Have a good day. Cheers. Bye."</li><li>"client: Thank you for calling [NAME]. Press one for drains or two for accounts.\ncustomer: Yeah, good afternoon. It's [NAME] here again. I'm just wondering if anybody's coming to our drains today. Is it you I spoke to this morning?\nclient: [ORGANIZATION] is it?\ncustomer: Yeah, an IP year. 
Are they?\nclient: Yeah, I'm just trying to sort it out now because the vans are all over the place at the moment, but I'll let you know, yeah, I'm trying to sort out if I can get two of them together.\ncustomer: Good. So will it be today, do you think?\nclient: So I don't know, like I say, it depends how long they're on other jobs, that's the thing as well. It's not that easy to work it.\ncustomer: Tell that [NAME] to go, tell that [NAME] to get his finger out and get over here.\nclient: [NAME]'s in scum so unfortunately yeah so I'm doing my best for you but it's not it's not that easy get in the van's and depending on how long they're on each job she see sometimes they're on half an hour they can be on two hours so\ncustomer: Is he? Yeah, right. Okay. Please do, yeah, because we're in a bit of a mess to be quite honest. Yeah. Yeah, yeah. Okay. Please do, yeah, because we're in a bit of a mess to be quite honest. We, yeah, if it's not today, will it be tomorrow?\nclient: uh... yeah i'll do my best for tomorrow i'm trying my best for today but i'll let you know i'm just like working i'm working around it at the moment yeah i will like i'll let you know yeah\ncustomer: Yeah, okay, I'll leave it with you. Yeah, okay, I'll leave it with you then if it's asked to be tomorrow if you can just let us know, because we're out tomorrow morning."</li></ul> |
| 5 | <ul><li>"Client: Thank you for calling Dino Rod partners with British Gas. Please listen to the options carefully so we can get you to the right team. If you need any help with a drainage problem Press [PII], if you require our plumbing services press [PII] or if you have an existing home care policy it's three. Did you know that you can now book plumbing services online in a few easy clicks? Visit dino.com for details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes. Please tell me your date of birth.\nCustomer: 23 09 1965\nClient: Thanks, you're through to [PII]. My name is [PII]. How can I help?\nCustomer: Hello. Um, I had Dino um a couple of months ago to fit a pump for my shower and I think there's something wrong with the pump. It's like when we first had it installed it would click on, you know, when we put the taps on or the shower. But the past couple of days it's like literally clicking on for a few seconds, clicking off and then a couple of minutes later it's clicking back on again and it's doing it constantly.\nClient: Ok, and what's the postcode to your property?\nCustomer: It's [PII].\nClient: And what's the first line of your address?\nCustomer: Um, [PII].\nClient: And who am I speaking to, please?\nCustomer: Uh, [PII].\nClient: Alright, so make sure, was this under your British Gas policy or was it a pay job?\nCustomer: It was a paid job. So, I booked it initially through British Gas but then I paid Dino Direct for having the work done.\nClient: Ok. Let's see. I'm going to give you a number to call. This is the number to the office and they will be able to assist you. So this would have been the Plumbing office that installed it. Do you have a pen please?\nCustomer: I do. Yes.\nClient: So it's [PII].\nCustomer: Yeah. \nClient: [PII].\nCustomer: [PII].\nClient: [PII].\nCustomer: [PII]. Do they work today? Do you know?\nClient: Um, they should be open today. \nCustomer: Yeah.\nClient: If I'm not mistaken.\nCustomer: Right. Ok then. Thank you.\nClient: No worries. Cheers.\nCustomer: Bye.\nClient: Bye bye."</li><li>"client: Hello, [NAME].\ncustomer: Yeah, I've done, I've got a customer complaints or customer services department.\nclient: Um, you got to put a complaint in [NAME].\ncustomer: Yeah.\nclient: Okay, what's it regarding? Yeah, that's one of our, that's one of our, that's sort of a franchise, yeah.\ncustomer: What have you all franchise? Have you heard of our and, H-A-N-D? Yeah, they're just the way that's behaved and we've had a job done by them. And it's been a really difficult experience. I just want to speak to somebody about it.\nclient: Okay then I think what the best thing to do is for you to contact a head office because we're a different franchise we don't know anything what goes on there sorry Let me just try and find it for you one second\ncustomer: What's their number at office? Yeah. Yeah. Yeah.\nclient: 2178. Okay, thank you. Bye.\ncustomer: Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah."</li><li>"client: Thank you for calling [NAME], partners with [ORGANIZATION]. We record calls for quality and training purposes. Hello, you flipped to [NAME]. 
How can I help you today?\ncustomer: Hi yes I just want to make a slight complaint about [NAME] not the service itself because I've never used it but we live up back a plane in [LOCATION] and every time the tree goes on this house they're using the neighbor's gardens to turn around in just because the open plan and it's totally inappropriate to be doing so.\nclient: Okay, yeah, get you through to the right office. What's the post code for the area? All right, I'll just get you through to the local office one moment.\ncustomer: 215 here we are. Thank you.\nclient: Welcome to [ORGANIZATION]. All calls are called for training and monitoring purposes."</li></ul> |
| 8 | <ul><li>"customer: I just forgot my [ORGANIZATION]. He tagged I mean. Oh, hello. Last week on the 29th of August, one of your men came out to do my drain for me.\nclient: Hello, I'm Donoque, any check now.\ncustomer: Oh, hello. Last week on the 29th of August, one of your men came out to do my drain for me. Now the thing is, I was going to pay, I couldn't pay in cash because I didn't have it at the time. but he couldn't take a payment because his little machine thing wasn't working so he said I'll contact the office and then they'll contact you but no one's contacted me so the thing is I need to pay it and I haven't do not I mean so I don't know who I can give the details to to pay it I mean I have now been to the bank and got the cash out but it depends on whether he's passing in the area but if he's not I have got my card available here to pay someone\nclient: Oh, okay. Strange. Yeah, no, not a problem. Um... Yeah, but I don't believe. Um... Yeah, but I don't believe so, but I can, I can say go over the phone now if I need be, ah, if I can take a postcode, get it all sorted.\ncustomer: Okay, yeah, it's P.O-2, 9-E-T, Echo Tango.\nclient: Yep.\ncustomer: Yes, it. Yep.\nclient: Let's have a look. For number 132, from what I can see. Perfect was engineer [NAME], yeah. That's fine. And from what he's put on there, let's have a look, where much it's between nine? Where was that? Thursday. So I believe it is just the standard sort of 180, including the 80 there.\ncustomer: Right, yep, that's it, because it was, yeah, it wasn't that, are you too there?\nclient: Yeah.\ncustomer: Oh, sorry, the phone went dead then, sorry, I heard all that, I was on 180 plus V-A-T and then it was nothing. Sorry, yeah, okay, that's it, yeah.\nclient: Is there an email that I could take to get the receipt to you on?\ncustomer: Yes, it's Deb dot Palmer, P-A-L-M-E-R. Oh, yeah, you'd know that, wouldn't you? At 900.\nclient: Yep.\ncustomer: at gmail.com. That's Deb dot parmer 900 gmail dot com yeah at gmail dot com sorry yep yep\nclient: Perfect. Yeah, got it. It's right. Let's have a look. And yeah, if I could take the long card number which would never be ready there. Yep. Yep. Perfect. And the ex- diary for that one.\ncustomer: It's 0.9. 25. Take that again, sorry?\nclient: and then three on the back sorry. Perfect.\ncustomer: Oh yeah, it's 862. Oh, that's 862.\nclient: Make sure I then put that correctly. Yeah perfect that has all gone through and I'll get that receipt over to you now.\ncustomer: Oh, that's brilliant. Thank you so much.\nclient: No problem at all.\ncustomer: Thank you.\nclient: Thank you very much.\ncustomer: Thank you. Thank you.\nclient: Thank you.\ncustomer: Thank you."</li></ul> |
| 4 | <ul><li>"client: Thank you for calling [NAME], partners with [ORGANIZATION]. Please listen to the options carefully, so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing. Did you know that you can now book drainage services online in a few easy clicks, visit [ORGANIZATION] for details. Alternatively, please hold and your call will be transferred to a customer service advisor. We record calls for quality and training purposes.\ncustomer: Hello, I have a semi-emergency. It's not life or death, but it's not far off. So we're training center, and one of the pipes under the building, the piercally appears to be blocked, so we can't use anything until it's unblocked.\nclient: Now, I think it's just a bit more than a sense, you know, when you're going to go.\ncustomer: And the line just keeps breaking up. I don't know if you can hear me, okay? I can't really make out what you're saying. I think you said you'll give me a call back."</li><li>"client: Thank you for calling [NAME] partners with [ORGANIZATION]. Please listen to the options carefully so we can get you to the right team. If you need any help with a drainage problem, press one. If you require our plumbing services press two, or if you have an existing home care policy, it's three. I'm sorry I didn't get anything we record calls for quality and training purposes please enter your account number using your telephone keypad ignoring any letters then press the hash key\ncustomer: uh...\nclient: or if your account number includes letters, please enter the numbers only. I'm sorry, I still didn't understand. Please enter your account number using your telephone pad, then press the hash key. Your account number may start with 85 or 91, or it may start with letters.\ncustomer: Hmm."</li><li>"client: Thank you for calling [NAME]. Our calls are recorded for training and monitoring purposes. Please choose from one of the following options. Press one for a new blockage or service. Press two for an existing or complete service. Press three for accounts. Press four for reception. If you know the extension number you are calling, please. Oh, down a road.\ncustomer: Yeah, my name is [NAME]. I'm calling from I had a lot of great family. and we've got a bit of a bunch in some of our toilets.\nclient: You did kind of break up there. Did you mention it was a primary school? Sorry, you did kind of break up there. Did you mention it was a primary school? Lock v. integrated primary school. Give me a second. And what's the issue you're having? [ORGANIZATION]'s blocked.\ncustomer: Yeah, it says the female staff toilets, the water just doesn't drain in the way.\nclient: Just making a note of this. And what about manhole access? Are there manholes that what can... Are they internal any of them? Hello? Hello? Hello?\ncustomer: You can't."</li></ul> |
## Evaluation
### Metrics
| Label | Metric |
|:--------|:-------|
| **all** | 0.5307 |
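
The table above reports a single aggregate figure. As a sanity check, the sketch below (not part of the original card) shows how a comparable score could be recomputed on a held-out split, assuming the reported metric is SetFit's default accuracy; the two transcripts and gold labels are hypothetical placeholders.

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("automated-analytics/setfit-paraphrase-mpnet")

# Hypothetical held-out transcripts with gold integer labels (classes 0-10, see the label table below).
texts = [
    "client: Thank you for calling ... I have a blocked drain ...",
    "client: Hello ... I would like to make a complaint ...",
]
gold = [0, 6]

preds = model.predict(texts)
accuracy = sum(int(p) == g for p, g in zip(preds, gold)) / len(gold)
print(accuracy)
```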
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("automated-analytics/setfit-paraphrase-mpnet")
# Run inference
preds = model("client: Thank you for [NAME], partners with [ORGANIZATION]. Please listen to the options carefully, secure the right team. We record calls for quality and training purposes. Please enter your account number using your telephone keypad, ignoring any letters, then press the hash key. Just use your telephone keypad to enter your account number, then press the hash key.
customer: I'm trying.
client: This may start with it. Okay, just a moment whilst I look that up. I'm afraid that account number doesn't match our records. Please try again. Okay, just a moment whilst I put that up. Please enter your phone number using the telephone teapad.
customer: Thank you.
client: I'm sorry, I didn't understand. Try entering your phone number. I'm sorry, I still didn't understand. Please by entering it again.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 38 | 413.8182 | 3760 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
| 3 | 8 |
| 4 | 8 |
| 5 | 4 |
| 6 | 8 |
| 7 | 8 |
| 8 | 1 |
| 9 | 8 |
| 10 | 8 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CachedMultipleNegativesRankingLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
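
These values correspond one-to-one to fields of SetFit's `TrainingArguments`. The block below is a hedged reconstruction sketch rather than the original training script: the base checkpoint is only inferred from the fine-tuned model's name, and the two-row dataset is a placeholder for the labelled call transcripts. Arguments not shown keep their defaults; `load_best_model_at_end` is omitted because it additionally needs an evaluation split and strategy configured.

```python
from datasets import Dataset
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data; replace with the labelled transcripts ("text" plus integer "label", classes 0-10).
train_dataset = Dataset.from_dict({
    "text": ["client: ... blocked drain ...", "client: ... complaint about a franchise ..."],
    "label": [0, 6],
})

# Assumed base checkpoint, inferred from the fine-tuned model's name.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(64, 64),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CachedMultipleNegativesRankingLoss,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```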
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0119 | 1 | 4.6669 | - |
| 0.5952 | 50 | 4.0552 | - |
| 1.0 | 84 | - | 4.0622 |
### Framework Versions
- Python: 3.10.16
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.2.1+cu121
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] |
lcampillos/roberta-es-clinical-trials-ner | lcampillos | token-classification | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"es",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-06-30T20:14:09 | 2023-03-20T16:23:58 | 58 | 9 | ---
language:
- es
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: El ensayo clínico con vacunas promete buenos resultados para la infección
por SARS-CoV-2.
- text: El paciente toma aspirina para el dolor de cabeza y porque la garganta también
le duele mucho.
- text: El mejor tratamiento actual contra la COVID es la vacunación.
model-index:
- name: roberta-es-clinical-trials-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-es-clinical-trials-ner
This medical named entity recognition model detects 4 types of semantic groups from the Unified Medical Language System (UMLS) (Bodenreider 2004):
- ANAT: body parts and anatomy (e.g. *garganta*, 'throat')
- CHEM: chemical entities and pharmacological substances (e.g. *aspirina*,'aspirin')
- DISO: pathologic conditions (e.g. *dolor*, 'pain')
- PROC: diagnostic and therapeutic procedures, laboratory analyses and medical research activities (e.g. *cirugía*, 'surgery')
The model achieves the following results on the evaluation set:
- Loss: 0.1580
- Precision: 0.8495
- Recall: 0.8806
- F1: 0.8647
- Accuracy: 0.9583
## Model description
This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/).
It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials.
The model is fine-tuned on the [CT-EBM-SP corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
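
As a quick-start sketch (not part of the original card), the checkpoint should work with the standard 🤗 `transformers` token-classification pipeline; the example sentence is one of the widget examples above, and the card's own description suggests *aspirina* should come back as CHEM.

```python
from transformers import pipeline

# Load this checkpoint and merge sub-token predictions into whole entities.
ner = pipeline(
    "token-classification",
    model="lcampillos/roberta-es-clinical-trials-ner",
    aggregation_strategy="simple",
)

# One of the widget sentences from this card.
text = "El paciente toma aspirina para el dolor de cabeza y porque la garganta también le duele mucho."
for entity in ner(text):
    # Expected semantic groups: ANAT, CHEM, DISO or PROC (e.g. CHEM for "aspirina").
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```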
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models (CSIC – Consejo Superior de Investigaciones Científicas) will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos (CSIC – Consejo Superior de Investigaciones Científicas) de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning is the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1200 texts about clinical trials studies and clinical trials announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use this resource, please, cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
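
For orientation only, the list above maps onto a standard 🤗 `TrainingArguments` configuration as sketched below; the output directory is an illustrative placeholder, and data preparation (tokenisation and label alignment for the CT-EBM-SP corpus) is omitted.

```python
from transformers import TrainingArguments

# Sketch of the settings listed above; "output_dir" is a hypothetical placeholder.
training_args = TrainingArguments(
    output_dir="roberta-es-clinical-trials-ner",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```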
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0771 | 1.0 | 785 | 0.1274 | 0.8449 | 0.8797 | 0.8619 | 0.9608 |
| 0.0415 | 2.0 | 1570 | 0.1356 | 0.8569 | 0.8856 | 0.8710 | 0.9528 |
| 0.0262 | 3.0 | 2355 | 0.1562 | 0.8619 | 0.8798 | 0.8707 | 0.9526 |
| 0.0186 | 4.0 | 3140 | 0.1582 | 0.8609 | 0.8846 | 0.8726 | 0.9527 |
**Results per class (test set)** (see the evaluation sketch after the table)
| Class | Precision | Recall | F1 | Support |
|:-----:|:---------:|:------:|:------:|:--------:|
| ANAT | 0.7069 | 0.6518 | 0.6783 | 359 |
| CHEM | 0.9162 | 0.9228 | 0.9195 | 2929 |
| DISO | 0.8805 | 0.8918 | 0.8861 | 3042 |
| PROC | 0.8198 | 0.8720 | 0.8450 | 3954 |
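The precision, recall and F1 values above are entity-level scores. A small sketch of how such scores can be computed with the `seqeval` library (a common choice for token-classification evaluation; the tag sequences below are toy data, not the actual predictions) is shown here:
```python
from seqeval.metrics import classification_report, f1_score
# Toy gold and predicted IOB2 tag sequences, one inner list per sentence.
y_true = [["O", "B-DISO", "I-DISO", "O", "B-CHEM"],
["B-PROC", "I-PROC", "O", "B-ANAT"]]
y_pred = [["O", "B-DISO", "I-DISO", "O", "O"],
["B-PROC", "I-PROC", "O", "B-ANAT"]]
# Entity-level precision/recall/F1 per class, analogous to the table above.
print(classification_report(y_true, y_pred, digits=4))
print("micro F1:", f1_score(y_true, y_pred))
```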
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
## Environmental Impact
Carbon emissions were estimated with the [Machine Learning Impact calculator](https://mlco2.github.io/impact/#compute) by [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700), which derives the estimate from the hardware, runtime, cloud provider, and compute region; a back-of-the-envelope check of the reported figure follows the list below.
- Hardware Type: 1 × RTX 3090 GPU (24 GB)
- Time used: 4' (0.07 hours)
- Compute Region: Spain, Europe
- Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid): 0.01 kg eq. CO2
(Carbon offset: 0)
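The reported figure can be sanity-checked by multiplying power draw, runtime and grid carbon intensity. In the sketch below, both the GPU power draw and the grid intensity are assumptions (roughly the RTX 3090 TDP and an approximate Spanish grid average), not measurements from the actual run.
```python
# Back-of-the-envelope check of the reported carbon estimate.
power_kw = 0.350 # assumed GPU power draw (~RTX 3090 TDP, in kW)
runtime_h = 0.07 # reported runtime in hours
grid_kgco2_per_kwh = 0.2 # assumed grid carbon intensity (kg CO2-eq per kWh)
emissions_kg = power_kw * runtime_h * grid_kgco2_per_kwh
print(f"~{emissions_kg:.3f} kg CO2-eq") # same order of magnitude as the reported 0.01 kg
```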
## Funding
This model was created with the annotated dataset from the [NLPMedTerm project](http://www.lllf.uam.es/ESP/nlpmedterm_en.html), funded by an InterTalentum UAM Marie Skłodowska-Curie COFUND grant (2019–2021) (H2020 programme, contract number 713366) and by the Computational Linguistics Chair of the Knowledge Engineering Institute (IIC-UAM).
We thank the [Computational Linguistics Laboratory (LLI)](http://www.lllf.uam.es) at the Autonomous University of Madrid (Universidad Autónoma de Madrid) for the computational facilities used to fine-tune the model.
## License
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"CT-EBM-SP",
"SCIELO"
] |
bigscience/sgpt-bloom-7b1-msmarco | bigscience | sentence-similarity | [
"sentence-transformers",
"pytorch",
"bloom",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-26T09:34:08 | 2024-04-03T12:03:45 | 58 | 43 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: sgpt-bloom-7b1-msmarco
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 68.05970149253731
- type: ap
value: 31.640363460776193
- type: f1
value: 62.50025574145796
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 61.34903640256959
- type: ap
value: 75.18797161500426
- type: f1
value: 59.04772570730417
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 67.78110944527737
- type: ap
value: 19.218916023322706
- type: f1
value: 56.24477391445512
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 58.23340471092078
- type: ap
value: 13.20222967424681
- type: f1
value: 47.511718095460296
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 68.97232499999998
- type: ap
value: 63.53632885535693
- type: f1
value: 68.62038513152868
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 33.855999999999995
- type: f1
value: 33.43468222830134
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 29.697999999999997
- type: f1
value: 29.39935388885501
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 35.974000000000004
- type: f1
value: 35.25910820714383
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 35.922
- type: f1
value: 35.38637028933444
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 27.636
- type: f1
value: 27.178349955978266
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 32.632
- type: f1
value: 32.08014766494587
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 23.684
- type: map_at_10
value: 38.507999999999996
- type: map_at_100
value: 39.677
- type: map_at_1000
value: 39.690999999999995
- type: map_at_3
value: 33.369
- type: map_at_5
value: 36.15
- type: mrr_at_1
value: 24.04
- type: mrr_at_10
value: 38.664
- type: mrr_at_100
value: 39.833
- type: mrr_at_1000
value: 39.847
- type: mrr_at_3
value: 33.476
- type: mrr_at_5
value: 36.306
- type: ndcg_at_1
value: 23.684
- type: ndcg_at_10
value: 47.282000000000004
- type: ndcg_at_100
value: 52.215
- type: ndcg_at_1000
value: 52.551
- type: ndcg_at_3
value: 36.628
- type: ndcg_at_5
value: 41.653
- type: precision_at_1
value: 23.684
- type: precision_at_10
value: 7.553
- type: precision_at_100
value: 0.97
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 15.363
- type: precision_at_5
value: 11.664
- type: recall_at_1
value: 23.684
- type: recall_at_10
value: 75.533
- type: recall_at_100
value: 97.013
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 46.088
- type: recall_at_5
value: 58.321
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 44.59375023881131
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 38.02921907752556
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 59.97321570342109
- type: mrr
value: 73.18284746955106
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 89.09091435741429
- type: cos_sim_spearman
value: 85.31459455332202
- type: euclidean_pearson
value: 79.3587681410798
- type: euclidean_spearman
value: 76.8174129874685
- type: manhattan_pearson
value: 79.57051762121769
- type: manhattan_spearman
value: 76.75837549768094
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 54.27974947807933
- type: f1
value: 54.00144411132214
- type: precision
value: 53.87119374071357
- type: recall
value: 54.27974947807933
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.3365617433414
- type: f1
value: 97.06141316310809
- type: precision
value: 96.92567319685965
- type: recall
value: 97.3365617433414
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 46.05472809144441
- type: f1
value: 45.30319274690595
- type: precision
value: 45.00015469655234
- type: recall
value: 46.05472809144441
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.10426540284361
- type: f1
value: 97.96384061786905
- type: precision
value: 97.89362822538178
- type: recall
value: 98.10426540284361
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 84.33441558441558
- type: f1
value: 84.31653077470322
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 36.025318694698086
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 32.484889034590346
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 30.203999999999997
- type: map_at_10
value: 41.314
- type: map_at_100
value: 42.66
- type: map_at_1000
value: 42.775999999999996
- type: map_at_3
value: 37.614999999999995
- type: map_at_5
value: 39.643
- type: mrr_at_1
value: 37.482
- type: mrr_at_10
value: 47.075
- type: mrr_at_100
value: 47.845
- type: mrr_at_1000
value: 47.887
- type: mrr_at_3
value: 44.635000000000005
- type: mrr_at_5
value: 45.966
- type: ndcg_at_1
value: 37.482
- type: ndcg_at_10
value: 47.676
- type: ndcg_at_100
value: 52.915
- type: ndcg_at_1000
value: 54.82900000000001
- type: ndcg_at_3
value: 42.562
- type: ndcg_at_5
value: 44.852
- type: precision_at_1
value: 37.482
- type: precision_at_10
value: 9.142
- type: precision_at_100
value: 1.436
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 20.458000000000002
- type: precision_at_5
value: 14.821000000000002
- type: recall_at_1
value: 30.203999999999997
- type: recall_at_10
value: 60.343
- type: recall_at_100
value: 82.58
- type: recall_at_1000
value: 94.813
- type: recall_at_3
value: 45.389
- type: recall_at_5
value: 51.800999999999995
- type: map_at_1
value: 30.889
- type: map_at_10
value: 40.949999999999996
- type: map_at_100
value: 42.131
- type: map_at_1000
value: 42.253
- type: map_at_3
value: 38.346999999999994
- type: map_at_5
value: 39.782000000000004
- type: mrr_at_1
value: 38.79
- type: mrr_at_10
value: 46.944
- type: mrr_at_100
value: 47.61
- type: mrr_at_1000
value: 47.650999999999996
- type: mrr_at_3
value: 45.053
- type: mrr_at_5
value: 46.101
- type: ndcg_at_1
value: 38.79
- type: ndcg_at_10
value: 46.286
- type: ndcg_at_100
value: 50.637
- type: ndcg_at_1000
value: 52.649
- type: ndcg_at_3
value: 42.851
- type: ndcg_at_5
value: 44.311
- type: precision_at_1
value: 38.79
- type: precision_at_10
value: 8.516
- type: precision_at_100
value: 1.3679999999999999
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 20.637
- type: precision_at_5
value: 14.318
- type: recall_at_1
value: 30.889
- type: recall_at_10
value: 55.327000000000005
- type: recall_at_100
value: 74.091
- type: recall_at_1000
value: 86.75500000000001
- type: recall_at_3
value: 44.557
- type: recall_at_5
value: 49.064
- type: map_at_1
value: 39.105000000000004
- type: map_at_10
value: 50.928
- type: map_at_100
value: 51.958000000000006
- type: map_at_1000
value: 52.017
- type: map_at_3
value: 47.638999999999996
- type: map_at_5
value: 49.624
- type: mrr_at_1
value: 44.639
- type: mrr_at_10
value: 54.261
- type: mrr_at_100
value: 54.913999999999994
- type: mrr_at_1000
value: 54.945
- type: mrr_at_3
value: 51.681999999999995
- type: mrr_at_5
value: 53.290000000000006
- type: ndcg_at_1
value: 44.639
- type: ndcg_at_10
value: 56.678
- type: ndcg_at_100
value: 60.649
- type: ndcg_at_1000
value: 61.855000000000004
- type: ndcg_at_3
value: 51.092999999999996
- type: ndcg_at_5
value: 54.096999999999994
- type: precision_at_1
value: 44.639
- type: precision_at_10
value: 9.028
- type: precision_at_100
value: 1.194
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.508
- type: precision_at_5
value: 15.661
- type: recall_at_1
value: 39.105000000000004
- type: recall_at_10
value: 70.367
- type: recall_at_100
value: 87.359
- type: recall_at_1000
value: 95.88
- type: recall_at_3
value: 55.581
- type: recall_at_5
value: 62.821000000000005
- type: map_at_1
value: 23.777
- type: map_at_10
value: 32.297
- type: map_at_100
value: 33.516
- type: map_at_1000
value: 33.592
- type: map_at_3
value: 30.001
- type: map_at_5
value: 31.209999999999997
- type: mrr_at_1
value: 25.989
- type: mrr_at_10
value: 34.472
- type: mrr_at_100
value: 35.518
- type: mrr_at_1000
value: 35.577
- type: mrr_at_3
value: 32.185
- type: mrr_at_5
value: 33.399
- type: ndcg_at_1
value: 25.989
- type: ndcg_at_10
value: 37.037
- type: ndcg_at_100
value: 42.699
- type: ndcg_at_1000
value: 44.725
- type: ndcg_at_3
value: 32.485
- type: ndcg_at_5
value: 34.549
- type: precision_at_1
value: 25.989
- type: precision_at_10
value: 5.718
- type: precision_at_100
value: 0.89
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 14.049
- type: precision_at_5
value: 9.672
- type: recall_at_1
value: 23.777
- type: recall_at_10
value: 49.472
- type: recall_at_100
value: 74.857
- type: recall_at_1000
value: 90.289
- type: recall_at_3
value: 37.086000000000006
- type: recall_at_5
value: 42.065999999999995
- type: map_at_1
value: 13.377
- type: map_at_10
value: 21.444
- type: map_at_100
value: 22.663
- type: map_at_1000
value: 22.8
- type: map_at_3
value: 18.857
- type: map_at_5
value: 20.426
- type: mrr_at_1
value: 16.542
- type: mrr_at_10
value: 25.326999999999998
- type: mrr_at_100
value: 26.323
- type: mrr_at_1000
value: 26.406000000000002
- type: mrr_at_3
value: 22.823
- type: mrr_at_5
value: 24.340999999999998
- type: ndcg_at_1
value: 16.542
- type: ndcg_at_10
value: 26.479000000000003
- type: ndcg_at_100
value: 32.29
- type: ndcg_at_1000
value: 35.504999999999995
- type: ndcg_at_3
value: 21.619
- type: ndcg_at_5
value: 24.19
- type: precision_at_1
value: 16.542
- type: precision_at_10
value: 5.075
- type: precision_at_100
value: 0.9339999999999999
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 10.697
- type: precision_at_5
value: 8.134
- type: recall_at_1
value: 13.377
- type: recall_at_10
value: 38.027
- type: recall_at_100
value: 63.439
- type: recall_at_1000
value: 86.354
- type: recall_at_3
value: 25.0
- type: recall_at_5
value: 31.306
- type: map_at_1
value: 28.368
- type: map_at_10
value: 39.305
- type: map_at_100
value: 40.637
- type: map_at_1000
value: 40.753
- type: map_at_3
value: 36.077999999999996
- type: map_at_5
value: 37.829
- type: mrr_at_1
value: 34.937000000000005
- type: mrr_at_10
value: 45.03
- type: mrr_at_100
value: 45.78
- type: mrr_at_1000
value: 45.827
- type: mrr_at_3
value: 42.348
- type: mrr_at_5
value: 43.807
- type: ndcg_at_1
value: 34.937000000000005
- type: ndcg_at_10
value: 45.605000000000004
- type: ndcg_at_100
value: 50.941
- type: ndcg_at_1000
value: 52.983000000000004
- type: ndcg_at_3
value: 40.366
- type: ndcg_at_5
value: 42.759
- type: precision_at_1
value: 34.937000000000005
- type: precision_at_10
value: 8.402
- type: precision_at_100
value: 1.2959999999999998
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 19.217000000000002
- type: precision_at_5
value: 13.725000000000001
- type: recall_at_1
value: 28.368
- type: recall_at_10
value: 58.5
- type: recall_at_100
value: 80.67999999999999
- type: recall_at_1000
value: 93.925
- type: recall_at_3
value: 43.956
- type: recall_at_5
value: 50.065000000000005
- type: map_at_1
value: 24.851
- type: map_at_10
value: 34.758
- type: map_at_100
value: 36.081
- type: map_at_1000
value: 36.205999999999996
- type: map_at_3
value: 31.678
- type: map_at_5
value: 33.398
- type: mrr_at_1
value: 31.279
- type: mrr_at_10
value: 40.138
- type: mrr_at_100
value: 41.005
- type: mrr_at_1000
value: 41.065000000000005
- type: mrr_at_3
value: 37.519000000000005
- type: mrr_at_5
value: 38.986
- type: ndcg_at_1
value: 31.279
- type: ndcg_at_10
value: 40.534
- type: ndcg_at_100
value: 46.093
- type: ndcg_at_1000
value: 48.59
- type: ndcg_at_3
value: 35.473
- type: ndcg_at_5
value: 37.801
- type: precision_at_1
value: 31.279
- type: precision_at_10
value: 7.477
- type: precision_at_100
value: 1.2
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 17.047
- type: precision_at_5
value: 12.306000000000001
- type: recall_at_1
value: 24.851
- type: recall_at_10
value: 52.528
- type: recall_at_100
value: 76.198
- type: recall_at_1000
value: 93.12
- type: recall_at_3
value: 38.257999999999996
- type: recall_at_5
value: 44.440000000000005
- type: map_at_1
value: 25.289833333333334
- type: map_at_10
value: 34.379333333333335
- type: map_at_100
value: 35.56916666666666
- type: map_at_1000
value: 35.68633333333333
- type: map_at_3
value: 31.63916666666666
- type: map_at_5
value: 33.18383333333334
- type: mrr_at_1
value: 30.081749999999996
- type: mrr_at_10
value: 38.53658333333333
- type: mrr_at_100
value: 39.37825
- type: mrr_at_1000
value: 39.43866666666666
- type: mrr_at_3
value: 36.19025
- type: mrr_at_5
value: 37.519749999999995
- type: ndcg_at_1
value: 30.081749999999996
- type: ndcg_at_10
value: 39.62041666666667
- type: ndcg_at_100
value: 44.74825
- type: ndcg_at_1000
value: 47.11366666666667
- type: ndcg_at_3
value: 35.000499999999995
- type: ndcg_at_5
value: 37.19283333333333
- type: precision_at_1
value: 30.081749999999996
- type: precision_at_10
value: 6.940249999999999
- type: precision_at_100
value: 1.1164166666666668
- type: precision_at_1000
value: 0.15025000000000002
- type: precision_at_3
value: 16.110416666666666
- type: precision_at_5
value: 11.474416666666668
- type: recall_at_1
value: 25.289833333333334
- type: recall_at_10
value: 51.01591666666667
- type: recall_at_100
value: 73.55275000000002
- type: recall_at_1000
value: 90.02666666666667
- type: recall_at_3
value: 38.15208333333334
- type: recall_at_5
value: 43.78458333333334
- type: map_at_1
value: 23.479
- type: map_at_10
value: 31.2
- type: map_at_100
value: 32.11
- type: map_at_1000
value: 32.214
- type: map_at_3
value: 29.093999999999998
- type: map_at_5
value: 30.415
- type: mrr_at_1
value: 26.840000000000003
- type: mrr_at_10
value: 34.153
- type: mrr_at_100
value: 34.971000000000004
- type: mrr_at_1000
value: 35.047
- type: mrr_at_3
value: 32.285000000000004
- type: mrr_at_5
value: 33.443
- type: ndcg_at_1
value: 26.840000000000003
- type: ndcg_at_10
value: 35.441
- type: ndcg_at_100
value: 40.150000000000006
- type: ndcg_at_1000
value: 42.74
- type: ndcg_at_3
value: 31.723000000000003
- type: ndcg_at_5
value: 33.71
- type: precision_at_1
value: 26.840000000000003
- type: precision_at_10
value: 5.552
- type: precision_at_100
value: 0.859
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.804
- type: precision_at_5
value: 9.600999999999999
- type: recall_at_1
value: 23.479
- type: recall_at_10
value: 45.442
- type: recall_at_100
value: 67.465
- type: recall_at_1000
value: 86.53
- type: recall_at_3
value: 35.315999999999995
- type: recall_at_5
value: 40.253
- type: map_at_1
value: 16.887
- type: map_at_10
value: 23.805
- type: map_at_100
value: 24.804000000000002
- type: map_at_1000
value: 24.932000000000002
- type: map_at_3
value: 21.632
- type: map_at_5
value: 22.845
- type: mrr_at_1
value: 20.75
- type: mrr_at_10
value: 27.686
- type: mrr_at_100
value: 28.522
- type: mrr_at_1000
value: 28.605000000000004
- type: mrr_at_3
value: 25.618999999999996
- type: mrr_at_5
value: 26.723999999999997
- type: ndcg_at_1
value: 20.75
- type: ndcg_at_10
value: 28.233000000000004
- type: ndcg_at_100
value: 33.065
- type: ndcg_at_1000
value: 36.138999999999996
- type: ndcg_at_3
value: 24.361
- type: ndcg_at_5
value: 26.111
- type: precision_at_1
value: 20.75
- type: precision_at_10
value: 5.124
- type: precision_at_100
value: 0.8750000000000001
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 11.539000000000001
- type: precision_at_5
value: 8.273
- type: recall_at_1
value: 16.887
- type: recall_at_10
value: 37.774
- type: recall_at_100
value: 59.587
- type: recall_at_1000
value: 81.523
- type: recall_at_3
value: 26.837
- type: recall_at_5
value: 31.456
- type: map_at_1
value: 25.534000000000002
- type: map_at_10
value: 33.495999999999995
- type: map_at_100
value: 34.697
- type: map_at_1000
value: 34.805
- type: map_at_3
value: 31.22
- type: map_at_5
value: 32.277
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 37.723
- type: mrr_at_100
value: 38.645
- type: mrr_at_1000
value: 38.712999999999994
- type: mrr_at_3
value: 35.665
- type: mrr_at_5
value: 36.681999999999995
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 38.407000000000004
- type: ndcg_at_100
value: 43.877
- type: ndcg_at_1000
value: 46.312
- type: ndcg_at_3
value: 34.211000000000006
- type: ndcg_at_5
value: 35.760999999999996
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.343
- type: precision_at_100
value: 1.023
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 15.360999999999999
- type: precision_at_5
value: 10.428999999999998
- type: recall_at_1
value: 25.534000000000002
- type: recall_at_10
value: 49.204
- type: recall_at_100
value: 72.878
- type: recall_at_1000
value: 89.95
- type: recall_at_3
value: 37.533
- type: recall_at_5
value: 41.611
- type: map_at_1
value: 26.291999999999998
- type: map_at_10
value: 35.245
- type: map_at_100
value: 36.762
- type: map_at_1000
value: 36.983
- type: map_at_3
value: 32.439
- type: map_at_5
value: 33.964
- type: mrr_at_1
value: 31.423000000000002
- type: mrr_at_10
value: 39.98
- type: mrr_at_100
value: 40.791
- type: mrr_at_1000
value: 40.854
- type: mrr_at_3
value: 37.451
- type: mrr_at_5
value: 38.854
- type: ndcg_at_1
value: 31.423000000000002
- type: ndcg_at_10
value: 40.848
- type: ndcg_at_100
value: 46.35
- type: ndcg_at_1000
value: 49.166
- type: ndcg_at_3
value: 36.344
- type: ndcg_at_5
value: 38.36
- type: precision_at_1
value: 31.423000000000002
- type: precision_at_10
value: 7.767
- type: precision_at_100
value: 1.498
- type: precision_at_1000
value: 0.23700000000000002
- type: precision_at_3
value: 16.733
- type: precision_at_5
value: 12.213000000000001
- type: recall_at_1
value: 26.291999999999998
- type: recall_at_10
value: 51.184
- type: recall_at_100
value: 76.041
- type: recall_at_1000
value: 94.11500000000001
- type: recall_at_3
value: 38.257000000000005
- type: recall_at_5
value: 43.68
- type: map_at_1
value: 20.715
- type: map_at_10
value: 27.810000000000002
- type: map_at_100
value: 28.810999999999996
- type: map_at_1000
value: 28.904999999999998
- type: map_at_3
value: 25.069999999999997
- type: map_at_5
value: 26.793
- type: mrr_at_1
value: 22.366
- type: mrr_at_10
value: 29.65
- type: mrr_at_100
value: 30.615
- type: mrr_at_1000
value: 30.686999999999998
- type: mrr_at_3
value: 27.017999999999997
- type: mrr_at_5
value: 28.644
- type: ndcg_at_1
value: 22.366
- type: ndcg_at_10
value: 32.221
- type: ndcg_at_100
value: 37.313
- type: ndcg_at_1000
value: 39.871
- type: ndcg_at_3
value: 26.918
- type: ndcg_at_5
value: 29.813000000000002
- type: precision_at_1
value: 22.366
- type: precision_at_10
value: 5.139
- type: precision_at_100
value: 0.8240000000000001
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 11.275
- type: precision_at_5
value: 8.540000000000001
- type: recall_at_1
value: 20.715
- type: recall_at_10
value: 44.023
- type: recall_at_100
value: 67.458
- type: recall_at_1000
value: 87.066
- type: recall_at_3
value: 30.055
- type: recall_at_5
value: 36.852000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 11.859
- type: map_at_10
value: 20.625
- type: map_at_100
value: 22.5
- type: map_at_1000
value: 22.689
- type: map_at_3
value: 16.991
- type: map_at_5
value: 18.781
- type: mrr_at_1
value: 26.906000000000002
- type: mrr_at_10
value: 39.083
- type: mrr_at_100
value: 39.978
- type: mrr_at_1000
value: 40.014
- type: mrr_at_3
value: 35.44
- type: mrr_at_5
value: 37.619
- type: ndcg_at_1
value: 26.906000000000002
- type: ndcg_at_10
value: 29.386000000000003
- type: ndcg_at_100
value: 36.510999999999996
- type: ndcg_at_1000
value: 39.814
- type: ndcg_at_3
value: 23.558
- type: ndcg_at_5
value: 25.557999999999996
- type: precision_at_1
value: 26.906000000000002
- type: precision_at_10
value: 9.342
- type: precision_at_100
value: 1.6969999999999998
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 17.503
- type: precision_at_5
value: 13.655000000000001
- type: recall_at_1
value: 11.859
- type: recall_at_10
value: 35.929
- type: recall_at_100
value: 60.21300000000001
- type: recall_at_1000
value: 78.606
- type: recall_at_3
value: 21.727
- type: recall_at_5
value: 27.349
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 8.627
- type: map_at_10
value: 18.248
- type: map_at_100
value: 25.19
- type: map_at_1000
value: 26.741
- type: map_at_3
value: 13.286000000000001
- type: map_at_5
value: 15.126000000000001
- type: mrr_at_1
value: 64.75
- type: mrr_at_10
value: 71.865
- type: mrr_at_100
value: 72.247
- type: mrr_at_1000
value: 72.255
- type: mrr_at_3
value: 69.958
- type: mrr_at_5
value: 71.108
- type: ndcg_at_1
value: 53.25
- type: ndcg_at_10
value: 39.035
- type: ndcg_at_100
value: 42.735
- type: ndcg_at_1000
value: 50.166
- type: ndcg_at_3
value: 43.857
- type: ndcg_at_5
value: 40.579
- type: precision_at_1
value: 64.75
- type: precision_at_10
value: 30.75
- type: precision_at_100
value: 9.54
- type: precision_at_1000
value: 2.035
- type: precision_at_3
value: 47.333
- type: precision_at_5
value: 39.0
- type: recall_at_1
value: 8.627
- type: recall_at_10
value: 23.413
- type: recall_at_100
value: 48.037
- type: recall_at_1000
value: 71.428
- type: recall_at_3
value: 14.158999999999999
- type: recall_at_5
value: 17.002
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 44.865
- type: f1
value: 41.56625743266997
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 57.335
- type: map_at_10
value: 68.29499999999999
- type: map_at_100
value: 68.69800000000001
- type: map_at_1000
value: 68.714
- type: map_at_3
value: 66.149
- type: map_at_5
value: 67.539
- type: mrr_at_1
value: 61.656
- type: mrr_at_10
value: 72.609
- type: mrr_at_100
value: 72.923
- type: mrr_at_1000
value: 72.928
- type: mrr_at_3
value: 70.645
- type: mrr_at_5
value: 71.938
- type: ndcg_at_1
value: 61.656
- type: ndcg_at_10
value: 73.966
- type: ndcg_at_100
value: 75.663
- type: ndcg_at_1000
value: 75.986
- type: ndcg_at_3
value: 69.959
- type: ndcg_at_5
value: 72.269
- type: precision_at_1
value: 61.656
- type: precision_at_10
value: 9.581000000000001
- type: precision_at_100
value: 1.054
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 27.743000000000002
- type: precision_at_5
value: 17.939
- type: recall_at_1
value: 57.335
- type: recall_at_10
value: 87.24300000000001
- type: recall_at_100
value: 94.575
- type: recall_at_1000
value: 96.75399999999999
- type: recall_at_3
value: 76.44800000000001
- type: recall_at_5
value: 82.122
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 17.014000000000003
- type: map_at_10
value: 28.469
- type: map_at_100
value: 30.178
- type: map_at_1000
value: 30.369
- type: map_at_3
value: 24.63
- type: map_at_5
value: 26.891
- type: mrr_at_1
value: 34.259
- type: mrr_at_10
value: 43.042
- type: mrr_at_100
value: 43.91
- type: mrr_at_1000
value: 43.963
- type: mrr_at_3
value: 40.483999999999995
- type: mrr_at_5
value: 42.135
- type: ndcg_at_1
value: 34.259
- type: ndcg_at_10
value: 35.836
- type: ndcg_at_100
value: 42.488
- type: ndcg_at_1000
value: 45.902
- type: ndcg_at_3
value: 32.131
- type: ndcg_at_5
value: 33.697
- type: precision_at_1
value: 34.259
- type: precision_at_10
value: 10.0
- type: precision_at_100
value: 1.699
- type: precision_at_1000
value: 0.22999999999999998
- type: precision_at_3
value: 21.502
- type: precision_at_5
value: 16.296
- type: recall_at_1
value: 17.014000000000003
- type: recall_at_10
value: 42.832
- type: recall_at_100
value: 67.619
- type: recall_at_1000
value: 88.453
- type: recall_at_3
value: 29.537000000000003
- type: recall_at_5
value: 35.886
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 34.558
- type: map_at_10
value: 48.039
- type: map_at_100
value: 48.867
- type: map_at_1000
value: 48.941
- type: map_at_3
value: 45.403
- type: map_at_5
value: 46.983999999999995
- type: mrr_at_1
value: 69.11500000000001
- type: mrr_at_10
value: 75.551
- type: mrr_at_100
value: 75.872
- type: mrr_at_1000
value: 75.887
- type: mrr_at_3
value: 74.447
- type: mrr_at_5
value: 75.113
- type: ndcg_at_1
value: 69.11500000000001
- type: ndcg_at_10
value: 57.25599999999999
- type: ndcg_at_100
value: 60.417
- type: ndcg_at_1000
value: 61.976
- type: ndcg_at_3
value: 53.258
- type: ndcg_at_5
value: 55.374
- type: precision_at_1
value: 69.11500000000001
- type: precision_at_10
value: 11.689
- type: precision_at_100
value: 1.418
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 33.018
- type: precision_at_5
value: 21.488
- type: recall_at_1
value: 34.558
- type: recall_at_10
value: 58.447
- type: recall_at_100
value: 70.91199999999999
- type: recall_at_1000
value: 81.31
- type: recall_at_3
value: 49.527
- type: recall_at_5
value: 53.72
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 61.772000000000006
- type: ap
value: 57.48217702943605
- type: f1
value: 61.20495351356274
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 22.044
- type: map_at_10
value: 34.211000000000006
- type: map_at_100
value: 35.394
- type: map_at_1000
value: 35.443000000000005
- type: map_at_3
value: 30.318
- type: map_at_5
value: 32.535
- type: mrr_at_1
value: 22.722
- type: mrr_at_10
value: 34.842
- type: mrr_at_100
value: 35.954
- type: mrr_at_1000
value: 35.997
- type: mrr_at_3
value: 30.991000000000003
- type: mrr_at_5
value: 33.2
- type: ndcg_at_1
value: 22.722
- type: ndcg_at_10
value: 41.121
- type: ndcg_at_100
value: 46.841
- type: ndcg_at_1000
value: 48.049
- type: ndcg_at_3
value: 33.173
- type: ndcg_at_5
value: 37.145
- type: precision_at_1
value: 22.722
- type: precision_at_10
value: 6.516
- type: precision_at_100
value: 0.9400000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.093
- type: precision_at_5
value: 10.473
- type: recall_at_1
value: 22.044
- type: recall_at_10
value: 62.382000000000005
- type: recall_at_100
value: 88.914
- type: recall_at_1000
value: 98.099
- type: recall_at_3
value: 40.782000000000004
- type: recall_at_5
value: 50.322
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 93.68217054263563
- type: f1
value: 93.25810075739523
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 82.05409974640745
- type: f1
value: 80.42814140324903
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 93.54903268845896
- type: f1
value: 92.8909878077932
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 90.98340119010334
- type: f1
value: 90.51522537281313
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 89.33309429903191
- type: f1
value: 88.60371305209185
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 60.4882459312839
- type: f1
value: 59.02590456131682
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 71.34290925672595
- type: f1
value: 54.44803151449109
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 61.92448577063963
- type: f1
value: 43.125939975781854
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 74.48965977318213
- type: f1
value: 51.855353687466696
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 69.11994989038521
- type: f1
value: 50.57872704171278
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 64.84761563284331
- type: f1
value: 43.61322970761394
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 49.35623869801085
- type: f1
value: 33.48547326952042
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 47.85474108944183
- type: f1
value: 46.50175016795915
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 33.29858776059179
- type: f1
value: 31.803027601259082
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 59.24680564895763
- type: f1
value: 57.037691806846865
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.23537323470073
- type: f1
value: 44.81126398428613
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 61.590450571620714
- type: f1
value: 59.247442149977104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 44.9226630800269
- type: f1
value: 44.076183379991654
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 51.23066577000672
- type: f1
value: 50.20719330417618
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 56.0995292535306
- type: f1
value: 53.29421532133969
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 46.12642905178211
- type: f1
value: 44.441530267639635
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 69.67047747141896
- type: f1
value: 68.38493366054783
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 66.3483523873571
- type: f1
value: 65.13046416817832
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 51.20040349697378
- type: f1
value: 49.02889836601541
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.33288500336248
- type: f1
value: 42.91893101970983
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 66.95359784801613
- type: f1
value: 64.98788914810562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.18090114324143
- type: f1
value: 41.31250407417542
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 63.54068594485541
- type: f1
value: 61.94829361488948
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 44.7343644922663
- type: f1
value: 43.23001702247849
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.1271015467384
- type: f1
value: 36.94700198241727
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 64.05514458641561
- type: f1
value: 62.35033731674541
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 44.351042367182245
- type: f1
value: 43.13370397574502
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 60.77000672494955
- type: f1
value: 59.71546868957779
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 61.22057834566241
- type: f1
value: 59.447639306287044
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 50.9448554135844
- type: f1
value: 48.524338247875214
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 33.8399462004035
- type: f1
value: 33.518999997305535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 37.34028244788165
- type: f1
value: 35.6156599064704
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 53.544048419636844
- type: f1
value: 51.29299915455352
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 53.35574983187625
- type: f1
value: 51.463936565192945
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 46.503026227303295
- type: f1
value: 46.049497734375514
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 58.268325487558826
- type: f1
value: 56.10849656896158
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.27572293207801
- type: f1
value: 40.20097238549224
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 59.64694014794889
- type: f1
value: 58.39584148789066
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 37.41761936785474
- type: f1
value: 35.04551731363685
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 49.408204438466704
- type: f1
value: 48.39369057638714
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 52.09482178883659
- type: f1
value: 49.91518031712698
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 50.477471418964356
- type: f1
value: 48.429495257184705
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 66.69468728984532
- type: f1
value: 65.40306868707009
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 50.52790854068594
- type: f1
value: 49.780400354514
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 58.31540013449899
- type: f1
value: 56.144142926685134
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 47.74041694687289
- type: f1
value: 46.16767322761359
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 48.94418291862811
- type: f1
value: 48.445352284756325
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 50.78681909885676
- type: f1
value: 49.64882295494536
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 49.811701412239415
- type: f1
value: 48.213234514449375
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 56.39542703429725
- type: f1
value: 54.031981085233795
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 54.71082716879623
- type: f1
value: 52.513144113474596
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 44.425016812373904
- type: f1
value: 43.96016300057656
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 50.205110961667785
- type: f1
value: 48.86669996798709
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 46.56355077336921
- type: f1
value: 45.18252022585022
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 56.748486886348346
- type: f1
value: 54.29884570375382
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 64.52589105581708
- type: f1
value: 62.97947342861603
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 67.06792199058508
- type: f1
value: 65.36025601634017
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 62.89172831203766
- type: f1
value: 62.69803707054342
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 51.47276395427035
- type: f1
value: 49.37463208130799
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 34.86886348352387
- type: f1
value: 33.74178074349636
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.20511096166778
- type: f1
value: 65.85812500602437
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.578345662407536
- type: f1
value: 44.44514917028003
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.29657027572293
- type: f1
value: 67.24477523937466
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.29455279085407
- type: f1
value: 43.8563839951935
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 53.52387357094821
- type: f1
value: 51.70977848027552
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.741761936785466
- type: f1
value: 60.219169644792295
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.957632817753876
- type: f1
value: 46.878428264460034
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.33624747814393
- type: f1
value: 75.9143846211171
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.34229993275049
- type: f1
value: 73.78165397558983
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 53.174176193678555
- type: f1
value: 51.709679227778985
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.6906523201076
- type: f1
value: 41.54881682785664
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.9119031607263
- type: f1
value: 73.2742013056326
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.10356422326832
- type: f1
value: 40.8859122581252
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.27370544720914
- type: f1
value: 69.39544506405082
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.16476126429052
- type: f1
value: 42.74022531579054
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 38.73234700739744
- type: f1
value: 37.40546754951026
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.12777404169468
- type: f1
value: 70.27219152812738
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.21318090114325
- type: f1
value: 41.934593213829366
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.57162071284466
- type: f1
value: 64.83341759045335
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.75991930060525
- type: f1
value: 65.16549875504951
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.79488903833223
- type: f1
value: 54.03616401426859
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.992602555480836
- type: f1
value: 31.820068470018846
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.34431741761937
- type: f1
value: 36.436221665290105
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.501008742434436
- type: f1
value: 60.051013712579085
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 55.689307330195035
- type: f1
value: 53.94058032286942
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.351042367182245
- type: f1
value: 42.05421666771541
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.53127101546738
- type: f1
value: 65.98462024333497
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 38.71553463349025
- type: f1
value: 37.44327037149584
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.98991257565567
- type: f1
value: 63.87720198978004
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.839273705447205
- type: f1
value: 35.233967279698376
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 51.79892400806993
- type: f1
value: 49.66926632125972
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.31809011432415
- type: f1
value: 53.832185336179826
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 49.979825151311374
- type: f1
value: 48.83013175441888
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.45595158036315
- type: f1
value: 72.08708814699702
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 53.68527236045729
- type: f1
value: 52.23278593929981
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.60390047074647
- type: f1
value: 60.50391482195116
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.036314727639535
- type: f1
value: 46.43480413383716
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.05716207128445
- type: f1
value: 48.85821859948888
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 51.728312037659705
- type: f1
value: 49.89292996950847
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.21990585070613
- type: f1
value: 52.8711542984193
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.770679219905844
- type: f1
value: 63.09441501491594
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.58574310692671
- type: f1
value: 61.61370697612978
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.17821116341628
- type: f1
value: 43.85143229183324
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.064559515803644
- type: f1
value: 50.94356892049626
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 47.205783456624076
- type: f1
value: 47.04223644120489
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.25689307330195
- type: f1
value: 63.89944944984115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.60524546065905
- type: f1
value: 71.5634157334358
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.95427034297242
- type: f1
value: 74.39706882311063
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.29926025554808
- type: f1
value: 71.32045932560297
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 31.054474964883806
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 29.259725940477523
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.785007883256572
- type: mrr
value: 32.983556622438456
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 5.742
- type: map_at_10
value: 13.074
- type: map_at_100
value: 16.716
- type: map_at_1000
value: 18.238
- type: map_at_3
value: 9.600999999999999
- type: map_at_5
value: 11.129999999999999
- type: mrr_at_1
value: 47.988
- type: mrr_at_10
value: 55.958
- type: mrr_at_100
value: 56.58800000000001
- type: mrr_at_1000
value: 56.620000000000005
- type: mrr_at_3
value: 54.025
- type: mrr_at_5
value: 55.31
- type: ndcg_at_1
value: 46.44
- type: ndcg_at_10
value: 35.776
- type: ndcg_at_100
value: 32.891999999999996
- type: ndcg_at_1000
value: 41.835
- type: ndcg_at_3
value: 41.812
- type: ndcg_at_5
value: 39.249
- type: precision_at_1
value: 48.297000000000004
- type: precision_at_10
value: 26.687
- type: precision_at_100
value: 8.511000000000001
- type: precision_at_1000
value: 2.128
- type: precision_at_3
value: 39.009
- type: precision_at_5
value: 33.994
- type: recall_at_1
value: 5.742
- type: recall_at_10
value: 16.993
- type: recall_at_100
value: 33.69
- type: recall_at_1000
value: 66.75
- type: recall_at_3
value: 10.817
- type: recall_at_5
value: 13.256
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 30.789
- type: map_at_10
value: 45.751999999999995
- type: map_at_100
value: 46.766000000000005
- type: map_at_1000
value: 46.798
- type: map_at_3
value: 41.746
- type: map_at_5
value: 44.046
- type: mrr_at_1
value: 34.618
- type: mrr_at_10
value: 48.288
- type: mrr_at_100
value: 49.071999999999996
- type: mrr_at_1000
value: 49.094
- type: mrr_at_3
value: 44.979
- type: mrr_at_5
value: 46.953
- type: ndcg_at_1
value: 34.589
- type: ndcg_at_10
value: 53.151
- type: ndcg_at_100
value: 57.537000000000006
- type: ndcg_at_1000
value: 58.321999999999996
- type: ndcg_at_3
value: 45.628
- type: ndcg_at_5
value: 49.474000000000004
- type: precision_at_1
value: 34.589
- type: precision_at_10
value: 8.731
- type: precision_at_100
value: 1.119
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 20.819
- type: precision_at_5
value: 14.728
- type: recall_at_1
value: 30.789
- type: recall_at_10
value: 73.066
- type: recall_at_100
value: 92.27
- type: recall_at_1000
value: 98.18
- type: recall_at_3
value: 53.632999999999996
- type: recall_at_5
value: 62.476
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 54.993
- type: map_at_10
value: 69.07600000000001
- type: map_at_100
value: 70.05799999999999
- type: map_at_1000
value: 70.09
- type: map_at_3
value: 65.456
- type: map_at_5
value: 67.622
- type: mrr_at_1
value: 63.07000000000001
- type: mrr_at_10
value: 72.637
- type: mrr_at_100
value: 73.029
- type: mrr_at_1000
value: 73.033
- type: mrr_at_3
value: 70.572
- type: mrr_at_5
value: 71.86399999999999
- type: ndcg_at_1
value: 63.07000000000001
- type: ndcg_at_10
value: 74.708
- type: ndcg_at_100
value: 77.579
- type: ndcg_at_1000
value: 77.897
- type: ndcg_at_3
value: 69.69999999999999
- type: ndcg_at_5
value: 72.321
- type: precision_at_1
value: 63.07000000000001
- type: precision_at_10
value: 11.851
- type: precision_at_100
value: 1.481
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 30.747000000000003
- type: precision_at_5
value: 20.830000000000002
- type: recall_at_1
value: 54.993
- type: recall_at_10
value: 87.18900000000001
- type: recall_at_100
value: 98.137
- type: recall_at_1000
value: 99.833
- type: recall_at_3
value: 73.654
- type: recall_at_5
value: 80.36
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 35.53178375429036
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 54.520782970558265
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 4.3229999999999995
- type: map_at_10
value: 10.979999999999999
- type: map_at_100
value: 12.867
- type: map_at_1000
value: 13.147
- type: map_at_3
value: 7.973
- type: map_at_5
value: 9.513
- type: mrr_at_1
value: 21.3
- type: mrr_at_10
value: 32.34
- type: mrr_at_100
value: 33.428999999999995
- type: mrr_at_1000
value: 33.489999999999995
- type: mrr_at_3
value: 28.999999999999996
- type: mrr_at_5
value: 31.019999999999996
- type: ndcg_at_1
value: 21.3
- type: ndcg_at_10
value: 18.619
- type: ndcg_at_100
value: 26.108999999999998
- type: ndcg_at_1000
value: 31.253999999999998
- type: ndcg_at_3
value: 17.842
- type: ndcg_at_5
value: 15.673
- type: precision_at_1
value: 21.3
- type: precision_at_10
value: 9.55
- type: precision_at_100
value: 2.0340000000000003
- type: precision_at_1000
value: 0.327
- type: precision_at_3
value: 16.667
- type: precision_at_5
value: 13.76
- type: recall_at_1
value: 4.3229999999999995
- type: recall_at_10
value: 19.387
- type: recall_at_100
value: 41.307
- type: recall_at_1000
value: 66.475
- type: recall_at_3
value: 10.143
- type: recall_at_5
value: 14.007
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 78.77975189382573
- type: cos_sim_spearman
value: 69.81522686267631
- type: euclidean_pearson
value: 71.37617936889518
- type: euclidean_spearman
value: 65.71738481148611
- type: manhattan_pearson
value: 71.58222165832424
- type: manhattan_spearman
value: 65.86851365286654
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 77.75509450443367
- type: cos_sim_spearman
value: 69.66180222442091
- type: euclidean_pearson
value: 74.98512779786111
- type: euclidean_spearman
value: 69.5997451409469
- type: manhattan_pearson
value: 75.50135090962459
- type: manhattan_spearman
value: 69.94984748475302
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 79.42363892383264
- type: cos_sim_spearman
value: 79.66529244176742
- type: euclidean_pearson
value: 79.50429208135942
- type: euclidean_spearman
value: 80.44767586416276
- type: manhattan_pearson
value: 79.58563944997708
- type: manhattan_spearman
value: 80.51452267103
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 79.2749401478149
- type: cos_sim_spearman
value: 74.6076920702392
- type: euclidean_pearson
value: 73.3302002952881
- type: euclidean_spearman
value: 70.67029803077013
- type: manhattan_pearson
value: 73.52699344010296
- type: manhattan_spearman
value: 70.8517556194297
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 83.20884740785921
- type: cos_sim_spearman
value: 83.80600789090722
- type: euclidean_pearson
value: 74.9154089816344
- type: euclidean_spearman
value: 75.69243899592276
- type: manhattan_pearson
value: 75.0312832634451
- type: manhattan_spearman
value: 75.78324960357642
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 79.63194141000497
- type: cos_sim_spearman
value: 80.40118418350866
- type: euclidean_pearson
value: 72.07354384551088
- type: euclidean_spearman
value: 72.28819150373845
- type: manhattan_pearson
value: 72.08736119834145
- type: manhattan_spearman
value: 72.28347083261288
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 66.78512789499386
- type: cos_sim_spearman
value: 66.89125587193288
- type: euclidean_pearson
value: 58.74535708627959
- type: euclidean_spearman
value: 59.62103716794647
- type: manhattan_pearson
value: 59.00494529143961
- type: manhattan_spearman
value: 59.832257846799806
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 75.48960503523992
- type: cos_sim_spearman
value: 76.4223037534204
- type: euclidean_pearson
value: 64.93966381820944
- type: euclidean_spearman
value: 62.39697395373789
- type: manhattan_pearson
value: 65.54480770061505
- type: manhattan_spearman
value: 62.944204863043105
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 77.7331440643619
- type: cos_sim_spearman
value: 78.0748413292835
- type: euclidean_pearson
value: 38.533108233460304
- type: euclidean_spearman
value: 35.37638615280026
- type: manhattan_pearson
value: 41.0639726746513
- type: manhattan_spearman
value: 37.688161243671765
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 58.4628923720782
- type: cos_sim_spearman
value: 59.10093128795948
- type: euclidean_pearson
value: 30.422902393436836
- type: euclidean_spearman
value: 27.837806030497457
- type: manhattan_pearson
value: 32.51576984630963
- type: manhattan_spearman
value: 29.181887010982514
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 86.87447904613737
- type: cos_sim_spearman
value: 87.06554974065622
- type: euclidean_pearson
value: 76.82669047851108
- type: euclidean_spearman
value: 75.45711985511991
- type: manhattan_pearson
value: 77.46644556452847
- type: manhattan_spearman
value: 76.0249120007112
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 17.784495723497468
- type: cos_sim_spearman
value: 11.79629537128697
- type: euclidean_pearson
value: -4.354328445994008
- type: euclidean_spearman
value: -6.984566116230058
- type: manhattan_pearson
value: -4.166751901507852
- type: manhattan_spearman
value: -6.984143198323786
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 76.9009642643449
- type: cos_sim_spearman
value: 78.21764726338341
- type: euclidean_pearson
value: 50.578959144342925
- type: euclidean_spearman
value: 51.664379260719606
- type: manhattan_pearson
value: 53.95690880393329
- type: manhattan_spearman
value: 54.910058464050785
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 86.41638022270219
- type: cos_sim_spearman
value: 86.00477030366811
- type: euclidean_pearson
value: 79.7224037788285
- type: euclidean_spearman
value: 79.21417626867616
- type: manhattan_pearson
value: 80.29412412756984
- type: manhattan_spearman
value: 79.49460867616206
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 79.90432664091082
- type: cos_sim_spearman
value: 80.46007940700204
- type: euclidean_pearson
value: 49.25348015214428
- type: euclidean_spearman
value: 47.13113020475859
- type: manhattan_pearson
value: 54.57291204043908
- type: manhattan_spearman
value: 51.98559736896087
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 52.55164822309034
- type: cos_sim_spearman
value: 51.57629192137736
- type: euclidean_pearson
value: 16.63360593235354
- type: euclidean_spearman
value: 14.479679923782912
- type: manhattan_pearson
value: 18.524867185117472
- type: manhattan_spearman
value: 16.65940056664755
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 46.83690919715875
- type: cos_sim_spearman
value: 45.84993650002922
- type: euclidean_pearson
value: 6.173128686815117
- type: euclidean_spearman
value: 6.260781946306191
- type: manhattan_pearson
value: 7.328440452367316
- type: manhattan_spearman
value: 7.370842306497447
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 64.97916914277232
- type: cos_sim_spearman
value: 66.13392188807865
- type: euclidean_pearson
value: 65.3921146908468
- type: euclidean_spearman
value: 65.8381588635056
- type: manhattan_pearson
value: 65.8866165769975
- type: manhattan_spearman
value: 66.27774050472219
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 25.605130445111545
- type: cos_sim_spearman
value: 30.054844562369254
- type: euclidean_pearson
value: 23.890611005408196
- type: euclidean_spearman
value: 29.07902600726761
- type: manhattan_pearson
value: 24.239478426621833
- type: manhattan_spearman
value: 29.48547576782375
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 61.6665616159781
- type: cos_sim_spearman
value: 65.41310206289988
- type: euclidean_pearson
value: 68.38805493215008
- type: euclidean_spearman
value: 65.22777377603435
- type: manhattan_pearson
value: 69.37445390454346
- type: manhattan_spearman
value: 66.02437701858754
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 15.302891825626372
- type: cos_sim_spearman
value: 31.134517255070097
- type: euclidean_pearson
value: 12.672592658843143
- type: euclidean_spearman
value: 29.14881036784207
- type: manhattan_pearson
value: 13.528545327757735
- type: manhattan_spearman
value: 29.56217928148797
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 28.79299114515319
- type: cos_sim_spearman
value: 47.135864983626206
- type: euclidean_pearson
value: 40.66410787594309
- type: euclidean_spearman
value: 45.09585593138228
- type: manhattan_pearson
value: 42.02561630700308
- type: manhattan_spearman
value: 45.43979983670554
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 46.00096625052943
- type: cos_sim_spearman
value: 58.67147426715496
- type: euclidean_pearson
value: 54.7154367422438
- type: euclidean_spearman
value: 59.003235142442634
- type: manhattan_pearson
value: 56.3116235357115
- type: manhattan_spearman
value: 60.12956331404423
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 29.3396354650316
- type: cos_sim_spearman
value: 43.3632935734809
- type: euclidean_pearson
value: 31.18506539466593
- type: euclidean_spearman
value: 37.531745324803815
- type: manhattan_pearson
value: 32.829038232529015
- type: manhattan_spearman
value: 38.04574361589953
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 62.9596148375188
- type: cos_sim_spearman
value: 66.77653412402461
- type: euclidean_pearson
value: 64.53156585980886
- type: euclidean_spearman
value: 66.2884373036083
- type: manhattan_pearson
value: 65.2831035495143
- type: manhattan_spearman
value: 66.83641945244322
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 79.9138821493919
- type: cos_sim_spearman
value: 80.38097535004677
- type: euclidean_pearson
value: 76.2401499094322
- type: euclidean_spearman
value: 77.00897050735907
- type: manhattan_pearson
value: 76.69531453728563
- type: manhattan_spearman
value: 77.83189696428695
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 51.27009640779202
- type: cos_sim_spearman
value: 51.16120562029285
- type: euclidean_pearson
value: 52.20594985566323
- type: euclidean_spearman
value: 52.75331049709882
- type: manhattan_pearson
value: 52.2725118792549
- type: manhattan_spearman
value: 53.614847968995115
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 70.46044814118835
- type: cos_sim_spearman
value: 75.05760236668672
- type: euclidean_pearson
value: 72.80128921879461
- type: euclidean_spearman
value: 73.81164755219257
- type: manhattan_pearson
value: 72.7863795809044
- type: manhattan_spearman
value: 73.65932033818906
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 61.89276840435938
- type: cos_sim_spearman
value: 65.65042955732055
- type: euclidean_pearson
value: 61.22969491863841
- type: euclidean_spearman
value: 63.451215637904724
- type: manhattan_pearson
value: 61.16138956945465
- type: manhattan_spearman
value: 63.34966179331079
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 56.377577221753626
- type: cos_sim_spearman
value: 53.31223653270353
- type: euclidean_pearson
value: 26.488793041564307
- type: euclidean_spearman
value: 19.524551741701472
- type: manhattan_pearson
value: 24.322868054606474
- type: manhattan_spearman
value: 19.50371443994939
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 69.3634693673425
- type: cos_sim_spearman
value: 68.45051245419702
- type: euclidean_pearson
value: 56.1417414374769
- type: euclidean_spearman
value: 55.89891749631458
- type: manhattan_pearson
value: 57.266417430882925
- type: manhattan_spearman
value: 56.57927102744128
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 60.04169437653179
- type: cos_sim_spearman
value: 65.49531007553446
- type: euclidean_pearson
value: 58.583860732586324
- type: euclidean_spearman
value: 58.80034792537441
- type: manhattan_pearson
value: 59.02513161664622
- type: manhattan_spearman
value: 58.42942047904558
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 48.81035211493999
- type: cos_sim_spearman
value: 53.27599246786967
- type: euclidean_pearson
value: 52.25710699032889
- type: euclidean_spearman
value: 55.22995695529873
- type: manhattan_pearson
value: 51.894901893217884
- type: manhattan_spearman
value: 54.95919975149795
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 36.75993101477816
- type: cos_sim_spearman
value: 43.050156692479355
- type: euclidean_pearson
value: 51.49021084746248
- type: euclidean_spearman
value: 49.54771253090078
- type: manhattan_pearson
value: 54.68410760796417
- type: manhattan_spearman
value: 48.19277197691717
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 48.553763306386486
- type: cos_sim_spearman
value: 28.17180849095055
- type: euclidean_pearson
value: 17.50739087826514
- type: euclidean_spearman
value: 16.903085094570333
- type: manhattan_pearson
value: 20.750046512534112
- type: manhattan_spearman
value: 5.634361698190111
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 82.17107190594417
- type: cos_sim_spearman
value: 80.89611873505183
- type: euclidean_pearson
value: 71.82491561814403
- type: euclidean_spearman
value: 70.33608835403274
- type: manhattan_pearson
value: 71.89538332420133
- type: manhattan_spearman
value: 70.36082395775944
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 79.77047154974562
- type: mrr
value: 94.25887021475256
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 56.328
- type: map_at_10
value: 67.167
- type: map_at_100
value: 67.721
- type: map_at_1000
value: 67.735
- type: map_at_3
value: 64.20400000000001
- type: map_at_5
value: 65.904
- type: mrr_at_1
value: 59.667
- type: mrr_at_10
value: 68.553
- type: mrr_at_100
value: 68.992
- type: mrr_at_1000
value: 69.004
- type: mrr_at_3
value: 66.22200000000001
- type: mrr_at_5
value: 67.739
- type: ndcg_at_1
value: 59.667
- type: ndcg_at_10
value: 72.111
- type: ndcg_at_100
value: 74.441
- type: ndcg_at_1000
value: 74.90599999999999
- type: ndcg_at_3
value: 67.11399999999999
- type: ndcg_at_5
value: 69.687
- type: precision_at_1
value: 59.667
- type: precision_at_10
value: 9.733
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.444000000000003
- type: precision_at_5
value: 17.599999999999998
- type: recall_at_1
value: 56.328
- type: recall_at_10
value: 85.8
- type: recall_at_100
value: 96.167
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 72.433
- type: recall_at_5
value: 78.972
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.8019801980198
- type: cos_sim_ap
value: 94.92527097094644
- type: cos_sim_f1
value: 89.91935483870968
- type: cos_sim_precision
value: 90.65040650406505
- type: cos_sim_recall
value: 89.2
- type: dot_accuracy
value: 99.51782178217822
- type: dot_ap
value: 81.30756869559929
- type: dot_f1
value: 75.88235294117648
- type: dot_precision
value: 74.42307692307692
- type: dot_recall
value: 77.4
- type: euclidean_accuracy
value: 99.73069306930694
- type: euclidean_ap
value: 91.05040371796932
- type: euclidean_f1
value: 85.7889237199582
- type: euclidean_precision
value: 89.82494529540482
- type: euclidean_recall
value: 82.1
- type: manhattan_accuracy
value: 99.73762376237623
- type: manhattan_ap
value: 91.4823412839869
- type: manhattan_f1
value: 86.39836984207845
- type: manhattan_precision
value: 88.05815160955348
- type: manhattan_recall
value: 84.8
- type: max_accuracy
value: 99.8019801980198
- type: max_ap
value: 94.92527097094644
- type: max_f1
value: 89.91935483870968
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 55.13046832022158
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 34.31252463546675
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 51.06639688231414
- type: mrr
value: 51.80205415499534
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
value: 31.963331462886956
- type: cos_sim_spearman
value: 33.59510652629926
- type: dot_pearson
value: 29.033733540882125
- type: dot_spearman
value: 31.550290638315506
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.23600000000000002
- type: map_at_10
value: 2.09
- type: map_at_100
value: 12.466000000000001
- type: map_at_1000
value: 29.852
- type: map_at_3
value: 0.6859999999999999
- type: map_at_5
value: 1.099
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 94.0
- type: mrr_at_100
value: 94.0
- type: mrr_at_1000
value: 94.0
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.0
- type: ndcg_at_1
value: 86.0
- type: ndcg_at_10
value: 81.368
- type: ndcg_at_100
value: 61.879
- type: ndcg_at_1000
value: 55.282
- type: ndcg_at_3
value: 84.816
- type: ndcg_at_5
value: 82.503
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 85.6
- type: precision_at_100
value: 63.85999999999999
- type: precision_at_1000
value: 24.682000000000002
- type: precision_at_3
value: 88.667
- type: precision_at_5
value: 86.0
- type: recall_at_1
value: 0.23600000000000002
- type: recall_at_10
value: 2.25
- type: recall_at_100
value: 15.488
- type: recall_at_1000
value: 52.196
- type: recall_at_3
value: 0.721
- type: recall_at_5
value: 1.159
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 12.7
- type: f1
value: 10.384182044950325
- type: precision
value: 9.805277385275312
- type: recall
value: 12.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 30.63583815028902
- type: f1
value: 24.623726947426373
- type: precision
value: 22.987809919828013
- type: recall
value: 30.63583815028902
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 10.487804878048781
- type: f1
value: 8.255945048627975
- type: precision
value: 7.649047253615001
- type: recall
value: 10.487804878048781
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.154428783776609
- type: precision
value: 5.680727638128585
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 73.0
- type: f1
value: 70.10046605876393
- type: precision
value: 69.0018253968254
- type: recall
value: 73.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 32.7
- type: f1
value: 29.7428583868239
- type: precision
value: 28.81671359506905
- type: recall
value: 32.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 31.5
- type: f1
value: 27.228675552174003
- type: precision
value: 25.950062299847747
- type: recall
value: 31.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 35.82089552238806
- type: f1
value: 28.75836980510979
- type: precision
value: 26.971643613434658
- type: recall
value: 35.82089552238806
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 49.8
- type: f1
value: 43.909237401451776
- type: precision
value: 41.944763440988936
- type: recall
value: 49.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 18.536585365853657
- type: f1
value: 15.020182570246751
- type: precision
value: 14.231108073213337
- type: recall
value: 18.536585365853657
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 8.7
- type: f1
value: 6.2934784902885355
- type: precision
value: 5.685926293425392
- type: recall
value: 8.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 12.879708383961116
- type: f1
value: 10.136118341751114
- type: precision
value: 9.571444036679436
- type: recall
value: 12.879708383961116
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 9.217391304347826
- type: f1
value: 6.965003297761793
- type: precision
value: 6.476093529199119
- type: recall
value: 9.217391304347826
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 4.3478260869565215
- type: f1
value: 3.3186971707677397
- type: precision
value: 3.198658632552104
- type: recall
value: 4.3478260869565215
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 6.9
- type: f1
value: 4.760708297894056
- type: precision
value: 4.28409511756074
- type: recall
value: 6.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 2.1999999999999997
- type: f1
value: 1.6862703878117107
- type: precision
value: 1.6048118233915603
- type: recall
value: 2.1999999999999997
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 3.0156815440289506
- type: f1
value: 2.0913257250659134
- type: precision
value: 1.9072775486461648
- type: recall
value: 3.0156815440289506
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 49.0
- type: f1
value: 45.5254456536713
- type: precision
value: 44.134609250398725
- type: recall
value: 49.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 33.5
- type: f1
value: 28.759893973182564
- type: precision
value: 27.401259116024836
- type: recall
value: 33.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 10.2
- type: f1
value: 8.030039981676275
- type: precision
value: 7.548748077210127
- type: recall
value: 10.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 38.095238095238095
- type: f1
value: 31.944999250262406
- type: precision
value: 30.04452690166976
- type: recall
value: 38.095238095238095
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 4.8
- type: f1
value: 3.2638960786708067
- type: precision
value: 3.0495382950729644
- type: recall
value: 4.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 15.8
- type: f1
value: 12.131087470371275
- type: precision
value: 11.141304011547815
- type: recall
value: 15.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 23.3
- type: f1
value: 21.073044636921384
- type: precision
value: 20.374220568287285
- type: recall
value: 23.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 24.9
- type: f1
value: 20.091060685364987
- type: precision
value: 18.899700591081224
- type: recall
value: 24.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 70.1
- type: f1
value: 64.62940836940835
- type: precision
value: 62.46559523809524
- type: recall
value: 70.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 7.199999999999999
- type: f1
value: 5.06613460576115
- type: precision
value: 4.625224463391809
- type: recall
value: 7.199999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 1.7999999999999998
- type: f1
value: 1.2716249514772895
- type: precision
value: 1.2107445914723798
- type: recall
value: 1.7999999999999998
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 65.5
- type: f1
value: 59.84399711399712
- type: precision
value: 57.86349567099567
- type: recall
value: 65.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.48333333333333
- type: precision
value: 93.89999999999999
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 0.8086253369272237
- type: f1
value: 0.4962046191492002
- type: precision
value: 0.47272438578554393
- type: recall
value: 0.8086253369272237
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 69.23076923076923
- type: f1
value: 64.6227941099736
- type: precision
value: 63.03795877325289
- type: recall
value: 69.23076923076923
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 20.599999999999998
- type: f1
value: 16.62410040660465
- type: precision
value: 15.598352437967069
- type: recall
value: 20.599999999999998
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 4.318181818181818
- type: f1
value: 2.846721192535661
- type: precision
value: 2.6787861417537147
- type: recall
value: 4.318181818181818
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 74.84276729559748
- type: f1
value: 70.6638714185884
- type: precision
value: 68.86792452830188
- type: recall
value: 74.84276729559748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 15.9
- type: f1
value: 12.793698974586706
- type: precision
value: 12.088118017657736
- type: recall
value: 15.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 59.92217898832685
- type: f1
value: 52.23086900129701
- type: precision
value: 49.25853869433636
- type: recall
value: 59.92217898832685
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 27.350427350427353
- type: f1
value: 21.033781033781032
- type: precision
value: 19.337955491801644
- type: recall
value: 27.350427350427353
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 29.299999999999997
- type: f1
value: 23.91597452425777
- type: precision
value: 22.36696598364942
- type: recall
value: 29.299999999999997
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 27.3
- type: f1
value: 22.059393517688886
- type: precision
value: 20.503235534170887
- type: recall
value: 27.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 8.177570093457943
- type: f1
value: 4.714367017906037
- type: precision
value: 4.163882933965758
- type: recall
value: 8.177570093457943
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 5.800000000000001
- type: f1
value: 4.4859357432293825
- type: precision
value: 4.247814465614043
- type: recall
value: 5.800000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 78.4
- type: f1
value: 73.67166666666667
- type: precision
value: 71.83285714285714
- type: recall
value: 78.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 50.3
- type: f1
value: 44.85221545883311
- type: precision
value: 43.04913026243909
- type: recall
value: 50.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 83.5
- type: f1
value: 79.95151515151515
- type: precision
value: 78.53611111111111
- type: recall
value: 83.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 69.89999999999999
- type: f1
value: 65.03756269256269
- type: precision
value: 63.233519536019536
- type: recall
value: 69.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.44666666666666
- type: precision
value: 90.63333333333333
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 8.3
- type: f1
value: 6.553388144729963
- type: precision
value: 6.313497782829976
- type: recall
value: 8.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 83.6
- type: f1
value: 79.86243107769424
- type: precision
value: 78.32555555555555
- type: recall
value: 83.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 9.166666666666666
- type: f1
value: 6.637753604420271
- type: precision
value: 6.10568253585495
- type: recall
value: 9.166666666666666
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 7.3999999999999995
- type: f1
value: 4.6729483612322165
- type: precision
value: 4.103844520292658
- type: recall
value: 7.3999999999999995
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 80.30000000000001
- type: f1
value: 75.97666666666667
- type: precision
value: 74.16
- type: recall
value: 80.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 23.214285714285715
- type: f1
value: 16.88988095238095
- type: precision
value: 15.364937641723353
- type: recall
value: 23.214285714285715
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 33.15038419319429
- type: f1
value: 27.747873024072415
- type: precision
value: 25.99320572578704
- type: recall
value: 33.15038419319429
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 2.6
- type: f1
value: 1.687059048752127
- type: precision
value: 1.5384884521299
- type: recall
value: 2.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 93.30000000000001
- type: f1
value: 91.44000000000001
- type: precision
value: 90.59166666666667
- type: recall
value: 93.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.61666666666667
- type: precision
value: 91.88333333333333
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 5.0
- type: f1
value: 3.589591971281927
- type: precision
value: 3.3046491614532854
- type: recall
value: 5.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 45.9
- type: f1
value: 40.171969141969136
- type: precision
value: 38.30764368870302
- type: recall
value: 45.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 16.900000000000002
- type: f1
value: 14.094365204207351
- type: precision
value: 13.276519841269844
- type: recall
value: 16.900000000000002
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 12.8
- type: f1
value: 10.376574912567156
- type: precision
value: 9.758423963284509
- type: recall
value: 12.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 8.1
- type: f1
value: 6.319455355175778
- type: precision
value: 5.849948830628881
- type: recall
value: 8.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.19666666666667
- type: precision
value: 93.60000000000001
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 19.1
- type: f1
value: 16.280080686081906
- type: precision
value: 15.451573089395668
- type: recall
value: 19.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 30.656934306569344
- type: f1
value: 23.2568647897115
- type: precision
value: 21.260309034031664
- type: recall
value: 30.656934306569344
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 2.1999999999999997
- type: f1
value: 1.556861047295521
- type: precision
value: 1.4555993437238521
- type: recall
value: 2.1999999999999997
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 27.500000000000004
- type: f1
value: 23.521682636223492
- type: precision
value: 22.345341306967683
- type: recall
value: 27.500000000000004
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 7.3999999999999995
- type: f1
value: 5.344253880846173
- type: precision
value: 4.999794279068863
- type: recall
value: 7.3999999999999995
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 0.5952380952380952
- type: f1
value: 0.026455026455026457
- type: precision
value: 0.013528138528138528
- type: recall
value: 0.5952380952380952
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 7.3
- type: f1
value: 5.853140211779251
- type: precision
value: 5.505563080945322
- type: recall
value: 7.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 13.250517598343686
- type: f1
value: 9.676349506190704
- type: precision
value: 8.930392053553216
- type: recall
value: 13.250517598343686
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 14.499999999999998
- type: f1
value: 11.68912588067557
- type: precision
value: 11.024716513105519
- type: recall
value: 14.499999999999998
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 30.099999999999998
- type: f1
value: 26.196880936315146
- type: precision
value: 25.271714086169478
- type: recall
value: 30.099999999999998
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 6.4
- type: f1
value: 5.1749445942023335
- type: precision
value: 4.975338142029625
- type: recall
value: 6.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 39.39393939393939
- type: f1
value: 35.005707393767096
- type: precision
value: 33.64342032053631
- type: recall
value: 39.39393939393939
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 18.3206106870229
- type: f1
value: 12.610893447220345
- type: precision
value: 11.079228765297467
- type: recall
value: 18.3206106870229
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 85.58951965065502
- type: f1
value: 83.30363944928548
- type: precision
value: 82.40026591554977
- type: recall
value: 85.58951965065502
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 65.7
- type: f1
value: 59.589642857142856
- type: precision
value: 57.392826797385624
- type: recall
value: 65.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 18.07909604519774
- type: f1
value: 13.65194306689995
- type: precision
value: 12.567953943826327
- type: recall
value: 18.07909604519774
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 4.6
- type: f1
value: 2.8335386392505013
- type: precision
value: 2.558444143575722
- type: recall
value: 4.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 90.7
- type: f1
value: 88.30666666666666
- type: precision
value: 87.195
- type: recall
value: 90.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 57.699999999999996
- type: f1
value: 53.38433067253876
- type: precision
value: 51.815451335350346
- type: recall
value: 57.699999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 80.60000000000001
- type: f1
value: 77.0290354090354
- type: precision
value: 75.61685897435898
- type: recall
value: 80.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 24.6
- type: f1
value: 19.52814960069739
- type: precision
value: 18.169084599880502
- type: recall
value: 24.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 5.0
- type: f1
value: 3.4078491753102376
- type: precision
value: 3.1757682319102387
- type: recall
value: 5.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 1.2064343163538873
- type: f1
value: 0.4224313053283095
- type: precision
value: 0.3360484946842894
- type: recall
value: 1.2064343163538873
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 76.1
- type: f1
value: 71.36246031746032
- type: precision
value: 69.5086544011544
- type: recall
value: 76.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 14.229249011857709
- type: f1
value: 10.026578603653704
- type: precision
value: 9.09171178352764
- type: recall
value: 14.229249011857709
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 8.450704225352112
- type: f1
value: 5.51214407186151
- type: precision
value: 4.928281812084629
- type: recall
value: 8.450704225352112
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 7.664670658682635
- type: f1
value: 5.786190079917295
- type: precision
value: 5.3643643579244
- type: recall
value: 7.664670658682635
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 90.5
- type: f1
value: 88.03999999999999
- type: precision
value: 86.94833333333334
- type: recall
value: 90.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 7.389162561576355
- type: f1
value: 5.482366349556517
- type: precision
value: 5.156814449917898
- type: recall
value: 7.389162561576355
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 41.54929577464789
- type: f1
value: 36.13520282534367
- type: precision
value: 34.818226488560995
- type: recall
value: 41.54929577464789
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 20.76923076923077
- type: f1
value: 16.742497560177643
- type: precision
value: 15.965759712090138
- type: recall
value: 20.76923076923077
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 88.1
- type: f1
value: 85.23176470588236
- type: precision
value: 84.04458333333334
- type: recall
value: 88.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 11.899791231732777
- type: f1
value: 8.776706659565102
- type: precision
value: 8.167815946521582
- type: recall
value: 11.899791231732777
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 6.1
- type: f1
value: 4.916589537178435
- type: precision
value: 4.72523017415345
- type: recall
value: 6.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 76.54723127035831
- type: f1
value: 72.75787187839306
- type: precision
value: 71.43338442869005
- type: recall
value: 76.54723127035831
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 11.700000000000001
- type: f1
value: 9.975679190026007
- type: precision
value: 9.569927715653522
- type: recall
value: 11.700000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 13.100000000000001
- type: f1
value: 10.697335850115408
- type: precision
value: 10.113816082086341
- type: recall
value: 13.100000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 76.37795275590551
- type: f1
value: 71.12860892388451
- type: precision
value: 68.89763779527559
- type: recall
value: 76.37795275590551
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 13.700000000000001
- type: f1
value: 10.471861684067568
- type: precision
value: 9.602902567641697
- type: recall
value: 13.700000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 0.554016620498615
- type: f1
value: 0.37034084643642423
- type: precision
value: 0.34676040281208437
- type: recall
value: 0.554016620498615
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 12.4
- type: f1
value: 9.552607451092534
- type: precision
value: 8.985175505050504
- type: recall
value: 12.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 33.65384615384615
- type: f1
value: 27.820512820512818
- type: precision
value: 26.09432234432234
- type: recall
value: 33.65384615384615
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 74.5
- type: f1
value: 70.09686507936507
- type: precision
value: 68.3117857142857
- type: recall
value: 74.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 88.3
- type: f1
value: 85.37333333333333
- type: precision
value: 84.05833333333334
- type: recall
value: 88.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 25.0
- type: f1
value: 22.393124632031995
- type: precision
value: 21.58347686592367
- type: recall
value: 25.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 0.589622641509434
- type: f1
value: 0.15804980033762941
- type: precision
value: 0.1393275384872965
- type: recall
value: 0.589622641509434
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 4.1000000000000005
- type: f1
value: 3.4069011332551775
- type: precision
value: 3.1784507042253516
- type: recall
value: 4.1000000000000005
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 3.102189781021898
- type: f1
value: 2.223851811694751
- type: precision
value: 2.103465682299194
- type: recall
value: 3.102189781021898
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: ed9e4a974f867fd9736efcf222fc3a26487387a5
metrics:
- type: accuracy
value: 83.1
- type: f1
value: 79.58255835667599
- type: precision
value: 78.09708333333333
- type: recall
value: 83.1
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 2.322
- type: map_at_10
value: 8.959999999999999
- type: map_at_100
value: 15.136
- type: map_at_1000
value: 16.694
- type: map_at_3
value: 4.837000000000001
- type: map_at_5
value: 6.196
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 47.589999999999996
- type: mrr_at_100
value: 48.166
- type: mrr_at_1000
value: 48.169000000000004
- type: mrr_at_3
value: 43.197
- type: mrr_at_5
value: 45.646
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 23.982
- type: ndcg_at_100
value: 35.519
- type: ndcg_at_1000
value: 46.878
- type: ndcg_at_3
value: 26.801000000000002
- type: ndcg_at_5
value: 24.879
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 22.041
- type: precision_at_100
value: 7.4079999999999995
- type: precision_at_1000
value: 1.492
- type: precision_at_3
value: 28.571
- type: precision_at_5
value: 25.306
- type: recall_at_1
value: 2.322
- type: recall_at_10
value: 15.443999999999999
- type: recall_at_100
value: 45.918
- type: recall_at_1000
value: 79.952
- type: recall_at_3
value: 6.143
- type: recall_at_5
value: 8.737
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 66.5452
- type: ap
value: 12.99191723223892
- type: f1
value: 51.667665096195734
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 55.854555744199196
- type: f1
value: 56.131766302254185
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 37.27891385518074
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.53102461703523
- type: cos_sim_ap
value: 65.30753664579191
- type: cos_sim_f1
value: 61.739943872778305
- type: cos_sim_precision
value: 55.438891222175556
- type: cos_sim_recall
value: 69.65699208443272
- type: dot_accuracy
value: 80.38981939560112
- type: dot_ap
value: 53.52081118421347
- type: dot_f1
value: 54.232957844617346
- type: dot_precision
value: 48.43393486828459
- type: dot_recall
value: 61.60949868073878
- type: euclidean_accuracy
value: 82.23758717291531
- type: euclidean_ap
value: 60.361102792772535
- type: euclidean_f1
value: 57.50518791791561
- type: euclidean_precision
value: 51.06470106470107
- type: euclidean_recall
value: 65.8047493403694
- type: manhattan_accuracy
value: 82.14221851344102
- type: manhattan_ap
value: 60.341937223793366
- type: manhattan_f1
value: 57.53803596127247
- type: manhattan_precision
value: 51.08473188702415
- type: manhattan_recall
value: 65.85751978891821
- type: max_accuracy
value: 83.53102461703523
- type: max_ap
value: 65.30753664579191
- type: max_f1
value: 61.739943872778305
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.75305623471883
- type: cos_sim_ap
value: 85.46387153880272
- type: cos_sim_f1
value: 77.91527673159008
- type: cos_sim_precision
value: 72.93667315828353
- type: cos_sim_recall
value: 83.62334462580844
- type: dot_accuracy
value: 85.08169363915086
- type: dot_ap
value: 74.96808060965559
- type: dot_f1
value: 71.39685033990366
- type: dot_precision
value: 64.16948111759288
- type: dot_recall
value: 80.45888512473051
- type: euclidean_accuracy
value: 85.84235650250321
- type: euclidean_ap
value: 78.42045145247211
- type: euclidean_f1
value: 70.32669630775179
- type: euclidean_precision
value: 70.6298050788227
- type: euclidean_recall
value: 70.02617801047121
- type: manhattan_accuracy
value: 85.86176116738464
- type: manhattan_ap
value: 78.54012451558276
- type: manhattan_f1
value: 70.56508080693389
- type: manhattan_precision
value: 69.39626293456413
- type: manhattan_recall
value: 71.77394518016631
- type: max_accuracy
value: 88.75305623471883
- type: max_ap
value: 85.46387153880272
- type: max_f1
value: 77.91527673159008
---
## Usage
For usage instructions, refer to: https://github.com/Muennighoff/sgpt#asymmetric-semantic-search-be
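As a quick orientation, here is a minimal sketch of encoding with the sentence-transformers API; the model id is a placeholder for this repository, and the sketch omits the SGPT-specific query/document bracket tokens described in the linked repository, so refer there for the full asymmetric-search setup.
```python
from sentence_transformers import SentenceTransformer, util

# Placeholder id: substitute the actual id or local path of this model.
model = SentenceTransformer("path-or-id-of-this-model")

query_emb = model.encode(["what causes rainbows"], convert_to_tensor=True)
doc_emb = model.encode(
    ["Rainbows are caused by refraction and dispersion of sunlight in water droplets."],
    convert_to_tensor=True,
)
print(util.cos_sim(query_emb, doc_emb))  # higher score = more relevant
```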
The model was trained with the command
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch examples/training/ms_marco/train_bi-encoder_mnrl.py --model_name bigscience/bloom-7b1 --train_batch_size 32 --eval_batch_size 16 --freezenonbias --specb --lr 4e-4 --wandb --wandbwatchlog gradients --pooling weightedmean --gradcache --chunksize 8
```
## Evaluation Results
`{"ndcgs": {"sgpt-bloom-7b1-msmarco": {"scifact": {"NDCG@10": 0.71824}, "nfcorpus": {"NDCG@10": 0.35748}, "arguana": {"NDCG@10": 0.47281}, "scidocs": {"NDCG@10": 0.18435}, "fiqa": {"NDCG@10": 0.35736}, "cqadupstack": {"NDCG@10": 0.3708525}, "quora": {"NDCG@10": 0.74655}, "trec-covid": {"NDCG@10": 0.82731}, "webis-touche2020": {"NDCG@10": 0.2365}}}`
See the evaluation folder or [MTEB](https://huggingface.co/spaces/mteb/leaderboard) for more results.
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
The model uses BitFit (bias-only fine-tuning), weighted-mean pooling & GradCache; for details see: https://arxiv.org/abs/2202.08904
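For intuition, the following is a rough sketch (not the actual training code) of position-weighted mean pooling in the SGPT style, where later tokens receive larger weights:
```python
import torch

def weighted_mean_pool(token_embeds: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """token_embeds: (batch, seq_len, hidden); attention_mask: (batch, seq_len) of 0/1."""
    positions = torch.arange(1, token_embeds.size(1) + 1, device=token_embeds.device).float()
    weights = positions.unsqueeze(0) * attention_mask   # zero out padding; weight grows with position
    weights = weights / weights.sum(dim=1, keepdim=True)  # normalise per sequence
    return (token_embeds * weights.unsqueeze(-1)).sum(dim=1)  # (batch, hidden)
```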
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MNRLGradCache`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0004
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
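As a sketch of how such a configuration maps onto the sentence-transformers API (using the standard `MultipleNegativesRankingLoss`; the `MNRLGradCache` variant above comes from the SGPT codebase, and the model id and training pairs below are placeholders):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("path-or-id-of-this-model")  # placeholder
train_examples = [InputExample(texts=["example query", "matching passage"])]  # placeholder pairs
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler="WarmupLinear",
    warmup_steps=1000,
    optimizer_params={"lr": 4e-4},
    weight_decay=0.01,
    max_grad_norm=1,
)
```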
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: BloomModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
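An equivalent module composition can be assembled by hand with sentence-transformers; this is a sketch mirroring the listing above (loading `bigscience/bloom-7b1` as the backbone is memory-intensive):
```python
from sentence_transformers import SentenceTransformer, models

backbone = models.Transformer("bigscience/bloom-7b1", max_seq_length=300, do_lower_case=False)
pooling = models.Pooling(
    backbone.get_word_embedding_dimension(),  # 4096 for BLOOM-7B1
    pooling_mode_mean_tokens=False,
    pooling_mode_weightedmean_tokens=True,
)
model = SentenceTransformer(modules=[backbone, pooling])
```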
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
nold/CroissantLLMBase-GGUF | nold | text2text-generation | [
"gguf",
"legal",
"code",
"text-generation-inference",
"art",
"text2text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"dataset:croissantllm/croissant_dataset",
"arxiv:2402.00786",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2024-02-14T13:15:08 | 2024-02-14T13:38:14 | 58 | 0 | ---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
- croissantllm/croissant_dataset
language:
- fr
- en
license: mit
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (190k steps, Final version)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 190k steps (2.99T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
https://arxiv.org/abs/2402.00786
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bibtex
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno Guerreiro and António Loison and Duarte Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro Martins and Antoni Bigata Casademunt and François Yvon and André Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Usage
This model is a base model; that is, it is not fine-tuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/CroissantLLMBase"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.\nHe is heading to the market. -> Il va au marché.\nWe are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.3)
print(tokenizer.decode(tokens[0]))
# remove bos token
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```
***
Quantization of Model [croissantllm/CroissantLLMBase](https://huggingface.co/croissantllm/CroissantLLMBase). Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline [8668cbd2081063e33a128251312e6de9744d0a64]
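Since this repository ships GGUF files, a minimal sketch of running one of them with `llama-cpp-python` is shown below; the filename is a placeholder and should be replaced with one of the quantized files actually present in the repo:
```python
from llama_cpp import Llama

# Hypothetical filename: pick an actual .gguf file from this repository.
llm = Llama(model_path="CroissantLLMBase.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.\n"
    "He is heading to the market. ->"
)
out = llm(prompt, max_tokens=50, temperature=0.3)
print(out["choices"][0]["text"])
```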
| [
"TRANSLATION"
] | [
"CRAFT"
] |
nomic-ai/modernbert-embed-base-unsupervised | nomic-ai | sentence-similarity | [
"sentence-transformers",
"safetensors",
"modernbert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"arxiv:2402.01613",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-22T21:52:33 | 2024-12-30T01:23:53 | 58 | 10 | ---
base_model:
- answerdotai/ModernBERT-base
language:
- en
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: binarize_False
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: None
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.20895522388061
- type: ap
value: 39.2507182700391
- type: f1
value: 70.1524994873644
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: None
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.66092499999999
- type: ap
value: 88.67291765528996
- type: f1
value: 91.65254265062715
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: None
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.768
- type: f1
value: 46.1529444051673
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: None
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 24.964
- type: map_at_10
value: 39.891
- type: map_at_100
value: 41.015
- type: map_at_1000
value: 41.027
- type: map_at_20
value: 40.788999999999994
- type: map_at_3
value: 35.016999999999996
- type: map_at_5
value: 37.445
- type: mrr_at_1
value: 25.462
- type: mrr_at_10
value: 40.081
- type: mrr_at_100
value: 41.204
- type: mrr_at_1000
value: 41.216
- type: mrr_at_20
value: 40.979
- type: mrr_at_3
value: 35.171
- type: mrr_at_5
value: 37.66
- type: ndcg_at_1
value: 24.964
- type: ndcg_at_10
value: 48.815999999999995
- type: ndcg_at_100
value: 53.415
- type: ndcg_at_1000
value: 53.70399999999999
- type: ndcg_at_20
value: 51.983000000000004
- type: ndcg_at_3
value: 38.417
- type: ndcg_at_5
value: 42.833
- type: precision_at_1
value: 24.964
- type: precision_at_10
value: 7.774
- type: precision_at_100
value: 0.9740000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.502
- type: precision_at_3
value: 16.098000000000003
- type: precision_at_5
value: 11.821
- type: recall_at_1
value: 24.964
- type: recall_at_10
value: 77.738
- type: recall_at_100
value: 97.368
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_20
value: 90.04299999999999
- type: recall_at_3
value: 48.293
- type: recall_at_5
value: 59.104
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: None
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.46642893138737
- type: v_measures
value:
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- 0.4630870464942426
- 0.48179626796437086
- 0.48112541416510324
- 0.4785458846844729
- 0.4667757763219518
- 0.4888239384534906
- 0.48913193407033156
- 0.45400599455241203
- 0.4796128193217837
- 0.4826602649834829
- 0.5534097832418009
- 0.547017625264848
- 0.5534875637912158
- 0.5545166479145291
- 0.551868078347376
- 0.5565074707024643
- 0.5454716112544638
- 0.549704436465488
- 0.5522699426270606
- 0.5473649503725682
- 0.5210558655702533
- 0.3091614875108429
- 0.4435292091514286
- 0.40925130602725246
- 0.35095638279275543
- 0.27771465836053044
- 0.3062679436429392
- 0.2356102795990061
- 0.31737058583388944
- 1.0
- 0.2664917992477291
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: None
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 37.92904883350074
- type: v_measures
value:
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- 0.3942530387223539
- 0.4037300750919399
- 0.37628324974390925
- 0.373319550245667
- 0.3885322788174104
- 0.38119794461431755
- 0.3823715539670135
- 0.39591870914604277
- 0.39418963009800245
- 0.3792863969189305
- 0.4284515442623109
- 0.43507367250415546
- 0.42700571785176217
- 0.4361446299823968
- 0.43904291221017366
- 0.4299550538908112
- 0.43238309813164827
- 0.42903116749560066
- 0.4205715584354972
- 0.42679694564103793
- 0.39803191142389904
- 0.2294459267018928
- 0.2818297992588612
- 0.335409231908862
- 0.2840591462499585
- 0.2126881092800587
- 0.23725806040439548
- 0.16296784316806723
- 0.23662008905329618
- 1.0
- 0.2061562931649559
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: None
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.546570214269444
- type: mrr
value: 73.57197819109176
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: None
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.82818534163955
- type: cos_sim_spearman
value: 87.48572836142807
- type: euclidean_pearson
value: 87.85699699546558
- type: euclidean_spearman
value: 87.43873933894409
- type: manhattan_pearson
value: 87.41736797732888
- type: manhattan_spearman
value: 87.07909258993207
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: None
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.57792207792207
- type: f1
value: 84.52727174280496
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: None
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.72890855696805
- type: v_measures
value:
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- 0.37363201017038467
- 0.39757192882016223
- 0.3941873172297799
- 0.3907542489870819
- 0.3703403333497427
- 0.3937364067847444
- 0.39592901588688134
- 0.3974412620588268
- 0.37304573120688667
- 0.38625260120231425
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: None
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 33.88310773970377
- type: v_measures
value:
- 0.3416913241231775
- 0.343634850219928
- 0.3538569088433259
- 0.330378640451087
- 0.33046099405309765
- 0.35265391146515984
- 0.3331545004828837
- 0.3245349825114234
- 0.3352908890916282
- 0.3426537727286653
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.62
- type: map_at_10
value: 45.384
- type: map_at_100
value: 46.739999999999995
- type: map_at_1000
value: 46.847
- type: map_at_20
value: 46.099000000000004
- type: map_at_3
value: 41.766
- type: map_at_5
value: 43.891000000000005
- type: mrr_at_1
value: 40.916000000000004
- type: mrr_at_10
value: 51.15
- type: mrr_at_100
value: 51.797000000000004
- type: mrr_at_1000
value: 51.833
- type: mrr_at_20
value: 51.529
- type: mrr_at_3
value: 48.592999999999996
- type: mrr_at_5
value: 50.124
- type: ndcg_at_1
value: 40.916000000000004
- type: ndcg_at_10
value: 51.76500000000001
- type: ndcg_at_100
value: 56.706
- type: ndcg_at_1000
value: 58.406000000000006
- type: ndcg_at_20
value: 53.53
- type: ndcg_at_3
value: 46.916999999999994
- type: ndcg_at_5
value: 49.282
- type: precision_at_1
value: 40.916000000000004
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.534
- type: precision_at_1000
value: 0.194
- type: precision_at_20
value: 5.722
- type: precision_at_3
value: 22.747
- type: precision_at_5
value: 16.338
- type: recall_at_1
value: 33.62
- type: recall_at_10
value: 63.768
- type: recall_at_100
value: 84.629
- type: recall_at_1000
value: 95.488
- type: recall_at_20
value: 70.127
- type: recall_at_3
value: 49.563
- type: recall_at_5
value: 56.467999999999996
- type: map_at_1
value: 28.017500000000002
- type: map_at_10
value: 37.226000000000006
- type: map_at_100
value: 38.387249999999995
- type: map_at_1000
value: 38.497
- type: map_at_20
value: 37.8685
- type: map_at_3
value: 34.45716666666666
- type: map_at_5
value: 36.02891666666667
- type: mrr_at_1
value: 33.0525
- type: mrr_at_10
value: 41.375249999999994
- type: mrr_at_100
value: 42.214083333333335
- type: mrr_at_1000
value: 42.266416666666665
- type: mrr_at_20
value: 41.868833333333335
- type: mrr_at_3
value: 39.14641666666667
- type: mrr_at_5
value: 40.44550000000001
- type: ndcg_at_1
value: 33.0525
- type: ndcg_at_10
value: 42.40116666666667
- type: ndcg_at_100
value: 47.34408333333333
- type: ndcg_at_1000
value: 49.45733333333333
- type: ndcg_at_20
value: 44.33925
- type: ndcg_at_3
value: 37.934916666666666
- type: ndcg_at_5
value: 40.07458333333334
- type: precision_at_1
value: 33.0525
- type: precision_at_10
value: 7.330500000000001
- type: precision_at_100
value: 1.1537499999999998
- type: precision_at_1000
value: 0.1514166666666667
- type: precision_at_20
value: 4.298583333333333
- type: precision_at_3
value: 17.37725
- type: precision_at_5
value: 12.249500000000001
- type: recall_at_1
value: 28.017500000000002
- type: recall_at_10
value: 53.424416666666666
- type: recall_at_100
value: 75.08983333333332
- type: recall_at_1000
value: 89.7495
- type: recall_at_20
value: 60.53375000000001
- type: recall_at_3
value: 40.93975000000001
- type: recall_at_5
value: 46.51383333333333
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 32.43
- type: map_at_10
value: 43.56
- type: map_at_100
value: 44.772
- type: map_at_1000
value: 44.894
- type: map_at_20
value: 44.207
- type: map_at_3
value: 40.163
- type: map_at_5
value: 42.053000000000004
- type: mrr_at_1
value: 40.764
- type: mrr_at_10
value: 49.718
- type: mrr_at_100
value: 50.265
- type: mrr_at_1000
value: 50.304
- type: mrr_at_20
value: 50.009
- type: mrr_at_3
value: 47.473
- type: mrr_at_5
value: 48.801
- type: ndcg_at_1
value: 40.764
- type: ndcg_at_10
value: 49.571
- type: ndcg_at_100
value: 53.474999999999994
- type: ndcg_at_1000
value: 55.309
- type: ndcg_at_20
value: 51.001
- type: ndcg_at_3
value: 45.107
- type: ndcg_at_5
value: 47.164
- type: precision_at_1
value: 40.764
- type: precision_at_10
value: 9.49
- type: precision_at_100
value: 1.467
- type: precision_at_1000
value: 0.191
- type: precision_at_20
value: 5.513
- type: precision_at_3
value: 21.996
- type: precision_at_5
value: 15.631
- type: recall_at_1
value: 32.43
- type: recall_at_10
value: 59.935
- type: recall_at_100
value: 76.386
- type: recall_at_1000
value: 88.011
- type: recall_at_20
value: 65.071
- type: recall_at_3
value: 46.56
- type: recall_at_5
value: 52.513
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 43.195
- type: map_at_10
value: 56.013000000000005
- type: map_at_100
value: 56.957
- type: map_at_1000
value: 57.006
- type: map_at_20
value: 56.596000000000004
- type: map_at_3
value: 52.807
- type: map_at_5
value: 54.555
- type: mrr_at_1
value: 49.592000000000006
- type: mrr_at_10
value: 59.399
- type: mrr_at_100
value: 59.995
- type: mrr_at_1000
value: 60.019999999999996
- type: mrr_at_20
value: 59.77400000000001
- type: mrr_at_3
value: 57.220000000000006
- type: mrr_at_5
value: 58.48100000000001
- type: ndcg_at_1
value: 49.592000000000006
- type: ndcg_at_10
value: 61.682
- type: ndcg_at_100
value: 65.33
- type: ndcg_at_1000
value: 66.29
- type: ndcg_at_20
value: 63.298
- type: ndcg_at_3
value: 56.538999999999994
- type: ndcg_at_5
value: 58.946
- type: precision_at_1
value: 49.592000000000006
- type: precision_at_10
value: 9.824
- type: precision_at_100
value: 1.25
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_20
value: 5.423
- type: precision_at_3
value: 25.119999999999997
- type: precision_at_5
value: 16.977999999999998
- type: recall_at_1
value: 43.195
- type: recall_at_10
value: 74.979
- type: recall_at_100
value: 90.701
- type: recall_at_1000
value: 97.474
- type: recall_at_20
value: 80.951
- type: recall_at_3
value: 61.275999999999996
- type: recall_at_5
value: 67.143
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.254
- type: map_at_10
value: 35.74
- type: map_at_100
value: 36.702
- type: map_at_1000
value: 36.782
- type: map_at_20
value: 36.258
- type: map_at_3
value: 33.341
- type: map_at_5
value: 34.666999999999994
- type: mrr_at_1
value: 28.927000000000003
- type: mrr_at_10
value: 37.396
- type: mrr_at_100
value: 38.267
- type: mrr_at_1000
value: 38.328
- type: mrr_at_20
value: 37.865
- type: mrr_at_3
value: 35.141
- type: mrr_at_5
value: 36.35
- type: ndcg_at_1
value: 28.927000000000003
- type: ndcg_at_10
value: 40.403
- type: ndcg_at_100
value: 45.241
- type: ndcg_at_1000
value: 47.278999999999996
- type: ndcg_at_20
value: 42.241
- type: ndcg_at_3
value: 35.754999999999995
- type: ndcg_at_5
value: 37.935
- type: precision_at_1
value: 28.927000000000003
- type: precision_at_10
value: 6.056
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 3.458
- type: precision_at_3
value: 14.915000000000001
- type: precision_at_5
value: 10.282
- type: recall_at_1
value: 27.254
- type: recall_at_10
value: 52.967
- type: recall_at_100
value: 75.224
- type: recall_at_1000
value: 90.617
- type: recall_at_20
value: 60.053
- type: recall_at_3
value: 40.548
- type: recall_at_5
value: 45.741
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 17.291999999999998
- type: map_at_10
value: 25.56
- type: map_at_100
value: 26.694000000000003
- type: map_at_1000
value: 26.813
- type: map_at_20
value: 26.169999999999998
- type: map_at_3
value: 23.151
- type: map_at_5
value: 24.535
- type: mrr_at_1
value: 21.517
- type: mrr_at_10
value: 30.097
- type: mrr_at_100
value: 31.087999999999997
- type: mrr_at_1000
value: 31.157
- type: mrr_at_20
value: 30.689
- type: mrr_at_3
value: 27.736
- type: mrr_at_5
value: 29.154000000000003
- type: ndcg_at_1
value: 21.517
- type: ndcg_at_10
value: 30.636000000000003
- type: ndcg_at_100
value: 36.015
- type: ndcg_at_1000
value: 38.800000000000004
- type: ndcg_at_20
value: 32.716
- type: ndcg_at_3
value: 26.316
- type: ndcg_at_5
value: 28.46
- type: precision_at_1
value: 21.517
- type: precision_at_10
value: 5.585
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.132
- type: precision_at_20
value: 3.34
- type: precision_at_3
value: 12.769
- type: precision_at_5
value: 9.254
- type: recall_at_1
value: 17.291999999999998
- type: recall_at_10
value: 41.677
- type: recall_at_100
value: 64.92999999999999
- type: recall_at_1000
value: 84.41300000000001
- type: recall_at_20
value: 49.18
- type: recall_at_3
value: 29.836000000000002
- type: recall_at_5
value: 35.284
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 29.215000000000003
- type: map_at_10
value: 39.919
- type: map_at_100
value: 41.209
- type: map_at_1000
value: 41.31
- type: map_at_20
value: 40.62
- type: map_at_3
value: 36.565
- type: map_at_5
value: 38.439
- type: mrr_at_1
value: 35.996
- type: mrr_at_10
value: 45.39
- type: mrr_at_100
value: 46.205
- type: mrr_at_1000
value: 46.247
- type: mrr_at_20
value: 45.867000000000004
- type: mrr_at_3
value: 42.782
- type: mrr_at_5
value: 44.235
- type: ndcg_at_1
value: 35.996
- type: ndcg_at_10
value: 46.032000000000004
- type: ndcg_at_100
value: 51.397999999999996
- type: ndcg_at_1000
value: 53.215
- type: ndcg_at_20
value: 48.128
- type: ndcg_at_3
value: 40.78
- type: ndcg_at_5
value: 43.187999999999995
- type: precision_at_1
value: 35.996
- type: precision_at_10
value: 8.402
- type: precision_at_100
value: 1.304
- type: precision_at_1000
value: 0.161
- type: precision_at_20
value: 4.913
- type: precision_at_3
value: 19.442
- type: precision_at_5
value: 13.84
- type: recall_at_1
value: 29.215000000000003
- type: recall_at_10
value: 58.846
- type: recall_at_100
value: 81.255
- type: recall_at_1000
value: 93.10300000000001
- type: recall_at_20
value: 66.193
- type: recall_at_3
value: 43.842
- type: recall_at_5
value: 50.157
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.654000000000003
- type: map_at_10
value: 38.635000000000005
- type: map_at_100
value: 39.898
- type: map_at_1000
value: 40.003
- type: map_at_20
value: 39.33
- type: map_at_3
value: 35.705999999999996
- type: map_at_5
value: 37.294
- type: mrr_at_1
value: 34.589
- type: mrr_at_10
value: 43.580000000000005
- type: mrr_at_100
value: 44.455
- type: mrr_at_1000
value: 44.505
- type: mrr_at_20
value: 44.088
- type: mrr_at_3
value: 41.419
- type: mrr_at_5
value: 42.635
- type: ndcg_at_1
value: 34.589
- type: ndcg_at_10
value: 44.021
- type: ndcg_at_100
value: 49.653999999999996
- type: ndcg_at_1000
value: 51.695
- type: ndcg_at_20
value: 46.190999999999995
- type: ndcg_at_3
value: 39.568999999999996
- type: ndcg_at_5
value: 41.53
- type: precision_at_1
value: 34.589
- type: precision_at_10
value: 7.865
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.16
- type: precision_at_20
value: 4.618
- type: precision_at_3
value: 18.531
- type: precision_at_5
value: 13.081999999999999
- type: recall_at_1
value: 28.654000000000003
- type: recall_at_10
value: 54.785
- type: recall_at_100
value: 79.532
- type: recall_at_1000
value: 92.99199999999999
- type: recall_at_20
value: 62.605
- type: recall_at_3
value: 42.559000000000005
- type: recall_at_5
value: 47.664
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 25.277
- type: map_at_10
value: 32.135000000000005
- type: map_at_100
value: 33.105000000000004
- type: map_at_1000
value: 33.194
- type: map_at_20
value: 32.696
- type: map_at_3
value: 30.173
- type: map_at_5
value: 31.291000000000004
- type: mrr_at_1
value: 28.221
- type: mrr_at_10
value: 34.915
- type: mrr_at_100
value: 35.812
- type: mrr_at_1000
value: 35.876000000000005
- type: mrr_at_20
value: 35.447
- type: mrr_at_3
value: 33.154
- type: mrr_at_5
value: 34.19
- type: ndcg_at_1
value: 28.221
- type: ndcg_at_10
value: 36.086
- type: ndcg_at_100
value: 40.778999999999996
- type: ndcg_at_1000
value: 43.024
- type: ndcg_at_20
value: 38.019
- type: ndcg_at_3
value: 32.57
- type: ndcg_at_5
value: 34.272000000000006
- type: precision_at_1
value: 28.221
- type: precision_at_10
value: 5.567
- type: precision_at_100
value: 0.84
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 3.2520000000000002
- type: precision_at_3
value: 13.905999999999999
- type: precision_at_5
value: 9.54
- type: recall_at_1
value: 25.277
- type: recall_at_10
value: 45.426
- type: recall_at_100
value: 66.63900000000001
- type: recall_at_1000
value: 83.25
- type: recall_at_20
value: 52.723
- type: recall_at_3
value: 35.629
- type: recall_at_5
value: 39.916000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.365000000000002
- type: map_at_10
value: 25.387999999999998
- type: map_at_100
value: 26.394000000000002
- type: map_at_1000
value: 26.509
- type: map_at_20
value: 25.927
- type: map_at_3
value: 23.182
- type: map_at_5
value: 24.383
- type: mrr_at_1
value: 22.402
- type: mrr_at_10
value: 29.465000000000003
- type: mrr_at_100
value: 30.330000000000002
- type: mrr_at_1000
value: 30.404999999999998
- type: mrr_at_20
value: 29.95
- type: mrr_at_3
value: 27.415
- type: mrr_at_5
value: 28.548000000000002
- type: ndcg_at_1
value: 22.402
- type: ndcg_at_10
value: 29.872
- type: ndcg_at_100
value: 34.747
- type: ndcg_at_1000
value: 37.592999999999996
- type: ndcg_at_20
value: 31.653
- type: ndcg_at_3
value: 26.040999999999997
- type: ndcg_at_5
value: 27.755999999999997
- type: precision_at_1
value: 22.402
- type: precision_at_10
value: 5.337
- type: precision_at_100
value: 0.8959999999999999
- type: precision_at_1000
value: 0.13
- type: precision_at_20
value: 3.1850000000000005
- type: precision_at_3
value: 12.239
- type: precision_at_5
value: 8.692
- type: recall_at_1
value: 18.365000000000002
- type: recall_at_10
value: 39.283
- type: recall_at_100
value: 61.412
- type: recall_at_1000
value: 81.922
- type: recall_at_20
value: 45.917
- type: recall_at_3
value: 28.462
- type: recall_at_5
value: 33.040000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 29.687
- type: map_at_10
value: 37.667
- type: map_at_100
value: 38.76
- type: map_at_1000
value: 38.863
- type: map_at_20
value: 38.287
- type: map_at_3
value: 35.157
- type: map_at_5
value: 36.732
- type: mrr_at_1
value: 35.168
- type: mrr_at_10
value: 42.309999999999995
- type: mrr_at_100
value: 43.169999999999995
- type: mrr_at_1000
value: 43.227
- type: mrr_at_20
value: 42.826
- type: mrr_at_3
value: 40.065
- type: mrr_at_5
value: 41.549
- type: ndcg_at_1
value: 35.168
- type: ndcg_at_10
value: 42.463
- type: ndcg_at_100
value: 47.475
- type: ndcg_at_1000
value: 49.735
- type: ndcg_at_20
value: 44.440000000000005
- type: ndcg_at_3
value: 38.108
- type: ndcg_at_5
value: 40.507
- type: precision_at_1
value: 35.168
- type: precision_at_10
value: 6.847
- type: precision_at_100
value: 1.048
- type: precision_at_1000
value: 0.134
- type: precision_at_20
value: 3.9510000000000005
- type: precision_at_3
value: 16.884
- type: precision_at_5
value: 11.884
- type: recall_at_1
value: 29.687
- type: recall_at_10
value: 52.413
- type: recall_at_100
value: 74.21799999999999
- type: recall_at_1000
value: 90.022
- type: recall_at_20
value: 59.559
- type: recall_at_3
value: 40.717999999999996
- type: recall_at_5
value: 46.833999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 28.233000000000004
- type: map_at_10
value: 36.492000000000004
- type: map_at_100
value: 38.157999999999994
- type: map_at_1000
value: 38.391999999999996
- type: map_at_20
value: 37.336999999999996
- type: map_at_3
value: 33.833999999999996
- type: map_at_5
value: 35.225
- type: mrr_at_1
value: 33.399
- type: mrr_at_10
value: 40.983000000000004
- type: mrr_at_100
value: 42.065999999999995
- type: mrr_at_1000
value: 42.117
- type: mrr_at_20
value: 41.635
- type: mrr_at_3
value: 38.999
- type: mrr_at_5
value: 40.105000000000004
- type: ndcg_at_1
value: 33.399
- type: ndcg_at_10
value: 41.764
- type: ndcg_at_100
value: 47.894
- type: ndcg_at_1000
value: 50.304
- type: ndcg_at_20
value: 43.986999999999995
- type: ndcg_at_3
value: 37.861
- type: ndcg_at_5
value: 39.532000000000004
- type: precision_at_1
value: 33.399
- type: precision_at_10
value: 7.806
- type: precision_at_100
value: 1.609
- type: precision_at_1000
value: 0.244
- type: precision_at_20
value: 5.01
- type: precision_at_3
value: 17.655
- type: precision_at_5
value: 12.49
- type: recall_at_1
value: 28.233000000000004
- type: recall_at_10
value: 51.031000000000006
- type: recall_at_100
value: 78.597
- type: recall_at_1000
value: 93.907
- type: recall_at_20
value: 59.231
- type: recall_at_3
value: 39.018
- type: recall_at_5
value: 43.905
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 22.988
- type: map_at_10
value: 30.219
- type: map_at_100
value: 31.258000000000003
- type: map_at_1000
value: 31.351000000000003
- type: map_at_20
value: 30.895
- type: map_at_3
value: 27.641
- type: map_at_5
value: 29.282000000000004
- type: mrr_at_1
value: 25.139
- type: mrr_at_10
value: 32.1
- type: mrr_at_100
value: 33.119
- type: mrr_at_1000
value: 33.178000000000004
- type: mrr_at_20
value: 32.747
- type: mrr_at_3
value: 29.759999999999998
- type: mrr_at_5
value: 31.174000000000003
- type: ndcg_at_1
value: 25.139
- type: ndcg_at_10
value: 34.519
- type: ndcg_at_100
value: 39.415
- type: ndcg_at_1000
value: 41.837999999999994
- type: ndcg_at_20
value: 36.867
- type: ndcg_at_3
value: 29.656
- type: ndcg_at_5
value: 32.323
- type: precision_at_1
value: 25.139
- type: precision_at_10
value: 5.287
- type: precision_at_100
value: 0.823
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_20
value: 3.198
- type: precision_at_3
value: 12.323
- type: precision_at_5
value: 8.982999999999999
- type: recall_at_1
value: 22.988
- type: recall_at_10
value: 45.983000000000004
- type: recall_at_100
value: 67.55499999999999
- type: recall_at_1000
value: 85.795
- type: recall_at_20
value: 54.795
- type: recall_at_3
value: 33.266
- type: recall_at_5
value: 39.501
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: None
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 9.466
- type: map_at_10
value: 15.967
- type: map_at_100
value: 17.804000000000002
- type: map_at_1000
value: 18.003
- type: map_at_20
value: 16.929
- type: map_at_3
value: 13.248
- type: map_at_5
value: 14.6
- type: mrr_at_1
value: 21.303
- type: mrr_at_10
value: 30.908
- type: mrr_at_100
value: 32.16
- type: mrr_at_1000
value: 32.211
- type: mrr_at_20
value: 31.721
- type: mrr_at_3
value: 27.6
- type: mrr_at_5
value: 29.402
- type: ndcg_at_1
value: 21.303
- type: ndcg_at_10
value: 22.972
- type: ndcg_at_100
value: 30.782999999999998
- type: ndcg_at_1000
value: 34.382000000000005
- type: ndcg_at_20
value: 25.983
- type: ndcg_at_3
value: 18.278
- type: ndcg_at_5
value: 19.894000000000002
- type: precision_at_1
value: 21.303
- type: precision_at_10
value: 7.225
- type: precision_at_100
value: 1.549
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_20
value: 4.883
- type: precision_at_3
value: 13.442000000000002
- type: precision_at_5
value: 10.463000000000001
- type: recall_at_1
value: 9.466
- type: recall_at_10
value: 28.261999999999997
- type: recall_at_100
value: 55.541
- type: recall_at_1000
value: 75.723
- type: recall_at_20
value: 36.934
- type: recall_at_3
value: 16.862
- type: recall_at_5
value: 21.365000000000002
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: None
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 8.425
- type: map_at_10
value: 18.485
- type: map_at_100
value: 25.790000000000003
- type: map_at_1000
value: 27.205000000000002
- type: map_at_20
value: 21.201
- type: map_at_3
value: 13.26
- type: map_at_5
value: 15.328
- type: mrr_at_1
value: 62.0
- type: mrr_at_10
value: 70.954
- type: mrr_at_100
value: 71.311
- type: mrr_at_1000
value: 71.318
- type: mrr_at_20
value: 71.18100000000001
- type: mrr_at_3
value: 68.708
- type: mrr_at_5
value: 70.296
- type: ndcg_at_1
value: 50.0
- type: ndcg_at_10
value: 37.972
- type: ndcg_at_100
value: 42.725
- type: ndcg_at_1000
value: 49.617
- type: ndcg_at_20
value: 37.478
- type: ndcg_at_3
value: 42.378
- type: ndcg_at_5
value: 39.678000000000004
- type: precision_at_1
value: 62.0
- type: precision_at_10
value: 30.175
- type: precision_at_100
value: 9.56
- type: precision_at_1000
value: 1.8350000000000002
- type: precision_at_20
value: 22.400000000000002
- type: precision_at_3
value: 46.083
- type: precision_at_5
value: 38.65
- type: recall_at_1
value: 8.425
- type: recall_at_10
value: 24.52
- type: recall_at_100
value: 49.79
- type: recall_at_1000
value: 71.27799999999999
- type: recall_at_20
value: 30.938
- type: recall_at_3
value: 14.466999999999999
- type: recall_at_5
value: 18.13
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: None
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 43.615
- type: f1
value: 40.05868641887659
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: None
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 46.028000000000006
- type: map_at_10
value: 60.24699999999999
- type: map_at_100
value: 60.745000000000005
- type: map_at_1000
value: 60.763
- type: map_at_20
value: 60.590999999999994
- type: map_at_3
value: 57.32000000000001
- type: map_at_5
value: 59.245999999999995
- type: mrr_at_1
value: 49.565
- type: mrr_at_10
value: 63.980000000000004
- type: mrr_at_100
value: 64.393
- type: mrr_at_1000
value: 64.398
- type: mrr_at_20
value: 64.285
- type: mrr_at_3
value: 61.109
- type: mrr_at_5
value: 63.032999999999994
- type: ndcg_at_1
value: 49.565
- type: ndcg_at_10
value: 67.391
- type: ndcg_at_100
value: 69.488
- type: ndcg_at_1000
value: 69.82000000000001
- type: ndcg_at_20
value: 68.55499999999999
- type: ndcg_at_3
value: 61.768
- type: ndcg_at_5
value: 65.09899999999999
- type: precision_at_1
value: 49.565
- type: precision_at_10
value: 9.388
- type: precision_at_100
value: 1.055
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 4.958
- type: precision_at_3
value: 25.602999999999998
- type: precision_at_5
value: 17.177
- type: recall_at_1
value: 46.028000000000006
- type: recall_at_10
value: 85.685
- type: recall_at_100
value: 94.64099999999999
- type: recall_at_1000
value: 96.878
- type: recall_at_20
value: 90.065
- type: recall_at_3
value: 70.783
- type: recall_at_5
value: 78.818
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: None
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 20.371
- type: map_at_10
value: 34.364
- type: map_at_100
value: 36.315
- type: map_at_1000
value: 36.477
- type: map_at_20
value: 35.443999999999996
- type: map_at_3
value: 29.845
- type: map_at_5
value: 32.559
- type: mrr_at_1
value: 41.049
- type: mrr_at_10
value: 50.552
- type: mrr_at_100
value: 51.33
- type: mrr_at_1000
value: 51.361000000000004
- type: mrr_at_20
value: 51.032
- type: mrr_at_3
value: 48.251
- type: mrr_at_5
value: 49.509
- type: ndcg_at_1
value: 41.049
- type: ndcg_at_10
value: 42.527
- type: ndcg_at_100
value: 49.293
- type: ndcg_at_1000
value: 52.014
- type: ndcg_at_20
value: 45.245999999999995
- type: ndcg_at_3
value: 38.802
- type: ndcg_at_5
value: 40.19
- type: precision_at_1
value: 41.049
- type: precision_at_10
value: 11.914
- type: precision_at_100
value: 1.889
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_20
value: 7.106
- type: precision_at_3
value: 26.44
- type: precision_at_5
value: 19.599
- type: recall_at_1
value: 20.371
- type: recall_at_10
value: 50.20099999999999
- type: recall_at_100
value: 74.85300000000001
- type: recall_at_1000
value: 91.166
- type: recall_at_20
value: 58.559000000000005
- type: recall_at_3
value: 35.32
- type: recall_at_5
value: 42.106
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: None
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 35.205999999999996
- type: map_at_10
value: 50.463
- type: map_at_100
value: 51.321000000000005
- type: map_at_1000
value: 51.391
- type: map_at_20
value: 50.965
- type: map_at_3
value: 47.331
- type: map_at_5
value: 49.247
- type: mrr_at_1
value: 70.41199999999999
- type: mrr_at_10
value: 77.577
- type: mrr_at_100
value: 77.835
- type: mrr_at_1000
value: 77.847
- type: mrr_at_20
value: 77.755
- type: mrr_at_3
value: 76.291
- type: mrr_at_5
value: 77.128
- type: ndcg_at_1
value: 70.41199999999999
- type: ndcg_at_10
value: 60.002
- type: ndcg_at_100
value: 63.1
- type: ndcg_at_1000
value: 64.491
- type: ndcg_at_20
value: 61.321000000000005
- type: ndcg_at_3
value: 55.318999999999996
- type: ndcg_at_5
value: 57.886
- type: precision_at_1
value: 70.41199999999999
- type: precision_at_10
value: 12.46
- type: precision_at_100
value: 1.488
- type: precision_at_1000
value: 0.167
- type: precision_at_20
value: 6.656
- type: precision_at_3
value: 34.634
- type: precision_at_5
value: 22.804
- type: recall_at_1
value: 35.205999999999996
- type: recall_at_10
value: 62.302
- type: recall_at_100
value: 74.409
- type: recall_at_1000
value: 83.633
- type: recall_at_20
value: 66.556
- type: recall_at_3
value: 51.951
- type: recall_at_5
value: 57.009
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: None
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.73559999999999
- type: ap
value: 84.40550091347858
- type: f1
value: 88.6897413895929
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: None
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 15.634
- type: map_at_10
value: 27.287
- type: map_at_100
value: 28.62
- type: map_at_1000
value: 28.677999999999997
- type: map_at_20
value: 28.113
- type: map_at_3
value: 23.227999999999998
- type: map_at_5
value: 25.509999999999998
- type: mrr_at_1
value: 16.103
- type: mrr_at_10
value: 27.772999999999996
- type: mrr_at_100
value: 29.055999999999997
- type: mrr_at_1000
value: 29.108
- type: mrr_at_20
value: 28.573999999999998
- type: mrr_at_3
value: 23.766000000000002
- type: mrr_at_5
value: 26.005
- type: ndcg_at_1
value: 16.103
- type: ndcg_at_10
value: 34.233999999999995
- type: ndcg_at_100
value: 40.748
- type: ndcg_at_1000
value: 42.189
- type: ndcg_at_20
value: 37.199
- type: ndcg_at_3
value: 25.913999999999998
- type: ndcg_at_5
value: 29.992
- type: precision_at_1
value: 16.103
- type: precision_at_10
value: 5.838
- type: precision_at_100
value: 0.909
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_20
value: 3.535
- type: precision_at_3
value: 11.471
- type: precision_at_5
value: 8.953999999999999
- type: recall_at_1
value: 15.634
- type: recall_at_10
value: 55.887
- type: recall_at_100
value: 86.188
- type: recall_at_1000
value: 97.17
- type: recall_at_20
value: 67.461
- type: recall_at_3
value: 33.17
- type: recall_at_5
value: 42.988
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: None
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.2936616507068
- type: f1
value: 92.02636761092074
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: None
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.5248518011856
- type: f1
value: 53.05521175765365
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: None
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.21856086079356
- type: f1
value: 67.85484208485116
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: None
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.27236045729657
- type: f1
value: 74.916229419199
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: None
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.750593892555116
- type: v_measures
value:
- 0.30689136699710556
- 0.31435532289406576
- 0.3149165244680583
- 0.31942972122175306
- 0.3185331208118458
- 0.33682182366550517
- 0.3394323349184708
- 0.34389267115248884
- 0.3459164509339567
- 0.33487005219226135
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: None
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.746118307596042
- type: v_measures
value:
- 0.295221871793276
- 0.30138555768270153
- 0.28285264542859556
- 0.2954786531542634
- 0.30124320780785346
- 0.3194920452805882
- 0.31660681255160195
- 0.33097353066945473
- 0.33177822982735117
- 0.29957927656391736
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: None
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.633568196946598
- type: mrr
value: 31.699313664022284
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: None
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.164
- type: map_at_10
value: 13.353000000000002
- type: map_at_100
value: 16.468
- type: map_at_1000
value: 17.916
- type: map_at_20
value: 14.677000000000001
- type: map_at_3
value: 9.976
- type: map_at_5
value: 11.369
- type: mrr_at_1
value: 48.297000000000004
- type: mrr_at_10
value: 55.779
- type: mrr_at_100
value: 56.367
- type: mrr_at_1000
value: 56.413000000000004
- type: mrr_at_20
value: 56.123999999999995
- type: mrr_at_3
value: 54.334
- type: mrr_at_5
value: 55.00000000000001
- type: ndcg_at_1
value: 46.285
- type: ndcg_at_10
value: 35.333999999999996
- type: ndcg_at_100
value: 31.696999999999996
- type: ndcg_at_1000
value: 40.544999999999995
- type: ndcg_at_20
value: 32.694
- type: ndcg_at_3
value: 41.373
- type: ndcg_at_5
value: 38.324999999999996
- type: precision_at_1
value: 48.297000000000004
- type: precision_at_10
value: 26.006
- type: precision_at_100
value: 7.901
- type: precision_at_1000
value: 2.073
- type: precision_at_20
value: 18.884999999999998
- type: precision_at_3
value: 38.7
- type: precision_at_5
value: 32.632
- type: recall_at_1
value: 6.164
- type: recall_at_10
value: 16.913
- type: recall_at_100
value: 30.956
- type: recall_at_1000
value: 63.147
- type: recall_at_20
value: 20.319000000000003
- type: recall_at_3
value: 10.894
- type: recall_at_5
value: 13.039000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: None
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 22.707
- type: map_at_10
value: 37.815
- type: map_at_100
value: 39.129000000000005
- type: map_at_1000
value: 39.157
- type: map_at_20
value: 38.685
- type: map_at_3
value: 32.784
- type: map_at_5
value: 35.66
- type: mrr_at_1
value: 25.695
- type: mrr_at_10
value: 40.245999999999995
- type: mrr_at_100
value: 41.239
- type: mrr_at_1000
value: 41.259
- type: mrr_at_20
value: 40.907
- type: mrr_at_3
value: 35.936
- type: mrr_at_5
value: 38.457
- type: ndcg_at_1
value: 25.666
- type: ndcg_at_10
value: 46.317
- type: ndcg_at_100
value: 51.82
- type: ndcg_at_1000
value: 52.471999999999994
- type: ndcg_at_20
value: 49.175000000000004
- type: ndcg_at_3
value: 36.69
- type: ndcg_at_5
value: 41.537
- type: precision_at_1
value: 25.666
- type: precision_at_10
value: 8.34
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_20
value: 4.848
- type: precision_at_3
value: 17.304
- type: precision_at_5
value: 13.163
- type: recall_at_1
value: 22.707
- type: recall_at_10
value: 69.988
- type: recall_at_100
value: 93.733
- type: recall_at_1000
value: 98.571
- type: recall_at_20
value: 80.71199999999999
- type: recall_at_3
value: 44.858
- type: recall_at_5
value: 56.035000000000004
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: None
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 70.88600000000001
- type: map_at_10
value: 84.848
- type: map_at_100
value: 85.45700000000001
- type: map_at_1000
value: 85.473
- type: map_at_20
value: 85.239
- type: map_at_3
value: 81.89800000000001
- type: map_at_5
value: 83.786
- type: mrr_at_1
value: 81.64
- type: mrr_at_10
value: 87.90400000000001
- type: mrr_at_100
value: 87.98899999999999
- type: mrr_at_1000
value: 87.99
- type: mrr_at_20
value: 87.968
- type: mrr_at_3
value: 86.978
- type: mrr_at_5
value: 87.631
- type: ndcg_at_1
value: 81.66
- type: ndcg_at_10
value: 88.627
- type: ndcg_at_100
value: 89.769
- type: ndcg_at_1000
value: 89.86800000000001
- type: ndcg_at_20
value: 89.232
- type: ndcg_at_3
value: 85.804
- type: ndcg_at_5
value: 87.41
- type: precision_at_1
value: 81.66
- type: precision_at_10
value: 13.427
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.106
- type: precision_at_3
value: 37.492999999999995
- type: precision_at_5
value: 24.666
- type: recall_at_1
value: 70.88600000000001
- type: recall_at_10
value: 95.734
- type: recall_at_100
value: 99.565
- type: recall_at_1000
value: 99.982
- type: recall_at_20
value: 97.661
- type: recall_at_3
value: 87.605
- type: recall_at_5
value: 92.169
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: None
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.21080787817221
- type: v_measures
value:
- 0.5673345398559368
- 0.6207955639665198
- 0.49172704540335
- 0.5613519584334187
- 0.5287355061030274
- 0.5241710337741662
- 0.5989619393352348
- 0.5047898087704462
- 0.5347507660674999
- 0.546453253548092
- 0.5222264596468855
- 0.5688140378164993
- 0.5588319773871532
- 0.5847911401438255
- 0.6690822373658819
- 0.5243915696652743
- 0.6141150363888348
- 0.6633119609787945
- 0.5417146255579326
- 0.5335099806695802
- 0.5290316249519529
- 0.5206989825465232
- 0.6395926790977834
- 0.5687818613145609
- 0.5347363807538766
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: None
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 62.695441918144745
- type: v_measures
value:
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- 0.6973642134040027
- 0.7167216887741535
- 0.6567314795009059
- 0.6913306724738202
- 0.6676599210494237
- 0.40983324085642114
- 0.7120033776430696
- 0.6222687713138416
- 0.36420652956305893
- 0.7314242972357771
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: None
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 4.803
- type: map_at_10
value: 11.965
- type: map_at_100
value: 13.969000000000001
- type: map_at_1000
value: 14.251
- type: map_at_20
value: 13.074
- type: map_at_3
value: 8.704
- type: map_at_5
value: 10.39
- type: mrr_at_1
value: 23.7
- type: mrr_at_10
value: 34.300000000000004
- type: mrr_at_100
value: 35.413
- type: mrr_at_1000
value: 35.47
- type: mrr_at_20
value: 34.971999999999994
- type: mrr_at_3
value: 31.35
- type: mrr_at_5
value: 33.11
- type: ndcg_at_1
value: 23.7
- type: ndcg_at_10
value: 19.833000000000002
- type: ndcg_at_100
value: 27.543
- type: ndcg_at_1000
value: 32.657000000000004
- type: ndcg_at_20
value: 22.753999999999998
- type: ndcg_at_3
value: 19.371
- type: ndcg_at_5
value: 16.81
- type: precision_at_1
value: 23.7
- type: precision_at_10
value: 10.08
- type: precision_at_100
value: 2.114
- type: precision_at_1000
value: 0.335
- type: precision_at_20
value: 6.7299999999999995
- type: precision_at_3
value: 18.099999999999998
- type: precision_at_5
value: 14.680000000000001
- type: recall_at_1
value: 4.803
- type: recall_at_10
value: 20.408
- type: recall_at_100
value: 42.937999999999995
- type: recall_at_1000
value: 67.957
- type: recall_at_20
value: 27.253
- type: recall_at_3
value: 11.008
- type: recall_at_5
value: 14.878
- task:
type: STS
dataset:
name: MTEB SICK-R
type: None
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 80.56573112423378
- type: cos_sim_spearman
value: 74.17802402341557
- type: euclidean_pearson
value: 77.64719557838848
- type: euclidean_spearman
value: 74.18218845491099
- type: manhattan_pearson
value: 77.65349040610312
- type: manhattan_spearman
value: 74.24528452265194
- task:
type: STS
dataset:
name: MTEB STS12
type: None
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 77.8662929981252
- type: cos_sim_spearman
value: 73.18685763781161
- type: euclidean_pearson
value: 74.05220881070804
- type: euclidean_spearman
value: 73.1802498913973
- type: manhattan_pearson
value: 73.95702570399803
- type: manhattan_spearman
value: 73.148251314861
- task:
type: STS
dataset:
name: MTEB STS13
type: None
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.3566965914774
- type: cos_sim_spearman
value: 83.57082995137267
- type: euclidean_pearson
value: 83.0673597536666
- type: euclidean_spearman
value: 83.56179042864954
- type: manhattan_pearson
value: 82.99371986719699
- type: manhattan_spearman
value: 83.4564971341052
- task:
type: STS
dataset:
name: MTEB STS14
type: None
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.29928049097985
- type: cos_sim_spearman
value: 79.24507751018872
- type: euclidean_pearson
value: 81.05714342924686
- type: euclidean_spearman
value: 79.24448099194757
- type: manhattan_pearson
value: 81.1323440664372
- type: manhattan_spearman
value: 79.33271509619381
- task:
type: STS
dataset:
name: MTEB STS15
type: None
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.52550571006007
- type: cos_sim_spearman
value: 87.09852049607704
- type: euclidean_pearson
value: 86.6667274835381
- type: euclidean_spearman
value: 87.10282548900487
- type: manhattan_pearson
value: 86.65166599447521
- type: manhattan_spearman
value: 87.08134750847402
- task:
type: STS
dataset:
name: MTEB STS16
type: None
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.03173421048572
- type: cos_sim_spearman
value: 82.68144478503824
- type: euclidean_pearson
value: 82.16342331747909
- type: euclidean_spearman
value: 82.68199277546111
- type: manhattan_pearson
value: 82.17641395526667
- type: manhattan_spearman
value: 82.70409481262362
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: None
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.83421066375598
- type: cos_sim_spearman
value: 88.1065724802746
- type: euclidean_pearson
value: 87.9179286282574
- type: euclidean_spearman
value: 88.13943838539143
- type: manhattan_pearson
value: 87.78121970619249
- type: manhattan_spearman
value: 87.97091893740061
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: None
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.18977730855335
- type: cos_sim_spearman
value: 64.32281973949075
- type: euclidean_pearson
value: 65.88520469364576
- type: euclidean_spearman
value: 64.33592296112258
- type: manhattan_pearson
value: 65.77016266953936
- type: manhattan_spearman
value: 64.37327935074376
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: None
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 83.82183713235801
- type: cos_sim_spearman
value: 83.40253231983908
- type: euclidean_pearson
value: 83.3368925429508
- type: euclidean_spearman
value: 83.40496299801828
- type: manhattan_pearson
value: 83.37982295504875
- type: manhattan_spearman
value: 83.44331438539328
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: None
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 81.57437869315952
- type: mrr
value: 95.02558715794011
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: None
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 56.89999999999999
- type: map_at_10
value: 67.637
- type: map_at_100
value: 68.107
- type: map_at_1000
value: 68.128
- type: map_at_20
value: 67.92099999999999
- type: map_at_3
value: 64.86
- type: map_at_5
value: 66.44200000000001
- type: mrr_at_1
value: 59.333000000000006
- type: mrr_at_10
value: 68.352
- type: mrr_at_100
value: 68.74
- type: mrr_at_1000
value: 68.76100000000001
- type: mrr_at_20
value: 68.597
- type: mrr_at_3
value: 66.333
- type: mrr_at_5
value: 67.583
- type: ndcg_at_1
value: 59.333000000000006
- type: ndcg_at_10
value: 72.30199999999999
- type: ndcg_at_100
value: 74.374
- type: ndcg_at_1000
value: 74.995
- type: ndcg_at_20
value: 73.22800000000001
- type: ndcg_at_3
value: 67.584
- type: ndcg_at_5
value: 69.95700000000001
- type: precision_at_1
value: 59.333000000000006
- type: precision_at_10
value: 9.700000000000001
- type: precision_at_100
value: 1.08
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.050000000000001
- type: precision_at_3
value: 26.667
- type: precision_at_5
value: 17.533
- type: recall_at_1
value: 56.89999999999999
- type: recall_at_10
value: 85.68900000000001
- type: recall_at_100
value: 95.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 89.2
- type: recall_at_3
value: 72.906
- type: recall_at_5
value: 79.039
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: None
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81485148514851
- type: cos_sim_ap
value: 95.58169993718987
- type: cos_sim_f1
value: 90.5027932960894
- type: cos_sim_precision
value: 91.95046439628483
- type: cos_sim_recall
value: 89.1
- type: dot_accuracy
value: 99.81485148514851
- type: dot_ap
value: 95.5719777669169
- type: dot_f1
value: 90.51243023845764
- type: dot_precision
value: 91.86405767250257
- type: dot_recall
value: 89.2
- type: euclidean_accuracy
value: 99.81584158415842
- type: euclidean_ap
value: 95.58771856329962
- type: euclidean_f1
value: 90.54878048780488
- type: euclidean_precision
value: 92.04545454545455
- type: euclidean_recall
value: 89.1
- type: manhattan_accuracy
value: 99.81287128712871
- type: manhattan_ap
value: 95.58869634659905
- type: manhattan_f1
value: 90.30271934325295
- type: manhattan_precision
value: 92.72918861959958
- type: manhattan_recall
value: 88.0
- type: max_accuracy
value: 99.81584158415842
- type: max_ap
value: 95.58869634659905
- type: max_f1
value: 90.54878048780488
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: None
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.81466934930805
- type: v_measures
value:
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- 0.6308932508075157
- 0.6858425302866819
- 0.6480916795406368
- 0.621952158159244
- 0.6919485686557781
- 0.6599644850002667
- 0.6549319157284832
- 0.729102722021156
- 0.5645023963515139
- 0.6675700545366731
- 0.7106324328008338
- 0.6319760643208963
- 0.6114787245939142
- 0.7374837646425462
- 0.6662072905119479
- 0.6677848929819692
- 0.751779276675506
- 0.759089429391716
- 0.7602209390862023
- 0.6492366899599431
- 0.6195040191500187
- 0.6499553625304811
- 0.6426200803991593
- 0.6501320764151193
- 0.64076653277881
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: None
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.89624220641997
- type: v_measures
value:
- 0.31945056306012165
- 0.31808019971577695
- 0.31586396504149594
- 0.30978907206904555
- 0.31749756951395836
- 0.3520278613309176
- 0.34013572005643994
- 0.3392435192470549
- 0.3341655962016141
- 0.3433701544055723
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: None
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 48.253810565773705
- type: mrr
value: 49.14455744418979
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: None
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.976959578668456
- type: cos_sim_spearman
value: 31.195930170179643
- type: dot_pearson
value: 31.023896821497786
- type: dot_spearman
value: 30.873340062924225
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: None
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.231
- type: map_at_10
value: 1.6709999999999998
- type: map_at_100
value: 10.578999999999999
- type: map_at_1000
value: 26.997
- type: map_at_20
value: 3.032
- type: map_at_3
value: 0.584
- type: map_at_5
value: 0.9249999999999999
- type: mrr_at_1
value: 82.0
- type: mrr_at_10
value: 89.833
- type: mrr_at_100
value: 89.833
- type: mrr_at_1000
value: 89.833
- type: mrr_at_20
value: 89.833
- type: mrr_at_3
value: 89.333
- type: mrr_at_5
value: 89.833
- type: ndcg_at_1
value: 72.0
- type: ndcg_at_10
value: 68.44200000000001
- type: ndcg_at_100
value: 56.06100000000001
- type: ndcg_at_1000
value: 53.315
- type: ndcg_at_20
value: 65.781
- type: ndcg_at_3
value: 69.69300000000001
- type: ndcg_at_5
value: 70.538
- type: precision_at_1
value: 82.0
- type: precision_at_10
value: 73.2
- type: precision_at_100
value: 58.34
- type: precision_at_1000
value: 23.854
- type: precision_at_20
value: 70.1
- type: precision_at_3
value: 76.667
- type: precision_at_5
value: 76.4
- type: recall_at_1
value: 0.231
- type: recall_at_10
value: 1.94
- type: recall_at_100
value: 14.26
- type: recall_at_1000
value: 51.013
- type: recall_at_20
value: 3.6519999999999997
- type: recall_at_3
value: 0.623
- type: recall_at_5
value: 1.022
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: None
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 1.3419999999999999
- type: map_at_10
value: 6.959999999999999
- type: map_at_100
value: 12.649
- type: map_at_1000
value: 14.332
- type: map_at_20
value: 9.48
- type: map_at_3
value: 3.447
- type: map_at_5
value: 4.811
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 33.273
- type: mrr_at_100
value: 34.611
- type: mrr_at_1000
value: 34.628
- type: mrr_at_20
value: 34.165
- type: mrr_at_3
value: 29.252
- type: mrr_at_5
value: 30.578
- type: ndcg_at_1
value: 16.326999999999998
- type: ndcg_at_10
value: 18.581
- type: ndcg_at_100
value: 31.512
- type: ndcg_at_1000
value: 43.93
- type: ndcg_at_20
value: 20.578
- type: ndcg_at_3
value: 18.179000000000002
- type: ndcg_at_5
value: 17.772
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 17.551
- type: precision_at_100
value: 7.102
- type: precision_at_1000
value: 1.533
- type: precision_at_20
value: 14.388000000000002
- type: precision_at_3
value: 20.408
- type: precision_at_5
value: 19.184
- type: recall_at_1
value: 1.3419999999999999
- type: recall_at_10
value: 13.081999999999999
- type: recall_at_100
value: 45.397
- type: recall_at_1000
value: 82.866
- type: recall_at_20
value: 21.034
- type: recall_at_3
value: 4.644
- type: recall_at_5
value: 7.449
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: None
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 67.5634765625
- type: ap
value: 12.539329872788752
- type: f1
value: 51.61250153500541
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: None
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 54.850028296547805
- type: f1
value: 55.18064459526432
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: None
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 47.6299204409476
- type: v_measures
value:
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- 0.4573171052753204
- 0.4831361996460077
- 0.49177843872885985
- 0.4695462700427479
- 0.4697818926471495
- 0.4844307048396859
- 0.4872612383566334
- 0.45587716734484074
- 0.48326143336804445
- 0.4806015938454703
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: None
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.78732788937235
- type: cos_sim_ap
value: 66.7369597819357
- type: cos_sim_f1
value: 61.900121802679664
- type: cos_sim_precision
value: 57.48868778280543
- type: cos_sim_recall
value: 67.04485488126649
- type: dot_accuracy
value: 83.77540680693807
- type: dot_ap
value: 66.7494206279536
- type: dot_f1
value: 61.906496660595025
- type: dot_precision
value: 57.34533183352081
- type: dot_recall
value: 67.25593667546174
- type: euclidean_accuracy
value: 83.78136734815521
- type: euclidean_ap
value: 66.72851072777382
- type: euclidean_f1
value: 61.89545676599902
- type: euclidean_precision
value: 57.617098681218735
- type: euclidean_recall
value: 66.86015831134564
- type: manhattan_accuracy
value: 83.69195922989807
- type: manhattan_ap
value: 66.67869948457852
- type: manhattan_f1
value: 61.948212083847096
- type: manhattan_precision
value: 58.14814814814815
- type: manhattan_recall
value: 66.2796833773087
- type: max_accuracy
value: 83.78732788937235
- type: max_ap
value: 66.7494206279536
- type: max_f1
value: 61.948212083847096
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: None
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.06935227228627
- type: cos_sim_ap
value: 86.01490350477971
- type: cos_sim_f1
value: 78.5821414200534
- type: cos_sim_precision
value: 74.85539061955538
- type: cos_sim_recall
value: 82.69941484447182
- type: dot_accuracy
value: 89.06741180579812
- type: dot_ap
value: 86.00939130135514
- type: dot_f1
value: 78.5863394982604
- type: dot_precision
value: 74.93888384438081
- type: dot_recall
value: 82.60702186633816
- type: euclidean_accuracy
value: 89.06547133930997
- type: euclidean_ap
value: 86.01611265260871
- type: euclidean_f1
value: 78.5754075834664
- type: euclidean_precision
value: 75.89497094483106
- type: euclidean_recall
value: 81.45210963966738
- type: manhattan_accuracy
value: 89.03636434198782
- type: manhattan_ap
value: 85.98483745706906
- type: manhattan_f1
value: 78.52461404019809
- type: manhattan_precision
value: 74.48880906327715
- type: manhattan_recall
value: 83.02279026793964
- type: max_accuracy
value: 89.06935227228627
- type: max_ap
value: 86.01611265260871
- type: max_f1
value: 78.5863394982604
---
# ModernBERT-Embed-Unsupervised
`modernbert-embed-unsupervised` is the unsupervised checkpoint trained with the [contrastors](https://github.com/nomic-ai/contrastors) library
for 1 epoch over the 235M weakly-supervised contrastive pairs curated in [Nomic Embed](https://arxiv.org/abs/2402.01613).
We suggest using [modernbert-embed](https://huggingface.co/nomic-ai/modernbert-embed) for embedding tasks.
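As a quick illustration of how the embeddings can be produced, the minimal sketch below loads the checkpoint with the plain `transformers` API and mean-pools the last hidden states. This is not an official snippet from this card: the repository id `nomic-ai/modernbert-embed-unsupervised` and the mean-pooling step are assumptions inferred from the model name and the Nomic Embed recipe.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed repository id, inferred from the card name; adjust if the actual id differs.
MODEL_ID = "nomic-ai/modernbert-embed-unsupervised"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

sentences = ["The cat sits outside", "A man is playing guitar"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)

# Mean pooling over non-padding tokens (assumed to match the training-time pooling).
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)

print(torch.mm(embeddings, embeddings.T))  # pairwise cosine similarities
```

If the repository also ships a `sentence-transformers` config, `SentenceTransformer(MODEL_ID).encode(sentences)` would be the shorter route.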
## Performance
The `modernbert-embed-unsupervised` model performs similarly to the `nomic-embed-text-v1_unsup` model across the MTEB categories:
| Model | Average (56) | Classification (12) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) |
|-------|-------------:|--------------------:|----------------:|------------------------:|--------------:|---------------:|---------:|------------------:|
| nomic-embed-text-v1_unsup | 59.9 | 71.2 | 42.5 | 83.7 | 55.0 | 48.0 | 80.8 | 30.7 |
| modernbert-embed-unsupervised | 60.03 | 72.11 | 44.34 | 82.78 | 55.0 | 47.05 | 80.33 | 31.2 |
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp-unsup-simcse | McGill-NLP | sentence-similarity | [
"peft",
"safetensors",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2404.05961",
"license:mit",
"model-index",
"region:us"
] | 2024-04-04T05:31:48 | 2024-04-11T19:56:16 | 57 | 0 | ---
language:
- en
library_name: peft
license: mit
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
model-index:
- name: LLM2Vec-Llama-2-unsupervised
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.91044776119402
- type: ap
value: 41.73039886859448
- type: f1
value: 71.49663106134554
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 79.0549
- type: ap
value: 74.50419535911905
- type: f1
value: 78.87370110570745
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.07999999999999
- type: f1
value: 39.74598250149754
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.973
- type: map_at_10
value: 38.217
- type: map_at_100
value: 39.247
- type: map_at_1000
value: 39.263
- type: map_at_3
value: 33.108
- type: map_at_5
value: 35.942
- type: mrr_at_1
value: 23.755000000000003
- type: mrr_at_10
value: 38.495000000000005
- type: mrr_at_100
value: 39.525
- type: mrr_at_1000
value: 39.541
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 36.221
- type: ndcg_at_1
value: 22.973
- type: ndcg_at_10
value: 47.093
- type: ndcg_at_100
value: 51.745
- type: ndcg_at_1000
value: 52.126
- type: ndcg_at_3
value: 36.473
- type: ndcg_at_5
value: 41.591
- type: precision_at_1
value: 22.973
- type: precision_at_10
value: 7.568
- type: precision_at_100
value: 0.966
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 15.409999999999998
- type: precision_at_5
value: 11.735
- type: recall_at_1
value: 22.973
- type: recall_at_10
value: 75.676
- type: recall_at_100
value: 96.586
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 46.23
- type: recall_at_5
value: 58.677
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.808566636089296
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.53253525071289
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 55.564312661366564
- type: mrr
value: 69.24526227850326
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 82.40790181633206
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.64935064935064
- type: f1
value: 84.59305945931867
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.11916694447953
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.248648913887024
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: cqadupstack/android
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.483
- type: map_at_10
value: 34.549
- type: map_at_100
value: 36.106
- type: map_at_1000
value: 36.253
- type: map_at_3
value: 31.313999999999997
- type: map_at_5
value: 32.987
- type: mrr_at_1
value: 32.046
- type: mrr_at_10
value: 41.217999999999996
- type: mrr_at_100
value: 42.068
- type: mrr_at_1000
value: 42.126999999999995
- type: mrr_at_3
value: 38.746
- type: mrr_at_5
value: 40.083
- type: ndcg_at_1
value: 32.046
- type: ndcg_at_10
value: 40.927
- type: ndcg_at_100
value: 46.5
- type: ndcg_at_1000
value: 49.043
- type: ndcg_at_3
value: 36.448
- type: ndcg_at_5
value: 38.199
- type: precision_at_1
value: 32.046
- type: precision_at_10
value: 8.484
- type: precision_at_100
value: 1.443
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 18.407
- type: precision_at_5
value: 13.419
- type: recall_at_1
value: 24.483
- type: recall_at_10
value: 51.946999999999996
- type: recall_at_100
value: 75.842
- type: recall_at_1000
value: 93.368
- type: recall_at_3
value: 38.023
- type: recall_at_5
value: 43.356
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: cqadupstack/english
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.090999999999998
- type: map_at_10
value: 36.106
- type: map_at_100
value: 37.188
- type: map_at_1000
value: 37.32
- type: map_at_3
value: 33.293
- type: map_at_5
value: 34.755
- type: mrr_at_1
value: 35.86
- type: mrr_at_10
value: 42.979
- type: mrr_at_100
value: 43.619
- type: mrr_at_1000
value: 43.669999999999995
- type: mrr_at_3
value: 40.849000000000004
- type: mrr_at_5
value: 41.964
- type: ndcg_at_1
value: 35.86
- type: ndcg_at_10
value: 41.676
- type: ndcg_at_100
value: 45.678000000000004
- type: ndcg_at_1000
value: 47.99
- type: ndcg_at_3
value: 37.862
- type: ndcg_at_5
value: 39.342
- type: precision_at_1
value: 35.86
- type: precision_at_10
value: 8.178
- type: precision_at_100
value: 1.308
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 18.662
- type: precision_at_5
value: 13.172
- type: recall_at_1
value: 27.090999999999998
- type: recall_at_10
value: 50.407999999999994
- type: recall_at_100
value: 68.27499999999999
- type: recall_at_1000
value: 83.155
- type: recall_at_3
value: 38.259
- type: recall_at_5
value: 43.096000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: cqadupstack/gaming
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.01
- type: map_at_10
value: 42.915
- type: map_at_100
value: 44.096000000000004
- type: map_at_1000
value: 44.175
- type: map_at_3
value: 40.283
- type: map_at_5
value: 41.744
- type: mrr_at_1
value: 37.68
- type: mrr_at_10
value: 46.929
- type: mrr_at_100
value: 47.75
- type: mrr_at_1000
value: 47.795
- type: mrr_at_3
value: 44.713
- type: mrr_at_5
value: 45.885
- type: ndcg_at_1
value: 37.68
- type: ndcg_at_10
value: 48.453
- type: ndcg_at_100
value: 53.494
- type: ndcg_at_1000
value: 55.169000000000004
- type: ndcg_at_3
value: 43.834
- type: ndcg_at_5
value: 45.926
- type: precision_at_1
value: 37.68
- type: precision_at_10
value: 7.906000000000001
- type: precision_at_100
value: 1.135
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 20.041999999999998
- type: precision_at_5
value: 13.58
- type: recall_at_1
value: 32.01
- type: recall_at_10
value: 61.049
- type: recall_at_100
value: 83.182
- type: recall_at_1000
value: 95.279
- type: recall_at_3
value: 48.407
- type: recall_at_5
value: 53.748
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: cqadupstack/gis
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.511
- type: map_at_10
value: 20.305999999999997
- type: map_at_100
value: 21.307000000000002
- type: map_at_1000
value: 21.419
- type: map_at_3
value: 18.376
- type: map_at_5
value: 19.421
- type: mrr_at_1
value: 16.045
- type: mrr_at_10
value: 22.002
- type: mrr_at_100
value: 22.986
- type: mrr_at_1000
value: 23.071
- type: mrr_at_3
value: 20.264
- type: mrr_at_5
value: 21.173000000000002
- type: ndcg_at_1
value: 16.045
- type: ndcg_at_10
value: 23.953
- type: ndcg_at_100
value: 29.201
- type: ndcg_at_1000
value: 32.366
- type: ndcg_at_3
value: 20.136000000000003
- type: ndcg_at_5
value: 21.859
- type: precision_at_1
value: 16.045
- type: precision_at_10
value: 3.8760000000000003
- type: precision_at_100
value: 0.696
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 8.776
- type: precision_at_5
value: 6.282
- type: recall_at_1
value: 14.511
- type: recall_at_10
value: 33.707
- type: recall_at_100
value: 58.182
- type: recall_at_1000
value: 82.845
- type: recall_at_3
value: 23.206
- type: recall_at_5
value: 27.311999999999998
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: cqadupstack/mathematica
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.762
- type: map_at_10
value: 15.495000000000001
- type: map_at_100
value: 16.637
- type: map_at_1000
value: 16.786
- type: map_at_3
value: 13.62
- type: map_at_5
value: 14.655999999999999
- type: mrr_at_1
value: 12.934999999999999
- type: mrr_at_10
value: 18.985
- type: mrr_at_100
value: 20.079
- type: mrr_at_1000
value: 20.177999999999997
- type: mrr_at_3
value: 16.977999999999998
- type: mrr_at_5
value: 18.197
- type: ndcg_at_1
value: 12.934999999999999
- type: ndcg_at_10
value: 19.444
- type: ndcg_at_100
value: 25.108999999999998
- type: ndcg_at_1000
value: 28.804999999999996
- type: ndcg_at_3
value: 15.93
- type: ndcg_at_5
value: 17.57
- type: precision_at_1
value: 12.934999999999999
- type: precision_at_10
value: 3.856
- type: precision_at_100
value: 0.765
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 8.043
- type: precision_at_5
value: 6.095
- type: recall_at_1
value: 9.762
- type: recall_at_10
value: 28.216
- type: recall_at_100
value: 53.28000000000001
- type: recall_at_1000
value: 79.64099999999999
- type: recall_at_3
value: 18.335
- type: recall_at_5
value: 22.435
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: cqadupstack/physics
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.770999999999997
- type: map_at_10
value: 30.837999999999997
- type: map_at_100
value: 32.327
- type: map_at_1000
value: 32.464999999999996
- type: map_at_3
value: 27.891
- type: map_at_5
value: 29.433
- type: mrr_at_1
value: 27.622999999999998
- type: mrr_at_10
value: 36.293
- type: mrr_at_100
value: 37.221
- type: mrr_at_1000
value: 37.288
- type: mrr_at_3
value: 33.574
- type: mrr_at_5
value: 35.085
- type: ndcg_at_1
value: 27.622999999999998
- type: ndcg_at_10
value: 36.784
- type: ndcg_at_100
value: 43.033
- type: ndcg_at_1000
value: 45.616
- type: ndcg_at_3
value: 31.694
- type: ndcg_at_5
value: 33.909
- type: precision_at_1
value: 27.622999999999998
- type: precision_at_10
value: 7.141
- type: precision_at_100
value: 1.24
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 15.623999999999999
- type: precision_at_5
value: 11.338
- type: recall_at_1
value: 21.770999999999997
- type: recall_at_10
value: 49.318
- type: recall_at_100
value: 75.779
- type: recall_at_1000
value: 92.729
- type: recall_at_3
value: 34.685
- type: recall_at_5
value: 40.546
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: cqadupstack/programmers
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.156
- type: map_at_10
value: 27.732
- type: map_at_100
value: 29.002
- type: map_at_1000
value: 29.149
- type: map_at_3
value: 25.044
- type: map_at_5
value: 26.586
- type: mrr_at_1
value: 25.457
- type: mrr_at_10
value: 32.799
- type: mrr_at_100
value: 33.756
- type: mrr_at_1000
value: 33.833
- type: mrr_at_3
value: 30.497999999999998
- type: mrr_at_5
value: 31.857000000000003
- type: ndcg_at_1
value: 25.457
- type: ndcg_at_10
value: 32.59
- type: ndcg_at_100
value: 38.336
- type: ndcg_at_1000
value: 41.475
- type: ndcg_at_3
value: 28.166000000000004
- type: ndcg_at_5
value: 30.281000000000002
- type: precision_at_1
value: 25.457
- type: precision_at_10
value: 6.062
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 13.661000000000001
- type: precision_at_5
value: 9.886000000000001
- type: recall_at_1
value: 20.156
- type: recall_at_10
value: 42.191
- type: recall_at_100
value: 66.953
- type: recall_at_1000
value: 88.91
- type: recall_at_3
value: 29.86
- type: recall_at_5
value: 35.553000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.901250000000005
- type: map_at_10
value: 26.13458333333333
- type: map_at_100
value: 27.282833333333333
- type: map_at_1000
value: 27.416749999999997
- type: map_at_3
value: 23.753500000000003
- type: map_at_5
value: 25.076833333333337
- type: mrr_at_1
value: 23.560500000000005
- type: mrr_at_10
value: 30.31466666666666
- type: mrr_at_100
value: 31.217249999999996
- type: mrr_at_1000
value: 31.29225
- type: mrr_at_3
value: 28.16208333333333
- type: mrr_at_5
value: 29.39025
- type: ndcg_at_1
value: 23.560500000000005
- type: ndcg_at_10
value: 30.780500000000004
- type: ndcg_at_100
value: 36.003083333333336
- type: ndcg_at_1000
value: 38.918166666666664
- type: ndcg_at_3
value: 26.735249999999994
- type: ndcg_at_5
value: 28.60558333333333
- type: precision_at_1
value: 23.560500000000005
- type: precision_at_10
value: 5.700583333333334
- type: precision_at_100
value: 1.0015
- type: precision_at_1000
value: 0.14475
- type: precision_at_3
value: 12.736749999999999
- type: precision_at_5
value: 9.230666666666666
- type: recall_at_1
value: 18.901250000000005
- type: recall_at_10
value: 40.4075
- type: recall_at_100
value: 63.96683333333333
- type: recall_at_1000
value: 84.86883333333333
- type: recall_at_3
value: 28.79183333333334
- type: recall_at_5
value: 33.7335
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: cqadupstack/stats
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.977
- type: map_at_10
value: 21.612000000000002
- type: map_at_100
value: 22.519
- type: map_at_1000
value: 22.633
- type: map_at_3
value: 19.766000000000002
- type: map_at_5
value: 20.855999999999998
- type: mrr_at_1
value: 19.017999999999997
- type: mrr_at_10
value: 24.310000000000002
- type: mrr_at_100
value: 25.206
- type: mrr_at_1000
value: 25.295
- type: mrr_at_3
value: 22.52
- type: mrr_at_5
value: 23.586
- type: ndcg_at_1
value: 19.017999999999997
- type: ndcg_at_10
value: 25.024
- type: ndcg_at_100
value: 29.942999999999998
- type: ndcg_at_1000
value: 33.059
- type: ndcg_at_3
value: 21.654
- type: ndcg_at_5
value: 23.347
- type: precision_at_1
value: 19.017999999999997
- type: precision_at_10
value: 4.1259999999999994
- type: precision_at_100
value: 0.719
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 9.714
- type: precision_at_5
value: 7.025
- type: recall_at_1
value: 15.977
- type: recall_at_10
value: 33.012
- type: recall_at_100
value: 56.201
- type: recall_at_1000
value: 79.837
- type: recall_at_3
value: 23.551
- type: recall_at_5
value: 27.733
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: cqadupstack/tex
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.26
- type: map_at_10
value: 14.248
- type: map_at_100
value: 15.095
- type: map_at_1000
value: 15.22
- type: map_at_3
value: 12.7
- type: map_at_5
value: 13.492999999999999
- type: mrr_at_1
value: 13.73
- type: mrr_at_10
value: 17.964
- type: mrr_at_100
value: 18.748
- type: mrr_at_1000
value: 18.842
- type: mrr_at_3
value: 16.34
- type: mrr_at_5
value: 17.205000000000002
- type: ndcg_at_1
value: 13.73
- type: ndcg_at_10
value: 17.429
- type: ndcg_at_100
value: 21.856
- type: ndcg_at_1000
value: 25.251
- type: ndcg_at_3
value: 14.667
- type: ndcg_at_5
value: 15.790000000000001
- type: precision_at_1
value: 13.73
- type: precision_at_10
value: 3.4099999999999997
- type: precision_at_100
value: 0.6839999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 7.202999999999999
- type: precision_at_5
value: 5.299
- type: recall_at_1
value: 10.26
- type: recall_at_10
value: 23.54
- type: recall_at_100
value: 44.085
- type: recall_at_1000
value: 69.233
- type: recall_at_3
value: 15.387999999999998
- type: recall_at_5
value: 18.467
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: cqadupstack/unix
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.695
- type: map_at_10
value: 25.752000000000002
- type: map_at_100
value: 26.810000000000002
- type: map_at_1000
value: 26.931
- type: map_at_3
value: 23.205000000000002
- type: map_at_5
value: 24.792
- type: mrr_at_1
value: 23.134
- type: mrr_at_10
value: 30.176
- type: mrr_at_100
value: 31.087999999999997
- type: mrr_at_1000
value: 31.162
- type: mrr_at_3
value: 27.766999999999996
- type: mrr_at_5
value: 29.321
- type: ndcg_at_1
value: 23.134
- type: ndcg_at_10
value: 30.427
- type: ndcg_at_100
value: 35.839999999999996
- type: ndcg_at_1000
value: 38.675
- type: ndcg_at_3
value: 25.959
- type: ndcg_at_5
value: 28.364
- type: precision_at_1
value: 23.134
- type: precision_at_10
value: 5.466
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 12.127
- type: precision_at_5
value: 8.993
- type: recall_at_1
value: 18.695
- type: recall_at_10
value: 40.345
- type: recall_at_100
value: 65.009
- type: recall_at_1000
value: 85.368
- type: recall_at_3
value: 28.016999999999996
- type: recall_at_5
value: 34.211999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: cqadupstack/webmasters
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.955000000000002
- type: map_at_10
value: 26.924999999999997
- type: map_at_100
value: 28.260999999999996
- type: map_at_1000
value: 28.499999999999996
- type: map_at_3
value: 24.282
- type: map_at_5
value: 25.89
- type: mrr_at_1
value: 25.889
- type: mrr_at_10
value: 31.596999999999998
- type: mrr_at_100
value: 32.631
- type: mrr_at_1000
value: 32.702999999999996
- type: mrr_at_3
value: 29.182999999999996
- type: mrr_at_5
value: 30.705
- type: ndcg_at_1
value: 25.889
- type: ndcg_at_10
value: 32.094
- type: ndcg_at_100
value: 37.39
- type: ndcg_at_1000
value: 40.923
- type: ndcg_at_3
value: 27.815
- type: ndcg_at_5
value: 30.162
- type: precision_at_1
value: 25.889
- type: precision_at_10
value: 6.482
- type: precision_at_100
value: 1.374
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 13.373
- type: precision_at_5
value: 10.356
- type: recall_at_1
value: 19.955000000000002
- type: recall_at_10
value: 41.157
- type: recall_at_100
value: 66.518
- type: recall_at_1000
value: 90.814
- type: recall_at_3
value: 28.319
- type: recall_at_5
value: 34.394999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: cqadupstack/wordpress
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.144
- type: map_at_10
value: 17.137
- type: map_at_100
value: 18.046
- type: map_at_1000
value: 18.15
- type: map_at_3
value: 15.268
- type: map_at_5
value: 16.309
- type: mrr_at_1
value: 13.309000000000001
- type: mrr_at_10
value: 18.523999999999997
- type: mrr_at_100
value: 19.455
- type: mrr_at_1000
value: 19.543
- type: mrr_at_3
value: 16.512999999999998
- type: mrr_at_5
value: 17.622
- type: ndcg_at_1
value: 13.309000000000001
- type: ndcg_at_10
value: 20.565
- type: ndcg_at_100
value: 25.657000000000004
- type: ndcg_at_1000
value: 28.646
- type: ndcg_at_3
value: 16.658
- type: ndcg_at_5
value: 18.518
- type: precision_at_1
value: 13.309000000000001
- type: precision_at_10
value: 3.42
- type: precision_at_100
value: 0.645
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 7.2090000000000005
- type: precision_at_5
value: 5.323
- type: recall_at_1
value: 12.144
- type: recall_at_10
value: 30.0
- type: recall_at_100
value: 54.296
- type: recall_at_1000
value: 77.247
- type: recall_at_3
value: 19.451999999999998
- type: recall_at_5
value: 23.949
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.531000000000001
- type: map_at_10
value: 13.875000000000002
- type: map_at_100
value: 15.714
- type: map_at_1000
value: 15.934999999999999
- type: map_at_3
value: 11.204
- type: map_at_5
value: 12.373000000000001
- type: mrr_at_1
value: 16.547
- type: mrr_at_10
value: 26.889000000000003
- type: mrr_at_100
value: 28.194999999999997
- type: mrr_at_1000
value: 28.242
- type: mrr_at_3
value: 23.279
- type: mrr_at_5
value: 25.289
- type: ndcg_at_1
value: 16.547
- type: ndcg_at_10
value: 20.666999999999998
- type: ndcg_at_100
value: 28.896
- type: ndcg_at_1000
value: 32.843
- type: ndcg_at_3
value: 15.598999999999998
- type: ndcg_at_5
value: 17.238
- type: precision_at_1
value: 16.547
- type: precision_at_10
value: 6.958
- type: precision_at_100
value: 1.5810000000000002
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 11.726
- type: precision_at_5
value: 9.472
- type: recall_at_1
value: 7.531000000000001
- type: recall_at_10
value: 26.726
- type: recall_at_100
value: 55.940999999999995
- type: recall_at_1000
value: 78.119
- type: recall_at_3
value: 14.815000000000001
- type: recall_at_5
value: 18.955
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.741
- type: map_at_10
value: 11.743
- type: map_at_100
value: 16.723
- type: map_at_1000
value: 17.813000000000002
- type: map_at_3
value: 8.017000000000001
- type: map_at_5
value: 9.655
- type: mrr_at_1
value: 40.25
- type: mrr_at_10
value: 52.244
- type: mrr_at_100
value: 52.933
- type: mrr_at_1000
value: 52.957
- type: mrr_at_3
value: 49.791999999999994
- type: mrr_at_5
value: 51.629000000000005
- type: ndcg_at_1
value: 30.0
- type: ndcg_at_10
value: 25.813000000000002
- type: ndcg_at_100
value: 31.075999999999997
- type: ndcg_at_1000
value: 38.242
- type: ndcg_at_3
value: 27.394000000000002
- type: ndcg_at_5
value: 26.395999999999997
- type: precision_at_1
value: 40.25
- type: precision_at_10
value: 22.0
- type: precision_at_100
value: 7.077999999999999
- type: precision_at_1000
value: 1.492
- type: precision_at_3
value: 32.833
- type: precision_at_5
value: 28.15
- type: recall_at_1
value: 4.741
- type: recall_at_10
value: 18.11
- type: recall_at_100
value: 40.617999999999995
- type: recall_at_1000
value: 63.92
- type: recall_at_3
value: 9.724
- type: recall_at_5
value: 13.333
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.575
- type: f1
value: 42.15253766150754
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.676000000000002
- type: map_at_10
value: 36.666
- type: map_at_100
value: 37.613
- type: map_at_1000
value: 37.663000000000004
- type: map_at_3
value: 33.269999999999996
- type: map_at_5
value: 35.21
- type: mrr_at_1
value: 26.733
- type: mrr_at_10
value: 39.007999999999996
- type: mrr_at_100
value: 39.904
- type: mrr_at_1000
value: 39.944
- type: mrr_at_3
value: 35.591
- type: mrr_at_5
value: 37.544
- type: ndcg_at_1
value: 26.733
- type: ndcg_at_10
value: 43.477
- type: ndcg_at_100
value: 47.906
- type: ndcg_at_1000
value: 49.144
- type: ndcg_at_3
value: 36.606
- type: ndcg_at_5
value: 40.009
- type: precision_at_1
value: 26.733
- type: precision_at_10
value: 6.842
- type: precision_at_100
value: 0.9209999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.906999999999998
- type: precision_at_5
value: 11.356
- type: recall_at_1
value: 24.676000000000002
- type: recall_at_10
value: 62.556999999999995
- type: recall_at_100
value: 82.43
- type: recall_at_1000
value: 91.738
- type: recall_at_3
value: 43.885000000000005
- type: recall_at_5
value: 52.054
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.101999999999999
- type: map_at_10
value: 18.490000000000002
- type: map_at_100
value: 20.404
- type: map_at_1000
value: 20.631
- type: map_at_3
value: 15.6
- type: map_at_5
value: 17.169
- type: mrr_at_1
value: 22.531000000000002
- type: mrr_at_10
value: 30.429000000000002
- type: mrr_at_100
value: 31.537
- type: mrr_at_1000
value: 31.606
- type: mrr_at_3
value: 27.546
- type: mrr_at_5
value: 29.159000000000002
- type: ndcg_at_1
value: 22.531000000000002
- type: ndcg_at_10
value: 24.624
- type: ndcg_at_100
value: 32.836
- type: ndcg_at_1000
value: 36.992000000000004
- type: ndcg_at_3
value: 20.806
- type: ndcg_at_5
value: 22.292
- type: precision_at_1
value: 22.531000000000002
- type: precision_at_10
value: 7.176
- type: precision_at_100
value: 1.546
- type: precision_at_1000
value: 0.22799999999999998
- type: precision_at_3
value: 14.198
- type: precision_at_5
value: 11.019
- type: recall_at_1
value: 11.101999999999999
- type: recall_at_10
value: 30.86
- type: recall_at_100
value: 62.564
- type: recall_at_1000
value: 87.627
- type: recall_at_3
value: 18.721
- type: recall_at_5
value: 23.830000000000002
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.474999999999998
- type: map_at_10
value: 39.342
- type: map_at_100
value: 40.458
- type: map_at_1000
value: 40.553
- type: map_at_3
value: 36.272999999999996
- type: map_at_5
value: 38.091
- type: mrr_at_1
value: 54.949000000000005
- type: mrr_at_10
value: 63.28
- type: mrr_at_100
value: 63.796
- type: mrr_at_1000
value: 63.821000000000005
- type: mrr_at_3
value: 61.41799999999999
- type: mrr_at_5
value: 62.522999999999996
- type: ndcg_at_1
value: 54.949000000000005
- type: ndcg_at_10
value: 48.461
- type: ndcg_at_100
value: 52.903999999999996
- type: ndcg_at_1000
value: 54.906
- type: ndcg_at_3
value: 43.428
- type: ndcg_at_5
value: 46.045
- type: precision_at_1
value: 54.949000000000005
- type: precision_at_10
value: 10.446
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.166
- type: precision_at_3
value: 27.310000000000002
- type: precision_at_5
value: 18.458
- type: recall_at_1
value: 27.474999999999998
- type: recall_at_10
value: 52.227999999999994
- type: recall_at_100
value: 69.838
- type: recall_at_1000
value: 83.153
- type: recall_at_3
value: 40.966
- type: recall_at_5
value: 46.144
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 75.6784
- type: ap
value: 70.03950630113135
- type: f1
value: 75.38669491280882
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 8.182
- type: map_at_10
value: 14.597999999999999
- type: map_at_100
value: 15.795
- type: map_at_1000
value: 15.901000000000002
- type: map_at_3
value: 12.001000000000001
- type: map_at_5
value: 13.377
- type: mrr_at_1
value: 8.395
- type: mrr_at_10
value: 14.883
- type: mrr_at_100
value: 16.073999999999998
- type: mrr_at_1000
value: 16.174
- type: mrr_at_3
value: 12.267999999999999
- type: mrr_at_5
value: 13.658000000000001
- type: ndcg_at_1
value: 8.395
- type: ndcg_at_10
value: 18.81
- type: ndcg_at_100
value: 25.144
- type: ndcg_at_1000
value: 28.094
- type: ndcg_at_3
value: 13.366
- type: ndcg_at_5
value: 15.856
- type: precision_at_1
value: 8.395
- type: precision_at_10
value: 3.328
- type: precision_at_100
value: 0.657
- type: precision_at_1000
value: 0.091
- type: precision_at_3
value: 5.84
- type: precision_at_5
value: 4.765
- type: recall_at_1
value: 8.182
- type: recall_at_10
value: 32.151
- type: recall_at_100
value: 62.633
- type: recall_at_1000
value: 85.88
- type: recall_at_3
value: 17.069000000000003
- type: recall_at_5
value: 23.092
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.3296853625171
- type: f1
value: 94.02246426051437
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.54172366621067
- type: f1
value: 60.47715992221304
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.83994620040349
- type: f1
value: 70.84392062730345
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.17283120376597
- type: f1
value: 78.83856078561683
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.939561146943344
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.0435406238161
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.860539801824743
- type: mrr
value: 31.993223906232455
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.6759999999999997
- type: map_at_10
value: 8.365
- type: map_at_100
value: 10.949
- type: map_at_1000
value: 12.248000000000001
- type: map_at_3
value: 5.836
- type: map_at_5
value: 7.094
- type: mrr_at_1
value: 32.507999999999996
- type: mrr_at_10
value: 43.336999999999996
- type: mrr_at_100
value: 44.092
- type: mrr_at_1000
value: 44.125
- type: mrr_at_3
value: 40.402
- type: mrr_at_5
value: 42.214
- type: ndcg_at_1
value: 30.186
- type: ndcg_at_10
value: 26.806
- type: ndcg_at_100
value: 25.446999999999996
- type: ndcg_at_1000
value: 34.33
- type: ndcg_at_3
value: 30.159999999999997
- type: ndcg_at_5
value: 28.671999999999997
- type: precision_at_1
value: 31.579
- type: precision_at_10
value: 20.96
- type: precision_at_100
value: 6.885
- type: precision_at_1000
value: 1.9560000000000002
- type: precision_at_3
value: 29.825000000000003
- type: precision_at_5
value: 25.944
- type: recall_at_1
value: 2.6759999999999997
- type: recall_at_10
value: 13.715
- type: recall_at_100
value: 29.246
- type: recall_at_1000
value: 59.878
- type: recall_at_3
value: 7.6850000000000005
- type: recall_at_5
value: 10.559000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.999
- type: map_at_10
value: 26.229999999999997
- type: map_at_100
value: 27.77
- type: map_at_1000
value: 27.832
- type: map_at_3
value: 22.127
- type: map_at_5
value: 24.395
- type: mrr_at_1
value: 17.265
- type: mrr_at_10
value: 28.515
- type: mrr_at_100
value: 29.793999999999997
- type: mrr_at_1000
value: 29.837999999999997
- type: mrr_at_3
value: 24.609
- type: mrr_at_5
value: 26.790000000000003
- type: ndcg_at_1
value: 17.236
- type: ndcg_at_10
value: 33.207
- type: ndcg_at_100
value: 40.211000000000006
- type: ndcg_at_1000
value: 41.669
- type: ndcg_at_3
value: 25.013
- type: ndcg_at_5
value: 28.965999999999998
- type: precision_at_1
value: 17.236
- type: precision_at_10
value: 6.260000000000001
- type: precision_at_100
value: 1.015
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 12.032
- type: precision_at_5
value: 9.45
- type: recall_at_1
value: 14.999
- type: recall_at_10
value: 52.581
- type: recall_at_100
value: 83.918
- type: recall_at_1000
value: 94.735
- type: recall_at_3
value: 30.946
- type: recall_at_5
value: 40.136
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.085
- type: map_at_10
value: 81.952
- type: map_at_100
value: 82.636
- type: map_at_1000
value: 82.65599999999999
- type: map_at_3
value: 78.83200000000001
- type: map_at_5
value: 80.793
- type: mrr_at_1
value: 78.45
- type: mrr_at_10
value: 85.35199999999999
- type: mrr_at_100
value: 85.483
- type: mrr_at_1000
value: 85.485
- type: mrr_at_3
value: 84.195
- type: mrr_at_5
value: 84.985
- type: ndcg_at_1
value: 78.46
- type: ndcg_at_10
value: 86.151
- type: ndcg_at_100
value: 87.589
- type: ndcg_at_1000
value: 87.737
- type: ndcg_at_3
value: 82.839
- type: ndcg_at_5
value: 84.67
- type: precision_at_1
value: 78.46
- type: precision_at_10
value: 13.114999999999998
- type: precision_at_100
value: 1.5190000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.167
- type: precision_at_5
value: 23.921999999999997
- type: recall_at_1
value: 68.085
- type: recall_at_10
value: 94.28699999999999
- type: recall_at_100
value: 99.235
- type: recall_at_1000
value: 99.954
- type: recall_at_3
value: 84.941
- type: recall_at_5
value: 89.991
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.84102304870842
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.096590952185046
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.283
- type: map_at_10
value: 5.554
- type: map_at_100
value: 6.98
- type: map_at_1000
value: 7.324999999999999
- type: map_at_3
value: 3.9890000000000003
- type: map_at_5
value: 4.766
- type: mrr_at_1
value: 11.200000000000001
- type: mrr_at_10
value: 17.746000000000002
- type: mrr_at_100
value: 18.971
- type: mrr_at_1000
value: 19.1
- type: mrr_at_3
value: 15.15
- type: mrr_at_5
value: 16.619999999999997
- type: ndcg_at_1
value: 11.200000000000001
- type: ndcg_at_10
value: 10.001
- type: ndcg_at_100
value: 16.933
- type: ndcg_at_1000
value: 23.835
- type: ndcg_at_3
value: 9.005
- type: ndcg_at_5
value: 8.076
- type: precision_at_1
value: 11.200000000000001
- type: precision_at_10
value: 5.3
- type: precision_at_100
value: 1.5730000000000002
- type: precision_at_1000
value: 0.32299999999999995
- type: precision_at_3
value: 8.3
- type: precision_at_5
value: 7.12
- type: recall_at_1
value: 2.283
- type: recall_at_10
value: 10.775
- type: recall_at_100
value: 31.913000000000004
- type: recall_at_1000
value: 65.595
- type: recall_at_3
value: 5.0729999999999995
- type: recall_at_5
value: 7.228
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 71.76588896280093
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 65.3943089429597
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 79.26435573752327
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 72.98102120833857
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 82.72040157931015
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 81.020987615843
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 86.69902762920725
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 63.474026946359615
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 78.32422438643496
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 77.61818188370545
- type: mrr
value: 93.57944887356652
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 48.417
- type: map_at_10
value: 59.217
- type: map_at_100
value: 59.866
- type: map_at_1000
value: 59.91
- type: map_at_3
value: 56.302
- type: map_at_5
value: 58.252
- type: mrr_at_1
value: 51.0
- type: mrr_at_10
value: 60.368
- type: mrr_at_100
value: 60.901
- type: mrr_at_1000
value: 60.936
- type: mrr_at_3
value: 57.778
- type: mrr_at_5
value: 59.577999999999996
- type: ndcg_at_1
value: 51.0
- type: ndcg_at_10
value: 64.479
- type: ndcg_at_100
value: 67.37100000000001
- type: ndcg_at_1000
value: 68.367
- type: ndcg_at_3
value: 59.117
- type: ndcg_at_5
value: 62.283
- type: precision_at_1
value: 51.0
- type: precision_at_10
value: 8.833
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 23.778
- type: precision_at_5
value: 16.067
- type: recall_at_1
value: 48.417
- type: recall_at_10
value: 79.567
- type: recall_at_100
value: 92.422
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 65.011
- type: recall_at_5
value: 72.983
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.63861386138613
- type: cos_sim_ap
value: 87.57401607596607
- type: cos_sim_f1
value: 81.18006103763987
- type: cos_sim_precision
value: 82.6086956521739
- type: cos_sim_recall
value: 79.80000000000001
- type: dot_accuracy
value: 99.36435643564356
- type: dot_ap
value: 67.10054414762459
- type: dot_f1
value: 62.686567164179095
- type: dot_precision
value: 70.08652657601978
- type: dot_recall
value: 56.699999999999996
- type: euclidean_accuracy
value: 99.6108910891089
- type: euclidean_ap
value: 85.27455886915234
- type: euclidean_f1
value: 79.41330539549503
- type: euclidean_precision
value: 83.3883388338834
- type: euclidean_recall
value: 75.8
- type: manhattan_accuracy
value: 99.62574257425743
- type: manhattan_ap
value: 86.03781248244218
- type: manhattan_f1
value: 80.23012552301255
- type: manhattan_precision
value: 84.10087719298247
- type: manhattan_recall
value: 76.7
- type: max_accuracy
value: 99.63861386138613
- type: max_ap
value: 87.57401607596607
- type: max_f1
value: 81.18006103763987
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.11651958999349
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.60581294647579
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 47.773753263238696
- type: mrr
value: 48.39623917748917
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.564097570977395
- type: cos_sim_spearman
value: 31.380186846178056
- type: dot_pearson
value: 18.77679329172303
- type: dot_spearman
value: 20.468892673671043
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.191
- type: map_at_10
value: 1.307
- type: map_at_100
value: 6.458
- type: map_at_1000
value: 16.785
- type: map_at_3
value: 0.47600000000000003
- type: map_at_5
value: 0.751
- type: mrr_at_1
value: 72.0
- type: mrr_at_10
value: 81.175
- type: mrr_at_100
value: 81.229
- type: mrr_at_1000
value: 81.229
- type: mrr_at_3
value: 79.667
- type: mrr_at_5
value: 80.667
- type: ndcg_at_1
value: 68.0
- type: ndcg_at_10
value: 60.672000000000004
- type: ndcg_at_100
value: 43.114000000000004
- type: ndcg_at_1000
value: 40.459
- type: ndcg_at_3
value: 65.642
- type: ndcg_at_5
value: 64.033
- type: precision_at_1
value: 72.0
- type: precision_at_10
value: 63.0
- type: precision_at_100
value: 43.82
- type: precision_at_1000
value: 18.758
- type: precision_at_3
value: 68.0
- type: precision_at_5
value: 67.60000000000001
- type: recall_at_1
value: 0.191
- type: recall_at_10
value: 1.585
- type: recall_at_100
value: 10.113999999999999
- type: recall_at_1000
value: 38.83
- type: recall_at_3
value: 0.514
- type: recall_at_5
value: 0.853
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.857
- type: map_at_10
value: 4.154
- type: map_at_100
value: 7.1819999999999995
- type: map_at_1000
value: 8.501
- type: map_at_3
value: 2.3369999999999997
- type: map_at_5
value: 2.573
- type: mrr_at_1
value: 8.163
- type: mrr_at_10
value: 20.305
- type: mrr_at_100
value: 22.334
- type: mrr_at_1000
value: 22.397
- type: mrr_at_3
value: 17.347
- type: mrr_at_5
value: 18.673000000000002
- type: ndcg_at_1
value: 6.122
- type: ndcg_at_10
value: 10.18
- type: ndcg_at_100
value: 20.735999999999997
- type: ndcg_at_1000
value: 32.897999999999996
- type: ndcg_at_3
value: 10.299999999999999
- type: ndcg_at_5
value: 8.981
- type: precision_at_1
value: 8.163
- type: precision_at_10
value: 10.204
- type: precision_at_100
value: 5.061
- type: precision_at_1000
value: 1.276
- type: precision_at_3
value: 14.285999999999998
- type: precision_at_5
value: 10.612
- type: recall_at_1
value: 0.857
- type: recall_at_10
value: 8.57
- type: recall_at_100
value: 33.215
- type: recall_at_1000
value: 70.488
- type: recall_at_3
value: 3.527
- type: recall_at_5
value: 4.194
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.8126
- type: ap
value: 15.399874831474428
- type: f1
value: 55.733319106134225
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.167515563101304
- type: f1
value: 57.493718365420854
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 30.761111606661984
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.90057817249806
- type: cos_sim_ap
value: 65.13897428351787
- type: cos_sim_f1
value: 61.042677616025884
- type: cos_sim_precision
value: 57.75841770661644
- type: cos_sim_recall
value: 64.72295514511873
- type: dot_accuracy
value: 80.60439887941826
- type: dot_ap
value: 55.55250665214204
- type: dot_f1
value: 54.91251682368774
- type: dot_precision
value: 47.75653531018338
- type: dot_recall
value: 64.5910290237467
- type: euclidean_accuracy
value: 83.30452405078381
- type: euclidean_ap
value: 62.67995656680978
- type: euclidean_f1
value: 59.421025901472824
- type: euclidean_precision
value: 57.268722466960355
- type: euclidean_recall
value: 61.74142480211082
- type: manhattan_accuracy
value: 83.39393216904095
- type: manhattan_ap
value: 63.04154722022527
- type: manhattan_f1
value: 59.49575573292791
- type: manhattan_precision
value: 57.226419692907626
- type: manhattan_recall
value: 61.952506596306065
- type: max_accuracy
value: 83.90057817249806
- type: max_ap
value: 65.13897428351787
- type: max_f1
value: 61.042677616025884
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 86.91349400395855
- type: cos_sim_ap
value: 80.94267715916922
- type: cos_sim_f1
value: 73.80416854101064
- type: cos_sim_precision
value: 71.91700759789596
- type: cos_sim_recall
value: 75.79303972898059
- type: dot_accuracy
value: 85.36694221290799
- type: dot_ap
value: 76.58601958627575
- type: dot_f1
value: 71.08344449384913
- type: dot_precision
value: 68.51428571428572
- type: dot_recall
value: 73.85278718817369
- type: euclidean_accuracy
value: 86.23627119959639
- type: euclidean_ap
value: 79.39212423810176
- type: euclidean_f1
value: 72.54634884600833
- type: euclidean_precision
value: 71.32123195952983
- type: euclidean_recall
value: 73.81429011395134
- type: manhattan_accuracy
value: 86.72720922109676
- type: manhattan_ap
value: 80.52847011448226
- type: manhattan_f1
value: 73.27869471616877
- type: manhattan_precision
value: 71.91785899621914
- type: manhattan_recall
value: 74.69202340622113
- type: max_accuracy
value: 86.91349400395855
- type: max_ap
value: 80.94267715916922
- type: max_f1
value: 73.80416854101064
---
# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec import LLM2Vec
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading the base Llama-2-7b-chat model, along with custom code that enables bidirectional attention in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp",
)
model = model.merge_and_unload() # This can take several minutes on cpu
# Loading the unsupervised SimCSE model. This loads the trained LoRA weights on top of the MNTP model, so the final weights are: base model + MNTP (LoRA) + SimCSE (LoRA).
model = PeftModel.from_pretrained(
model, "McGill-NLP/LLM2Vec-Llama-2-7b-chat-hf-mntp-unsup-simcse"
)
# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)
# Encoding queries using instructions
instruction = (
"Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
[instruction, "how much protein should a female eat"],
[instruction, "summit define"],
]
q_reps = l2v.encode(queries)
# Encoding documents. Instructions are not required for documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)
# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))
print(cos_sim)
"""
tensor([[0.6231, 0.1744],
[0.1670, 0.4732]])
"""
```
## Questions
If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`). | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
MoMonir/Llama3-OpenBioLLM-8B-GGUF | MoMonir | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | 2024-05-04T04:08:54 | 2024-05-04T22:55:24 | 57 | 2 | ---
base_model: meta-llama/Meta-Llama-3-8B
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-8B
results: []
---
# MoMonir/Llama3-OpenBioLLM-8B
This model was converted to GGUF format from [`aaditya/Llama3-OpenBioLLM-8B`](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B).
Refer to the [original model card](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) for more details on the model.
<!-- README_GGUF.md-about-gguf start -->
### About GGUF ([TheBloke](https://huggingface.co/TheBloke) Description)
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
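For a quick start with this repository's quantized weights, a minimal llama-cpp-python sketch could look like the following. Note that the GGUF filename and the generation settings are illustrative assumptions, not part of the original release:

```python
# Minimal sketch using llama-cpp-python; the model_path below is an assumed local filename.
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama3-OpenBioLLM-8B.Q4_K_M.gguf",  # assumption: adjust to the file you downloaded
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

messages = [
    {"role": "system", "content": "You are an expert from the healthcare and biomedical domain."},
    {"role": "user", "content": "How long does it take for newborn jaundice to go away?"},
]

# create_chat_completion applies the chat template stored in the GGUF metadata (if present).
response = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.0)
print(response["choices"][0]["message"]["content"])
```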
================================
# #--# Original Model Card #--#
================================
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) base model. It incorporates a DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
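For intuition, the DPO objective referenced above reduces to a simple preference loss over chosen/rejected completions. The sketch below is illustrative only (the actual training code is not released here) and assumes per-sequence log-probabilities have already been computed under the policy and a frozen reference model:

```python
# Illustrative DPO loss from the paper linked above; variable names are our own assumptions.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Each argument is a tensor of summed log-probabilities of the chosen or
    rejected completion under the policy or the frozen reference model."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the implicit reward margin between chosen and rejected answers.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Tiny example with dummy log-probabilities:
print(dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.9]),
               torch.tensor([-13.0]), torch.tensor([-14.8])))
```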
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases; consider setting temperature = 0 to make this less likely.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
    do_sample=False,  # greedy decoding (temperature = 0), as recommended above
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
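The adapter settings above map roughly onto a PEFT `LoraConfig`. The snippet below is an approximation for readers who want to reproduce a similar setup; the exact Axolotl-generated configuration is not published:

```python
# Approximate PEFT configuration mirroring the QLoRA hyperparameters listed above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
```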
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- LM Evaluation Harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50% despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. All results are reported in the zero-shot setting, except for Med-PaLM-1 and Med-PaLM-2, for which only 5-shot accuracy is available.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.
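As an illustrative (not official) way to prompt the model for this task, reusing the `pipeline` and `terminators` objects from the transformers snippet earlier in this card; the prompt wording and output schema are our own assumptions:

```python
# Hypothetical entity-extraction prompt; reuses `pipeline` and `terminators` defined above.
ner_messages = [
    {"role": "system", "content": "You are OpenBioLLM, a biomedical assistant. Extract clinical "
        "entities from the user's text and return JSON with the keys: diseases, symptoms, "
        "medications, procedures, anatomy."},
    {"role": "user", "content": "Patient reports chest pain and dyspnea; started on aspirin "
        "81 mg daily after an ECG showed left ventricular hypertrophy."},
]
ner_prompt = pipeline.tokenizer.apply_chat_template(
    ner_messages, tokenize=False, add_generation_prompt=True
)
ner_outputs = pipeline(ner_prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=False)
print(ner_outputs[0]["generated_text"][len(ner_prompt):])
```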



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as research tools to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the models as follows:
```
@misc{OpenBioLLMs,
  author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources:
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | [
"QUESTION_ANSWERING"
] | [
"MEDQA",
"PUBMEDQA"
] |
VenkatNDivi77/gte-Qwen2-7B-instruct-Q4_K_M-GGUF | VenkatNDivi77 | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-7B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-02T07:32:26 | 2024-08-02T07:32:47 | 57 | 4 | ---
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
# VenkatNDivi77/gte-Qwen2-7B-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo VenkatNDivi77/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo VenkatNDivi77/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo VenkatNDivi77/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo VenkatNDivi77/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -c 2048
```
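The base `gte-Qwen2-7B-instruct` model is a text-embedding model (see the MTEB results in the metadata above), so you may want embedding vectors rather than completions. A minimal sketch, assuming your llama.cpp build includes the `llama-embedding` binary and that it accepts the same `--hf-repo`/`--hf-file` options as the CLI and server above:
```bash
# Hypothetical invocation: prints the embedding vector computed for the prompt.
./llama-embedding --hf-repo VenkatNDivi77/gte-Qwen2-7B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-7b-instruct-q4_k_m.gguf -p "What is the capital of France?"
```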
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-07T01:06:22 | 2024-11-07T01:10:38 | 57 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-deduped-v0 - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-70m-deduped-v0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-70m-deduped-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q2_K.gguf) | Q2_K | 0.04GB |
| [pythia-70m-deduped-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q3_K_S.gguf) | Q3_K_S | 0.04GB |
| [pythia-70m-deduped-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q3_K.gguf) | Q3_K | 0.04GB |
| [pythia-70m-deduped-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q3_K_M.gguf) | Q3_K_M | 0.04GB |
| [pythia-70m-deduped-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q3_K_L.gguf) | Q3_K_L | 0.04GB |
| [pythia-70m-deduped-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.IQ4_XS.gguf) | IQ4_XS | 0.04GB |
| [pythia-70m-deduped-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q4_0.gguf) | Q4_0 | 0.04GB |
| [pythia-70m-deduped-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.IQ4_NL.gguf) | IQ4_NL | 0.04GB |
| [pythia-70m-deduped-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q4_K_S.gguf) | Q4_K_S | 0.04GB |
| [pythia-70m-deduped-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q4_K.gguf) | Q4_K | 0.05GB |
| [pythia-70m-deduped-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q4_K_M.gguf) | Q4_K_M | 0.05GB |
| [pythia-70m-deduped-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q4_1.gguf) | Q4_1 | 0.05GB |
| [pythia-70m-deduped-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q5_0.gguf) | Q5_0 | 0.05GB |
| [pythia-70m-deduped-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q5_K_S.gguf) | Q5_K_S | 0.05GB |
| [pythia-70m-deduped-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q5_K.gguf) | Q5_K | 0.05GB |
| [pythia-70m-deduped-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q5_K_M.gguf) | Q5_K_M | 0.05GB |
| [pythia-70m-deduped-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q5_1.gguf) | Q5_1 | 0.05GB |
| [pythia-70m-deduped-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q6_K.gguf) | Q6_K | 0.06GB |
| [pythia-70m-deduped-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf/blob/main/pythia-70m-deduped-v0.Q8_0.gguf) | Q8_0 | 0.07GB |
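To fetch just one of the files above, a minimal sketch assuming the `huggingface-cli` tool (shipped with the `huggingface_hub` Python package) is installed:
```bash
# Download a single quantized file from the repo into the current directory.
huggingface-cli download RichardErkhov/EleutherAI_-_pythia-70m-deduped-v0-gguf pythia-70m-deduped-v0.Q4_K_M.gguf --local-dir .
```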
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-70M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-70M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the weights saved at training step 3000 (each "stepN" branch is one checkpoint).
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Load the matching tokenizer, pinned to the same revision for reproducibility.
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
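To check which intermediate checkpoints a given Pythia repository actually exposes, one option is to list its branches directly; a minimal sketch using plain git (the `stepN` branch-name pattern is assumed from the convention above):
```bash
# Each refs/heads/stepN entry corresponds to one saved training checkpoint.
git ls-remote https://huggingface.co/EleutherAI/pythia-70m-deduped | grep 'refs/heads/step'
```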
### Training
#### Training data
Pythia-70M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-06T10:28:36 | 2024-11-06T10:30:17 | 56 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-v0 - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-70m-v0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-70m-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q2_K.gguf) | Q2_K | 0.04GB |
| [pythia-70m-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q3_K_S.gguf) | Q3_K_S | 0.04GB |
| [pythia-70m-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q3_K.gguf) | Q3_K | 0.04GB |
| [pythia-70m-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q3_K_M.gguf) | Q3_K_M | 0.04GB |
| [pythia-70m-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q3_K_L.gguf) | Q3_K_L | 0.04GB |
| [pythia-70m-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.IQ4_XS.gguf) | IQ4_XS | 0.04GB |
| [pythia-70m-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q4_0.gguf) | Q4_0 | 0.04GB |
| [pythia-70m-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.IQ4_NL.gguf) | IQ4_NL | 0.04GB |
| [pythia-70m-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q4_K_S.gguf) | Q4_K_S | 0.04GB |
| [pythia-70m-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q4_K.gguf) | Q4_K | 0.05GB |
| [pythia-70m-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q4_K_M.gguf) | Q4_K_M | 0.05GB |
| [pythia-70m-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q4_1.gguf) | Q4_1 | 0.05GB |
| [pythia-70m-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q5_0.gguf) | Q5_0 | 0.05GB |
| [pythia-70m-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q5_K_S.gguf) | Q5_K_S | 0.05GB |
| [pythia-70m-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q5_K.gguf) | Q5_K | 0.05GB |
| [pythia-70m-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q5_K_M.gguf) | Q5_K_M | 0.05GB |
| [pythia-70m-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q5_1.gguf) | Q5_1 | 0.05GB |
| [pythia-70m-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q6_K.gguf) | Q6_K | 0.06GB |
| [pythia-70m-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-70m-v0-gguf/blob/main/pythia-70m-v0.Q8_0.gguf) | Q8_0 | 0.07GB |
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-70M
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-70M for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-70M as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-70M has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-70M will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-70M to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-70M may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-70M.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-70M.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |