| id (string, 9–104) | author (string, 3–36) | task_category (32 classes) | tags (sequence, 1–4.05k) | created_time (int64, 1,646B–1,742B) | last_modified (timestamp[s], 2021-02-13 00:06:56 – 2025-03-18 09:30:19) | downloads (int64, 0–15.6M) | likes (int64, 0–4.86k) | README (string, 44–1.01M) | matched_bigbio_names (sequence, 1–8) | is_bionlp (3 classes) |
|---|---|---|---|---|---|---|---|---|---|---|
EleutherAI/pythia-1.4b-deduped | EleutherAI | text-generation | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,675,978,924,000 | 2023-06-08T13:03:28 | 12,669 | 19 | ---
datasets:
- EleutherAI/the_pile_deduplicated
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1.4B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
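As an illustration (a minimal sketch of our own, not taken from the official documentation), the 154 revision names implied by this schedule can be generated programmatically and passed as `revision` when loading:
```python
from transformers import GPTNeoXForCausalLM

# step0, ten log-spaced checkpoints (step1 ... step512), then every 1000 steps up to step143000
revisions = (
    ["step0"]
    + [f"step{2**i}" for i in range(10)]
    + [f"step{i}" for i in range(1000, 144000, 1000)]
)
assert len(revisions) == 154

# Load an arbitrary intermediate checkpoint by its branch name.
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-1.4b-deduped",
    revision=revisions[-1],  # "step143000", identical to the `main` branch
)
```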
You may also further fine-tune and adapt Pythia-1.4B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems statistically most likely
need not produce the most “accurate” text. Never rely on Pythia-1.4B-deduped to
produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the `step3000` checkpoint of `pythia-70m-deduped`:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-1.4B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models are now
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"SCIQ"
] | Non_BioNLP |
LoneStriker/BioMistral-7B-TIES-6.0bpw-h6-exl2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"ties",
"medical",
"biology",
"conversational",
"fr",
"en",
"pl",
"es",
"it",
"ro",
"de",
"nl",
"dataset:pubmed",
"arxiv:2306.01708",
"arxiv:2402.10373",
"base_model:BioMistral/BioMistral-7B",
"base_model:merge:BioMistral/BioMistral-7B",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,708,358,824,000 | 2024-02-19T16:10:06 | 10 | 0 | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.1
- BioMistral/BioMistral-7B
datasets:
- pubmed
language:
- fr
- en
- pl
- es
- it
- ro
- de
- nl
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- mergekit
- merge
- ties
- medical
- biology
---
# BioMistral-7B-mistral7instruct-ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.1
- model: BioMistral/BioMistral-7B
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
normalize: true
dtype: bfloat16
```
<p align="center">
<img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/>
</p>
# BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains
**Abstract:**
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges.
In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released.
**Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.
# 1. BioMistral models
**BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.
| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |
# 2. Quantized Models
| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |
# 3. Using BioMistral
You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.
Loading the model and tokenizer:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```
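The snippet above only loads the backbone weights. For actually generating text, a minimal sketch (our addition, using the generic `AutoModelForCausalLM` API rather than anything BioMistral-specific) could look like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained(
    "BioMistral/BioMistral-7B",
    torch_dtype=torch.float16,
    device_map="auto",  # requires `accelerate`
)

# Illustrative prompt only; see the advisory notice above regarding medical use.
inputs = tokenizer("What are the main risk factors for type 2 diabetes?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```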
# 4. Supervised Fine-tuning Benchmark
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg. |
|-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------|
| **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 |
| **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 |
| | | | | | | | | | | | |
| **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 |
| **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** |
| **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 |
| **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> |
| | | | | | | | | | | | |
| **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 |
| **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 |
| **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 |
| **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 |
| | | | | | | | | | | | |
| **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 |
Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT.
# Citation BibTeX
Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373)
```bibtex
@misc{labrak2024biomistral,
title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains},
author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour},
year={2024},
eprint={2402.10373},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
| [
"MEDQA",
"PUBMEDQA"
] | BioNLP |
binbin83/setfit-MiniLM-dialog-act-fr | binbin83 | text-classification | [
"sentence-transformers",
"safetensors",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,706,696,844,000 | 2024-01-31T10:39:25 | 5 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# binbin83/setfit-MiniLM-dialog-act-13nov
The model is a multi-class, multi-label text classifier that distinguishes the different dialog acts in semi-structured interviews. The data used for fine-tuning were in French.
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("binbin83/setfit-MiniLM-dialog-act-13nov")
label_dict = {'Introductory': 0, 'FollowUp': 1, 'Probing': 2, 'Specifying': 3, 'Structuring': 4, 'DirectQuestion': 5, 'Interpreting': 6, 'Ending': 7}
# Run inference
preds = model(["Vous pouvez continuer", "Pouvez-vous me dire précisément quel a été l'ordre chronologique des événements ?"])
# Map each multi-hot prediction vector back to its label names
labels = [[name for name, p in zip(label_dict, pred) if p] for pred in preds]
```
## Labels and training data
Brinkmann, S., & Kvale, S. (1) define the following classification of dialog acts in interviews:
* Introductory: Can you tell me about ... (something specific)?,
* Follow-up verbal cues: repeat back keywords to participants, ask for reflection or unpacking of point just made,
* Probing: Can you say a little more about X? Why do you think X...? (for example, Why do you think X is that way? Why do you think X is important?),
* Specifying: Can you give me an example of X?,
* Indirect: How do you think other people view X?,
* Structuring: Thank you for that. I’d like to move to another topic...
* Direct (later stages): When you mention X, are you thinking like Y or Z?,
* Interpreting: So, what I have gathered is that...,
* Ending: I have asked all the questions I had, but I wanted to check whether there is something else about your experience/understanding we haven’t covered? Do you have any questions for me?,
On our corpus of interviews, we manually labelled 500 turns of speech using this classification. We used 70% of the data for training and the remaining 30% for evaluation.
The entire corpus is composed of the following examples:
('DirectQuestion', 23), ('Probing', 15), ('Interpreting', 15), ('Specifying', 14), ('Structuring', 7), ('FollowUp', 6), ('Introductory', 5), ('Ending', 5)
(1) Brinkmann, S., & Kvale, S. (2015). InterViews: Learning the Craft of Qualitative Research Interviewing. (3. ed.) SAGE Publications.
## Training and Performances
We fine-tune "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
using SetFit with `CosineSimilarityLoss` and the following parameters: epochs = 20, batch_size = 32, num_iterations = 50.
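A minimal training sketch with these settings (our reconstruction using the classic `SetFitTrainer` API; the two datasets below are placeholders, not the actual interview corpus):
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Placeholder data: each label is a multi-hot vector over the 8 dialog-act classes.
train_ds = Dataset.from_dict({"text": ["Vous pouvez continuer"], "label": [[0, 1, 0, 0, 0, 0, 0, 0]]})
eval_ds = Dataset.from_dict({"text": ["Merci, passons à un autre sujet."], "label": [[0, 0, 0, 0, 1, 0, 0, 0]]})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    multi_target_strategy="one-vs-rest",
)
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    loss_class=CosineSimilarityLoss,
    num_epochs=20,
    batch_size=32,
    num_iterations=50,
)
trainer.train()
```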
On the test dataset:
('Probing', 146), ('Specifying', 135), ('FollowUp', 134), ('DirectQuestion', 125), ('Interpreting', 44), ('Structuring', 27), ('Introductory', 12), ('Ending', 12)
On our test dataset, we obtain the following results:
{'f1': 0.35005547563028, 'f1_micro': 0.3686131386861314, 'f1_sample': 0.3120075046904315, 'accuracy': 0.19887429643527205}
## BibTeX entry and citation info
To cite the current study:
```bibtex
@article{
doi = {conference paper},
url = {https://arxiv.org/abs/2209.11055},
author = {Quillivic Robin, Charles Payet},
keywords = {NLP, JADT},
title = {Semi-Structured Interview Analysis: A French NLP Toolbox for Social Sciences},
publisher = {JADT},
year = {2024},
copyright = {Creative Commons Attribution 4.0 International}
}
```
To cite the setFit paper:
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"CRAFT"
] | Non_BioNLP |
microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext | microsoft | fill-mask | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"exbert",
"en",
"arxiv:2007.15779",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-11-06T18:03:43 | 124,839 | 227 | ---
language: en
license: mit
tags:
- exbert
widget:
- text: '[MASK] is a tumor suppressor gene.'
---
## MSR BiomedBERT (abstracts + full text)
<div style="border: 2px solid orange; border-radius:10px; padding:0px 10px; width: fit-content;">
* This model was previously named **"PubMedBERT (abstracts + full text)"**.
* You can either adopt the new model name "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext" or update your `transformers` library to version 4.22+ if you need to refer to the old name.
</div>
Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. [Recent work](https://arxiv.org/abs/2007.15779) shows that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models.
BiomedBERT is pretrained from scratch using _abstracts_ from [PubMed](https://pubmed.ncbi.nlm.nih.gov/) and _full-text_ articles from [PubMedCentral](https://www.ncbi.nlm.nih.gov/pmc/). This model achieves state-of-the-art performance on many biomedical NLP tasks, and currently holds the top score on the [Biomedical Language Understanding and Reasoning Benchmark](https://aka.ms/BLURB).
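As a quick usage illustration (a minimal sketch of ours, reusing the masked sentence from the widget example), the model can be queried through the standard `fill-mask` pipeline:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
)

# Prints the top predicted tokens for the masked position along with their scores.
for prediction in fill_mask("[MASK] is a tumor suppressor gene."):
    print(prediction["token_str"], round(prediction["score"], 3))
```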
## Citation
If you find BiomedBERT useful in your research, please cite the following paper:
```latex
@misc{pubmedbert,
author = {Yu Gu and Robert Tinn and Hao Cheng and Michael Lucas and Naoto Usuyama and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon},
title = {Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing},
year = {2020},
eprint = {arXiv:2007.15779},
}
```
<a href="https://huggingface.co/exbert/?model=microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| [
"BLURB"
] | BioNLP |
dzanbek/fd3b6340-e602-4cbf-86a8-773f84a73015 | dzanbek | null | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"region:us"
] | 1,736,776,518,000 | 2025-01-13T15:30:14 | 1 | 0 | ---
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
library_name: peft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fd3b6340-e602-4cbf-86a8-773f84a73015
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 6d6f61a5e2d2e90a_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/6d6f61a5e2d2e90a_train_data.json
  type:
    field_input: context
    field_instruction: question
    field_output: final_decision
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dzanbek/fd3b6340-e602-4cbf-86a8-773f84a73015
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
  0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/6d6f61a5e2d2e90a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
  pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 25dafc7c-5eb3-4bf6-b3fd-a340821007ea
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 25dafc7c-5eb3-4bf6-b3fd-a340821007ea
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# fd3b6340-e602-4cbf-86a8-773f84a73015
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4638
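This repository contains LoRA adapter weights. A minimal loading sketch (our addition; it assumes the adapter and its config are fetched from this repo id and that the large base model fits in memory):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "dzanbek/fd3b6340-e602-4cbf-86a8-773f84a73015"

# Loads the base model referenced in the adapter config and applies the LoRA weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(repo_id)
# If the repo does not ship tokenizer files, load the tokenizer from the base model instead.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```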
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.0510 |
| 1.8879 | 0.0003 | 8 | 1.6765 |
| 1.4689 | 0.0006 | 16 | 1.4930 |
| 1.4127 | 0.0010 | 24 | 1.4638 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 | [
"PUBMEDQA"
] | BioNLP |
exafluence/EXF-Medistral-Nemo-12B-4bit | exafluence | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"Mistral-Nemo-Base-2407",
"Medical",
"Healthcare",
"Open-MedQA-Nexus",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,729,272,895,000 | 2024-10-19T13:42:56 | 0 | 0 | ---
base_model: unsloth/mistral-nemo-instruct-2407-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- Mistral-Nemo-Base-2407
- Medical
- Healthcare
- Open-MedQA-Nexus
---
# EXF-Medistral-Nemo-12B
## Model Description
**EXF-Medistral-Nemo-12B** is a fine-tuned version of the **Mistral-Nemo-12B** model, optimized for tasks in the medical domain. It has been trained on the **Open-Nexus-MedQA** dataset, which integrates a wide range of medical knowledge from public datasets like **ChatDoctor**, **icliniq**, and others, to enhance the model’s ability to answer medical questions accurately and reliably. This model is designed to assist in clinical decision support, medical coding, and patient care by generating responses based on comprehensive medical knowledge.
## Model Architecture
- **Base Model**: Mistral-Nemo-12B
- **Parameters**: 12 billion
- **Fine-tuning Dataset**: Open-Nexus-MedQA
- **Task**: Medical question-answering (QA), medical coding, and healthcare information retrieval.
## Training Data
The model was fine-tuned on the **Open-Nexus-MedQA** dataset, which aggregates data from multiple medical QA sources such as:
- **ChatDoctor**
- **icliniq.com**
- **HealthCareMagic**
- **CareQA**
- **MedInstruct**
The dataset contains medical queries ranging from simple conditions to complex diagnoses, accompanied by accurate, domain-specific responses, making it a robust training source for real-world medical applications.
## Intended Use
**EXF-Medistral-Nemo-12B** is ideal for:
- **Medical Question-Answering**: It can be used for generating responses to patient queries or supporting healthcare professionals with clinical information.
- **Medical Coding**: The model supports tasks related to **CMS**, **OASIS**, **ICD-10**, and other coding systems.
- **Clinical Decision Support**: Assisting doctors and healthcare providers by offering evidence-based suggestions or answers.
- **Patient Care Tools**: Powering medical chatbots or virtual assistants for patients seeking health information.
## Performance
The model has been fine-tuned for precision in the medical domain, demonstrating high accuracy in understanding and generating responses to complex medical queries. It excels in:
- **Medical terminology comprehension**
- **Providing accurate ICD-10 and CMS codes**
- **Generating medically relevant and safe answers**
## Limitations
- **Not a Diagnostic Tool**: This model is not intended to replace medical professionals or provide definitive medical diagnoses. Always consult with a licensed healthcare provider for medical advice.
- **Training Data Bias**: The dataset is based on publicly available medical QA data, which might not cover all edge cases or international healthcare systems.
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("exafluence/EXF-Medistral-Nemo-12B-4bit")
model = AutoModelForCausalLM.from_pretrained("exafluence/EXF-Medistral-Nemo-12B-4bit")
input_text = "What are the symptoms of type 2 diabetes?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## License
This model is provided under a proprietary license. Usage is restricted to non-commercial purposes unless explicit permission is granted.
## Citation
If you use this model, please cite:
```bibtex
@inproceedings{exafluence2024EXFMedistralNemo12B,
title={EXF-Medistral-Nemo-12B: A Fine-Tuned Medical Language Model for Healthcare Applications},
author={Exafluence Inc.},
year={2024},
url={https://huggingface.co/exafluence/EXF-Medistral-Nemo-12B},
doi={https://doi.org/10.57967/hf/3284}
}
```
## Contact
For any questions or inquiries regarding usage, licensing, or access, please contact Exafluence Inc.
# Uploaded model
- **Developed by:** exafluence
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-nemo-instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
| [
"MEDQA"
] | BioNLP |
lhong4759/6926fa73-9c18-487b-ade4-a5bdca5efd9f | lhong4759 | null | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,736,776,310,000 | 2025-01-13T15:07:25 | 1 | 0 | ---
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
library_name: peft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6926fa73-9c18-487b-ade4-a5bdca5efd9f
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 9687e4b10aa5235f_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/9687e4b10aa5235f_train_data.json
  type:
    field_input: context
    field_instruction: question
    field_output: final_decision
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lhong4759/6926fa73-9c18-487b-ade4-a5bdca5efd9f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9687e4b10aa5235f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7dc6f577-ea4d-4a37-a58b-fdcc42f9a448
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7dc6f577-ea4d-4a37-a58b-fdcc42f9a448
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6926fa73-9c18-487b-ade4-a5bdca5efd9f
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0447
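This repository likewise contains LoRA adapter weights rather than a full model. As a sketch (our addition, not part of the training run), the adapter can be merged into the base weights for standalone use:
```python
from peft import AutoPeftModelForCausalLM

# Loads the base model named in the adapter config, applies the LoRA weights,
# then folds them into the base weights so the result can be used without PEFT.
model = AutoPeftModelForCausalLM.from_pretrained("lhong4759/6926fa73-9c18-487b-ade4-a5bdca5efd9f")
merged = model.merge_and_unload()
merged.save_pretrained("./merged-model")
```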
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0001 | 0.0080 | 200 | 0.0447 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 | [
"PUBMEDQA"
] | BioNLP |
RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 1,719,303,190,000 | 2024-06-25T12:58:47 | 47 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
JSL-Med-Sft-Llama-3-8B - GGUF
- Model creator: https://huggingface.co/johnsnowlabs/
- Original model: https://huggingface.co/johnsnowlabs/JSL-Med-Sft-Llama-3-8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [JSL-Med-Sft-Llama-3-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [JSL-Med-Sft-Llama-3-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [JSL-Med-Sft-Llama-3-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [JSL-Med-Sft-Llama-3-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [JSL-Med-Sft-Llama-3-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [JSL-Med-Sft-Llama-3-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [JSL-Med-Sft-Llama-3-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [JSL-Med-Sft-Llama-3-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [JSL-Med-Sft-Llama-3-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [JSL-Med-Sft-Llama-3-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [JSL-Med-Sft-Llama-3-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [JSL-Med-Sft-Llama-3-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [JSL-Med-Sft-Llama-3-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [JSL-Med-Sft-Llama-3-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [JSL-Med-Sft-Llama-3-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [JSL-Med-Sft-Llama-3-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [JSL-Med-Sft-Llama-3-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [JSL-Med-Sft-Llama-3-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [JSL-Med-Sft-Llama-3-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [JSL-Med-Sft-Llama-3-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [JSL-Med-Sft-Llama-3-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [JSL-Med-Sft-Llama-3-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf/blob/main/JSL-Med-Sft-Llama-3-8B.Q8_0.gguf) | Q8_0 | 7.95GB |
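Any of the files above can be fetched programmatically; a minimal sketch (our addition) using `huggingface_hub`:
```python
from huggingface_hub import hf_hub_download

# Substitute any filename from the table above for a different quantization level.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/johnsnowlabs_-_JSL-Med-Sft-Llama-3-8B-gguf",
    filename="JSL-Med-Sft-Llama-3-8B.Q4_K_M.gguf",
)
print(gguf_path)  # local path, usable with llama.cpp-compatible runtimes
```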
Original model description:
---
tags:
- llama-3-8b
- sft
- medical
base_model:
- meta-llama/Meta-Llama-3-8B
license: cc-by-nc-nd-4.0
---
# JSL-Med-Sft-Llama-3-8B
[<img src="https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf">](http://www.johnsnowlabs.com)
This model is developed by [John Snow Labs](https://www.johnsnowlabs.com/).
This model is available under a [CC-BY-NC-ND](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en) license and must also conform to this [Acceptable Use Policy](https://huggingface.co/johnsnowlabs). If you need to license this model for commercial use, please contact us at [email protected].
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "johnsnowlabs/JSL-Med-Sft-Llama-3-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🏆 Evaluation
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc_norm|0.5803|± |0.0067|
| | |none | 0|acc |0.6141|± |0.0057|
| - medmcqa |Yaml |none | 0|acc |0.5752|± |0.0076|
| | |none | 0|acc_norm|0.5752|± |0.0076|
| - medqa_4options |Yaml |none | 0|acc |0.5970|± |0.0138|
| | |none | 0|acc_norm|0.5970|± |0.0138|
| - anatomy (mmlu) | 0|none | 0|acc |0.6963|± |0.0397|
| - clinical_knowledge (mmlu) | 0|none | 0|acc |0.7472|± |0.0267|
| - college_biology (mmlu) | 0|none | 0|acc |0.7847|± |0.0344|
| - college_medicine (mmlu) | 0|none | 0|acc |0.6185|± |0.0370|
| - medical_genetics (mmlu) | 0|none | 0|acc |0.8300|± |0.0378|
| - professional_medicine (mmlu)| 0|none | 0|acc |0.7022|± |0.0278|
| - pubmedqa | 1|none | 0|acc |0.7480|± |0.0194|
|Groups|Version|Filter|n-shot| Metric |Value | |Stderr|
|------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc_norm|0.5803|± |0.0067|
| | |none | 0|acc |0.6141|± |0.0057|
| [
"MEDQA",
"PUBMEDQA"
] | BioNLP |
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-935443 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-935443",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716,912,319,000 | 2024-05-28T16:05:51 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-935443
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-935443',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"SCIFACT"
] | Non_BioNLP |
Salesforce/xgen-mm-phi3-mini-instruct-r-v1 | Salesforce | image-text-to-text | [
"transformers",
"safetensors",
"xgenmm",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"en",
"arxiv:2408.08872",
"license:cc-by-nc-4.0",
"region:us"
] | 1,714,972,746,000 | 2025-02-03T06:26:42 | 843 | 185 | ---
language:
- en
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
---
# 📣 News
📌 [08/19/2024] xGen-MM-v1.5 released:
- [🤗 xgen-mm-phi3-mini-instruct-interleave-r-v1.5](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-interleave-r-v1.5)
- [🤗 xgen-mm-phi3-mini-base-r-v1.5](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-base-r-v1.5)
- [🤗 xgen-mm-phi3-mini-instruct-singleimg-r-v1.5](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-singleimg-r-v1.5)
- [🤗 xgen-mm-phi3-mini-instruct-dpo-r-v1.5](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-dpo-r-v1.5)
# Model description
We are excited to announce the continuation and rebranding of our **BLIP series** into **XGen-MM**, to be better aligned with Salesforce's unified XGen initiative for large foundation models! This rebranding marks a significant step in our ongoing development of cutting-edge multimodal technologies.
`XGen-MM` is a series of the latest foundational Large Multimodal Models (LMMs) developed by Salesforce AI Research. This series advances upon the successful designs of the `BLIP` series, incorporating fundamental enhancements that ensure a more robust and superior foundation. \
These models have been trained at scale on high-quality image caption datasets and interleaved image-text data. XGen-MM highlights a few features below,
* The **pretrained** foundation model, `xgen-mm-phi3-mini-base-r-v1`, achieves state-of-the-art performance under 5b parameters and demonstrates strong in-context learning capabilities.
* The **instruct** fine-tuned model, `xgen-mm-phi3-mini-instruct-r-v1`, achieves state-of-the-art performance among open-source and closed-source VLMs under 5b parameters.
* `xgen-mm-phi3-mini-instruct-r-v1` supports flexible high-resolution image encoding with efficient visual token sampling.
More technical details will come with a technical report soon.
# Results
### Pretrain (base model without instruction tuning)
| Model | Shot | COCO (val) | NoCaps (val) | TextCaps (val) | OKVQA (val) | TextVQA (val) | VizWiz (testdev) | VQAv2 (testdev) |
|-------------|------|------------|--------------|----------------|--------------|---------------|------------------|-----------------|
| Flamingo-3B | 4 | 85.0 | - | - | 43.3 | 32.7 | 34 | 53.2 |
| | 8 | 90.6 | - | - | 44.6 | 32.4 | 38.4 | 55.4 |
| MM1-3B | 0 | 73.5 | 55.6 | 63.3 | 26.1 | 29.4 | 15.6 | 46.2 |
| | 4 | 112.3 | 99.7 | 84.1 | 48.6 | 45.3 | 38.0 | 57.9 |
| | 8 | 114.6 | 104.7 | 88.8 | 48.4 | 44.6 | 46.4 | 63.6 |
| **xgen-mm-phi3-mini-base-r-v1 (Ours)**| 0 | **81.7** | **80.2** | 60.7 | **26.5** | **36.0** | **21.2** | **48.1** |
| | 4 | 110.5 | **101.7** | **84.6** | **49.2** | **46.1** | **38.4** | **63.9** |
| | 8 | 112.1 | 104.4 | 87.7 | **49.1** | **46.4** | 44.3 | **63.8** |
### Instruct (after instruction tuning)
| Model | SEED-IMG | MMBench(dev) | MME-total | MME-P | MME-C | MMStar | MMMU (val) | MMVet | MathVista (mini) | ScienceQA (test) | POPE | AI2D |
|----------------------------|----------|--------------|-----------|----------|---------|----------|------------|----------|------------------|------------------|----------|----------|
| MM1-3B-Chat | 68.8 | 67.8 | 1761 | **1482** | 279 | - | 33.9 | 43.7 | - | - | **87.4** | - |
| openbmb/MiniCPM-V-2 | 67.1 | 69.6 | 1808 | - | - | - | 38.2 | - | 38.7 | - | - | - |
| VILA1.5-3B | 67.9 | 63.4 | - | 1442 | - | - | 33.3 | 35.4 | - | 69.0 | 85.9 | - |
| xtuner/llava-phi-3-mini-hf | 70.0 | 69.2 | 1790 | 1477 | 313 | 43.7 | **41.4** | - | - | 73.7 | 87.3 | 69.3 |
| **xgen-mm-phi3-mini-instruct-r-v1 (Ours)** | **72.1** | **74.1** | **1827** | 1467 | **360** | **44.6** | 39.8 | **45.1** | **39.3** | **74.2** | 87.2 | **75.8** |
# How to use
~~We previously required the development version (`"4.41.0.dev0"`) of the `transformers` library. To get it, as of 05/07/2024, one could use `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`.~~ This is no longer necessary: the codebase is now compatible with `transformers==4.41.1` (see the changelog below).
```python
from transformers import AutoModelForVision2Seq, AutoTokenizer, AutoImageProcessor, StoppingCriteria
import torch
import requests
from PIL import Image
# define the prompt template
def apply_prompt_template(prompt):
s = (
'<|system|>\nA chat between a curious user and an artificial intelligence assistant. '
"The assistant gives helpful, detailed, and polite answers to the user's questions.<|end|>\n"
f'<|user|>\n<image>\n{prompt}<|end|>\n<|assistant|>\n'
)
return s
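# Custom stopping criterion: halt generation once the end-of-turn token is produced
# (token id 32007 corresponds to <|end|> in the Phi-3 tokenizer).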
class EosListStoppingCriteria(StoppingCriteria):
def __init__(self, eos_sequence = [32007]):
self.eos_sequence = eos_sequence
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
last_ids = input_ids[:,-len(self.eos_sequence):].tolist()
return self.eos_sequence in last_ids
# load models
model_name_or_path = "Salesforce/xgen-mm-phi3-mini-instruct-r-v1"
model = AutoModelForVision2Seq.from_pretrained(model_name_or_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True, use_fast=False, legacy=False)
image_processor = AutoImageProcessor.from_pretrained(model_name_or_path, trust_remote_code=True)
tokenizer = model.update_special_tokens(tokenizer)
# craft a test sample
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
query = "how many dogs are in the picture?"
model = model.cuda()
inputs = image_processor([raw_image], return_tensors="pt", image_aspect_ratio='anyres')
prompt = apply_prompt_template(query)
language_inputs = tokenizer([prompt], return_tensors="pt")
inputs.update(language_inputs)
inputs = {name: tensor.cuda() for name, tensor in inputs.items()}
generated_text = model.generate(**inputs, image_size=[raw_image.size],
pad_token_id=tokenizer.pad_token_id,
do_sample=False, max_new_tokens=768, top_p=None, num_beams=1,
stopping_criteria = [EosListStoppingCriteria()],
)
prediction = tokenizer.decode(generated_text[0], skip_special_tokens=True).split("<|end|>")[0]
print("==> prediction: ", prediction)
# output: ==> prediction: There is one dog in the picture.
```
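The example above uses the `anyres` image aspect-ratio mode for high-resolution inputs and greedy decoding (`do_sample=False`, `num_beams=1`), so the output is deterministic.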
More comprehensive examples can be found in the [notebook](demo.ipynb).
# Reproducibility
Our SFT evaluation is based on VLMEvalKit, in which we fixed some inconsistencies with the official benchmarks (e.g., the LLM judge API). During development, we noticed that the raw resolution of the input image can noticeably affect the model output in some cases.
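If you want to control for this effect, one option is to standardize the input resolution before preprocessing. The helper below is only a minimal sketch for illustration (the function name and the `max_side=1024` target are illustrative choices, not part of the official evaluation pipeline):
```python
from PIL import Image

def load_image_with_capped_resolution(path_or_file, max_side=1024):
    # Illustrative helper: cap the longer side at `max_side` pixels while preserving
    # the aspect ratio, so inputs reach the image processor at a consistent scale.
    img = Image.open(path_or_file).convert("RGB")
    scale = max_side / max(img.size)
    if scale < 1.0:
        # Downscale with Pillow's default resampling filter.
        img = img.resize((round(img.width * scale), round(img.height * scale)))
    return img
```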
# Bias, Risks, Limitations, and Ethical Considerations
The main data sources are from the internet, including webpages,
image stock sites, and curated datasets released by the research community. We have excluded certain data, such as LAION, due to known CSAM concerns.
The model may be subject to bias from the original data sources, as well as bias from LLMs and commercial APIs.
We strongly recommend that users assess safety and fairness before applying the model to downstream applications.
# Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
# License
Our code and weights are released under the Creative Commons Attribution Non Commercial 4.0 [LICENSE](LICENSE.txt). Please fill out a form at [here](https://forms.gle/ffPc9oZC2ZGeJ1N68) to consult the commercial use of model weights.
# Code acknowledgment
[LAVIS](https://github.com/salesforce/LAVIS) \
[openflamingo](https://github.com/mlfoundations/open_flamingo) \
[VLMEvalKit](https://github.com/open-compass/VLMEvalKit/tree/main)
# Citation
```
@misc{xue2024xgenmmblip3familyopen,
title={xGen-MM (BLIP-3): A Family of Open Large Multimodal Models},
author={Le Xue and Manli Shu and Anas Awadalla and Jun Wang and An Yan and Senthil Purushwalkam and Honglu Zhou and Viraj Prabhu and Yutong Dai and Michael S Ryoo and Shrikant Kendre and Jieyu Zhang and Can Qin and Shu Zhang and Chia-Chih Chen and Ning Yu and Juntao Tan and Tulika Manoj Awalgaonkar and Shelby Heinecke and Huan Wang and Yejin Choi and Ludwig Schmidt and Zeyuan Chen and Silvio Savarese and Juan Carlos Niebles and Caiming Xiong and Ran Xu},
year={2024},
eprint={2408.08872},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2408.08872},
}
```
# Troubleshoot
1. If you are missing any packages, consider installing the following:
```
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
pip install open_clip_torch==2.24.0
pip install einops
pip install einops-exts
pip install transformers==4.41.1
```
# Changelog
* 05/24/2024
* update codebase to be compatible with `transformers==4.41.1`. | [
"CHIA",
"CRAFT"
] | Non_BioNLP |
minishlab/M2V_base_glove_subword | minishlab | null | [
"model2vec",
"onnx",
"safetensors",
"embeddings",
"static-embeddings",
"mteb",
"sentence-transformers",
"en",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:quantized:BAAI/bge-base-en-v1.5",
"license:mit",
"model-index",
"region:us"
] | 1,727,893,116,000 | 2025-01-21T19:18:20 | 44 | 2 | ---
base_model: BAAI/bge-base-en-v1.5
language:
- en
library_name: model2vec
license: mit
tags:
- embeddings
- static-embeddings
- mteb
- sentence-transformers
model-index:
- name: M2V_base_glove_subword
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.4167916041979
- type: ap
value: 18.202949885376736
- type: ap_weighted
value: 18.202949885376736
- type: f1
value: 54.98453722214898
- type: f1_weighted
value: 72.84623161234782
- type: main_score
value: 66.4167916041979
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.044776119403
- type: ap
value: 31.604323176091363
- type: ap_weighted
value: 31.604323176091363
- type: f1
value: 62.53323789238326
- type: f1_weighted
value: 71.2243167389672
- type: main_score
value: 68.044776119403
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 67.21602499999999
- type: ap
value: 62.24635378305934
- type: ap_weighted
value: 62.24635378305934
- type: f1
value: 66.68107362746888
- type: f1_weighted
value: 66.68107362746888
- type: main_score
value: 67.21602499999999
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 32.384
- type: f1
value: 32.05276706247388
- type: f1_weighted
value: 32.05276706247388
- type: main_score
value: 32.384
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 29.599999999999998
- type: map_at_1
value: 14.438
- type: map_at_10
value: 23.803
- type: map_at_100
value: 24.85
- type: map_at_1000
value: 24.925
- type: map_at_20
value: 24.395
- type: map_at_3
value: 20.519000000000002
- type: map_at_5
value: 22.183
- type: mrr_at_1
value: 14.65149359886202
- type: mrr_at_10
value: 23.8787847998374
- type: mrr_at_100
value: 24.945306088918446
- type: mrr_at_1000
value: 25.019829460538446
- type: mrr_at_20
value: 24.48722055512828
- type: mrr_at_3
value: 20.661450924608815
- type: mrr_at_5
value: 22.254623044096704
- type: nauc_map_at_1000_diff1
value: 11.677995826704251
- type: nauc_map_at_1000_max
value: -1.7036225489906935
- type: nauc_map_at_1000_std
value: 13.608156164552337
- type: nauc_map_at_100_diff1
value: 11.69898827728831
- type: nauc_map_at_100_max
value: -1.6896771319000576
- type: nauc_map_at_100_std
value: 13.657417732243642
- type: nauc_map_at_10_diff1
value: 11.381029737026354
- type: nauc_map_at_10_max
value: -1.7701185174946374
- type: nauc_map_at_10_std
value: 12.878108250073275
- type: nauc_map_at_1_diff1
value: 13.270492079181698
- type: nauc_map_at_1_max
value: -5.320050131923338
- type: nauc_map_at_1_std
value: 9.145476528935111
- type: nauc_map_at_20_diff1
value: 11.636255256667027
- type: nauc_map_at_20_max
value: -1.5972839976414983
- type: nauc_map_at_20_std
value: 13.42888801202754
- type: nauc_map_at_3_diff1
value: 10.870897941570064
- type: nauc_map_at_3_max
value: -3.2129671196535785
- type: nauc_map_at_3_std
value: 11.017585726260462
- type: nauc_map_at_5_diff1
value: 11.323413777040606
- type: nauc_map_at_5_max
value: -2.4760041260478904
- type: nauc_map_at_5_std
value: 12.029899752157688
- type: nauc_mrr_at_1000_diff1
value: 10.742715816971687
- type: nauc_mrr_at_1000_max
value: -1.7753021168425986
- type: nauc_mrr_at_1000_std
value: 13.427125200171295
- type: nauc_mrr_at_100_diff1
value: 10.765635069630173
- type: nauc_mrr_at_100_max
value: -1.7612670077500088
- type: nauc_mrr_at_100_std
value: 13.47656838026296
- type: nauc_mrr_at_10_diff1
value: 10.35632278742462
- type: nauc_mrr_at_10_max
value: -1.9593749415315034
- type: nauc_mrr_at_10_std
value: 12.726659151321748
- type: nauc_mrr_at_1_diff1
value: 12.18980309927674
- type: nauc_mrr_at_1_max
value: -4.630938342229097
- type: nauc_mrr_at_1_std
value: 8.958732319219887
- type: nauc_mrr_at_20_diff1
value: 10.689736739154682
- type: nauc_mrr_at_20_max
value: -1.689535123826222
- type: nauc_mrr_at_20_std
value: 13.251612129414687
- type: nauc_mrr_at_3_diff1
value: 9.852214578314367
- type: nauc_mrr_at_3_max
value: -3.33487013011876
- type: nauc_mrr_at_3_std
value: 10.877855458667428
- type: nauc_mrr_at_5_diff1
value: 10.270810271458073
- type: nauc_mrr_at_5_max
value: -2.677309074821081
- type: nauc_mrr_at_5_std
value: 11.882706514806639
- type: nauc_ndcg_at_1000_diff1
value: 12.681360792690615
- type: nauc_ndcg_at_1000_max
value: 0.30517667512214525
- type: nauc_ndcg_at_1000_std
value: 17.50402456957222
- type: nauc_ndcg_at_100_diff1
value: 13.169226394338585
- type: nauc_ndcg_at_100_max
value: 0.7398525127020716
- type: nauc_ndcg_at_100_std
value: 18.85172563798729
- type: nauc_ndcg_at_10_diff1
value: 11.874278269234175
- type: nauc_ndcg_at_10_max
value: 0.742178692340471
- type: nauc_ndcg_at_10_std
value: 15.317281484021455
- type: nauc_ndcg_at_1_diff1
value: 13.270492079181698
- type: nauc_ndcg_at_1_max
value: -5.320050131923338
- type: nauc_ndcg_at_1_std
value: 9.145476528935111
- type: nauc_ndcg_at_20_diff1
value: 12.77788972412781
- type: nauc_ndcg_at_20_max
value: 1.3509880113588073
- type: nauc_ndcg_at_20_std
value: 17.20165293396484
- type: nauc_ndcg_at_3_diff1
value: 10.59415387301215
- type: nauc_ndcg_at_3_max
value: -2.5275550083941534
- type: nauc_ndcg_at_3_std
value: 11.765849158403212
- type: nauc_ndcg_at_5_diff1
value: 11.479181039452788
- type: nauc_ndcg_at_5_max
value: -1.1695551867031702
- type: nauc_ndcg_at_5_std
value: 13.366137540722084
- type: nauc_precision_at_1000_diff1
value: 24.13842177102596
- type: nauc_precision_at_1000_max
value: 15.778091220725535
- type: nauc_precision_at_1000_std
value: 57.991198111902065
- type: nauc_precision_at_100_diff1
value: 21.17988197332234
- type: nauc_precision_at_100_max
value: 10.072329200503201
- type: nauc_precision_at_100_std
value: 44.359368185927
- type: nauc_precision_at_10_diff1
value: 13.619970980685995
- type: nauc_precision_at_10_max
value: 7.683020411909876
- type: nauc_precision_at_10_std
value: 21.79402262800611
- type: nauc_precision_at_1_diff1
value: 13.270492079181698
- type: nauc_precision_at_1_max
value: -5.320050131923338
- type: nauc_precision_at_1_std
value: 9.145476528935111
- type: nauc_precision_at_20_diff1
value: 16.97319915821357
- type: nauc_precision_at_20_max
value: 10.315905315799096
- type: nauc_precision_at_20_std
value: 28.82688927043146
- type: nauc_precision_at_3_diff1
value: 10.02754671342287
- type: nauc_precision_at_3_max
value: -0.8699973044493069
- type: nauc_precision_at_3_std
value: 13.603782123513389
- type: nauc_precision_at_5_diff1
value: 12.084329744277978
- type: nauc_precision_at_5_max
value: 2.074626490481966
- type: nauc_precision_at_5_std
value: 16.608205795807304
- type: nauc_recall_at_1000_diff1
value: 24.138421771026135
- type: nauc_recall_at_1000_max
value: 15.778091220725404
- type: nauc_recall_at_1000_std
value: 57.99119811190208
- type: nauc_recall_at_100_diff1
value: 21.179881973322274
- type: nauc_recall_at_100_max
value: 10.072329200503164
- type: nauc_recall_at_100_std
value: 44.359368185926975
- type: nauc_recall_at_10_diff1
value: 13.619970980685975
- type: nauc_recall_at_10_max
value: 7.683020411909859
- type: nauc_recall_at_10_std
value: 21.794022628006108
- type: nauc_recall_at_1_diff1
value: 13.270492079181698
- type: nauc_recall_at_1_max
value: -5.320050131923338
- type: nauc_recall_at_1_std
value: 9.145476528935111
- type: nauc_recall_at_20_diff1
value: 16.973199158213596
- type: nauc_recall_at_20_max
value: 10.315905315799101
- type: nauc_recall_at_20_std
value: 28.82688927043146
- type: nauc_recall_at_3_diff1
value: 10.02754671342289
- type: nauc_recall_at_3_max
value: -0.869997304449278
- type: nauc_recall_at_3_std
value: 13.603782123513424
- type: nauc_recall_at_5_diff1
value: 12.084329744277952
- type: nauc_recall_at_5_max
value: 2.074626490481952
- type: nauc_recall_at_5_std
value: 16.60820579580728
- type: ndcg_at_1
value: 14.438
- type: ndcg_at_10
value: 29.599999999999998
- type: ndcg_at_100
value: 35.062
- type: ndcg_at_1000
value: 37.266
- type: ndcg_at_20
value: 31.734
- type: ndcg_at_3
value: 22.62
- type: ndcg_at_5
value: 25.643
- type: precision_at_1
value: 14.438
- type: precision_at_10
value: 4.843999999999999
- type: precision_at_100
value: 0.748
- type: precision_at_1000
value: 0.093
- type: precision_at_20
value: 2.841
- type: precision_at_3
value: 9.578000000000001
- type: precision_at_5
value: 7.226000000000001
- type: recall_at_1
value: 14.438
- type: recall_at_10
value: 48.435
- type: recall_at_100
value: 74.822
- type: recall_at_1000
value: 92.60300000000001
- type: recall_at_20
value: 56.828
- type: recall_at_3
value: 28.733999999999998
- type: recall_at_5
value: 36.131
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 35.46255145204994
- type: v_measure
value: 35.46255145204994
- type: v_measure_std
value: 14.146815377034603
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 26.34189987196252
- type: v_measure
value: 26.34189987196252
- type: v_measure_std
value: 14.798697652139317
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 52.85912447389551
- type: map
value: 52.85912447389551
- type: mrr
value: 66.7957173635844
- type: nAUC_map_diff1
value: 11.291158204891948
- type: nAUC_map_max
value: 14.0571982637716
- type: nAUC_map_std
value: 7.658903761935503
- type: nAUC_mrr_diff1
value: 13.851083215099605
- type: nAUC_mrr_max
value: 19.44964881732576
- type: nAUC_mrr_std
value: 9.313450884539453
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 73.38282679412139
- type: cosine_spearman
value: 75.59389113278942
- type: euclidean_pearson
value: 46.852724684799625
- type: euclidean_spearman
value: 55.00125324086669
- type: main_score
value: 75.59389113278942
- type: manhattan_pearson
value: 45.7988833997748
- type: manhattan_spearman
value: 53.28856361366204
- type: pearson
value: 73.38282679412139
- type: spearman
value: 75.59389113278942
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 71.38636363636363
- type: f1
value: 71.55994805461263
- type: f1_weighted
value: 71.55994805461263
- type: main_score
value: 71.38636363636363
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 31.47309865069476
- type: v_measure
value: 31.47309865069476
- type: v_measure_std
value: 0.6360736715097297
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 22.58199120148109
- type: v_measure
value: 22.58199120148109
- type: v_measure_std
value: 1.1055877138914942
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: main_score
value: 28.518
- type: map_at_1
value: 17.355999999999998
- type: map_at_10
value: 24.007
- type: map_at_100
value: 25.016
- type: map_at_1000
value: 25.176
- type: map_at_20
value: 24.457
- type: map_at_3
value: 21.794
- type: map_at_5
value: 23.04
- type: mrr_at_1
value: 22.603719599427755
- type: mrr_at_10
value: 29.108760814769386
- type: mrr_at_100
value: 29.908376499291993
- type: mrr_at_1000
value: 29.994015228435632
- type: mrr_at_20
value: 29.504080407211593
- type: mrr_at_3
value: 27.25321888412018
- type: mrr_at_5
value: 28.233190271816884
- type: nauc_map_at_1000_diff1
value: 47.869786003745816
- type: nauc_map_at_1000_max
value: 27.54096137497838
- type: nauc_map_at_1000_std
value: -7.400161145378304
- type: nauc_map_at_100_diff1
value: 47.84118234991334
- type: nauc_map_at_100_max
value: 27.54904954135266
- type: nauc_map_at_100_std
value: -7.477944025206194
- type: nauc_map_at_10_diff1
value: 47.9735876072791
- type: nauc_map_at_10_max
value: 27.391055282545462
- type: nauc_map_at_10_std
value: -7.809853508011509
- type: nauc_map_at_1_diff1
value: 58.07291238335911
- type: nauc_map_at_1_max
value: 29.491926251716666
- type: nauc_map_at_1_std
value: -7.759388303825668
- type: nauc_map_at_20_diff1
value: 47.98612480482489
- type: nauc_map_at_20_max
value: 27.475036492625026
- type: nauc_map_at_20_std
value: -7.516599563783101
- type: nauc_map_at_3_diff1
value: 49.45201738384499
- type: nauc_map_at_3_max
value: 27.178788486813954
- type: nauc_map_at_3_std
value: -8.675581883315793
- type: nauc_map_at_5_diff1
value: 48.54428206844137
- type: nauc_map_at_5_max
value: 27.04154567160208
- type: nauc_map_at_5_std
value: -7.985715295487552
- type: nauc_mrr_at_1000_diff1
value: 46.574864956985365
- type: nauc_mrr_at_1000_max
value: 28.087519043166832
- type: nauc_mrr_at_1000_std
value: -6.451015366036509
- type: nauc_mrr_at_100_diff1
value: 46.56229597151685
- type: nauc_mrr_at_100_max
value: 28.097330034559143
- type: nauc_mrr_at_100_std
value: -6.475319386029993
- type: nauc_mrr_at_10_diff1
value: 46.72161155094325
- type: nauc_mrr_at_10_max
value: 28.136796558719162
- type: nauc_mrr_at_10_std
value: -6.804592873002316
- type: nauc_mrr_at_1_diff1
value: 55.89633445168951
- type: nauc_mrr_at_1_max
value: 30.47937590769701
- type: nauc_mrr_at_1_std
value: -7.1323488254717935
- type: nauc_mrr_at_20_diff1
value: 46.693169452232546
- type: nauc_mrr_at_20_max
value: 28.140872936089373
- type: nauc_mrr_at_20_std
value: -6.484331458969132
- type: nauc_mrr_at_3_diff1
value: 47.808872121231374
- type: nauc_mrr_at_3_max
value: 28.510278015059086
- type: nauc_mrr_at_3_std
value: -7.418313420962369
- type: nauc_mrr_at_5_diff1
value: 47.00163108991785
- type: nauc_mrr_at_5_max
value: 28.03825046154691
- type: nauc_mrr_at_5_std
value: -7.007540109114421
- type: nauc_ndcg_at_1000_diff1
value: 44.04808574593522
- type: nauc_ndcg_at_1000_max
value: 26.938526842644773
- type: nauc_ndcg_at_1000_std
value: -4.429274627595189
- type: nauc_ndcg_at_100_diff1
value: 43.556532019049136
- type: nauc_ndcg_at_100_max
value: 27.236734895647253
- type: nauc_ndcg_at_100_std
value: -5.869942528569457
- type: nauc_ndcg_at_10_diff1
value: 44.125042380771696
- type: nauc_ndcg_at_10_max
value: 27.283104729889622
- type: nauc_ndcg_at_10_std
value: -7.250075385018749
- type: nauc_ndcg_at_1_diff1
value: 55.89633445168951
- type: nauc_ndcg_at_1_max
value: 30.47937590769701
- type: nauc_ndcg_at_1_std
value: -7.1323488254717935
- type: nauc_ndcg_at_20_diff1
value: 44.41899784089651
- type: nauc_ndcg_at_20_max
value: 27.132007799782926
- type: nauc_ndcg_at_20_std
value: -6.018341603261965
- type: nauc_ndcg_at_3_diff1
value: 46.43333330203715
- type: nauc_ndcg_at_3_max
value: 26.867159196890523
- type: nauc_ndcg_at_3_std
value: -7.989033187697878
- type: nauc_ndcg_at_5_diff1
value: 44.97708505801694
- type: nauc_ndcg_at_5_max
value: 26.53850652652143
- type: nauc_ndcg_at_5_std
value: -7.429040061351512
- type: nauc_precision_at_1000_diff1
value: 10.90587664149544
- type: nauc_precision_at_1000_max
value: 0.7573834415907932
- type: nauc_precision_at_1000_std
value: 4.187233421717695
- type: nauc_precision_at_100_diff1
value: 16.70162637068987
- type: nauc_precision_at_100_max
value: 15.017760634485006
- type: nauc_precision_at_100_std
value: -1.4401234272452257
- type: nauc_precision_at_10_diff1
value: 27.11447978714884
- type: nauc_precision_at_10_max
value: 25.239563326602838
- type: nauc_precision_at_10_std
value: -5.113529015570373
- type: nauc_precision_at_1_diff1
value: 55.89633445168951
- type: nauc_precision_at_1_max
value: 30.47937590769701
- type: nauc_precision_at_1_std
value: -7.1323488254717935
- type: nauc_precision_at_20_diff1
value: 24.467549645043032
- type: nauc_precision_at_20_max
value: 23.51675958880599
- type: nauc_precision_at_20_std
value: -2.2460962355932654
- type: nauc_precision_at_3_diff1
value: 36.99310143703273
- type: nauc_precision_at_3_max
value: 24.28484429048304
- type: nauc_precision_at_3_std
value: -8.294205947711662
- type: nauc_precision_at_5_diff1
value: 32.53111998357926
- type: nauc_precision_at_5_max
value: 23.890361705484153
- type: nauc_precision_at_5_std
value: -6.119004280837306
- type: nauc_recall_at_1000_diff1
value: 26.372327810550182
- type: nauc_recall_at_1000_max
value: 17.386452637452958
- type: nauc_recall_at_1000_std
value: 17.18893134942721
- type: nauc_recall_at_100_diff1
value: 27.138092417145288
- type: nauc_recall_at_100_max
value: 22.704436530088913
- type: nauc_recall_at_100_std
value: -1.0716953053918568
- type: nauc_recall_at_10_diff1
value: 32.41154313152003
- type: nauc_recall_at_10_max
value: 23.2359443305839
- type: nauc_recall_at_10_std
value: -5.002290149250385
- type: nauc_recall_at_1_diff1
value: 58.07291238335911
- type: nauc_recall_at_1_max
value: 29.491926251716666
- type: nauc_recall_at_1_std
value: -7.759388303825668
- type: nauc_recall_at_20_diff1
value: 33.00899946361021
- type: nauc_recall_at_20_max
value: 22.82808333164438
- type: nauc_recall_at_20_std
value: -1.4141291649557204
- type: nauc_recall_at_3_diff1
value: 38.920601224546644
- type: nauc_recall_at_3_max
value: 23.89232056113095
- type: nauc_recall_at_3_std
value: -7.8481952205795995
- type: nauc_recall_at_5_diff1
value: 35.257535866907
- type: nauc_recall_at_5_max
value: 22.164920959223334
- type: nauc_recall_at_5_std
value: -5.9961105131656725
- type: ndcg_at_1
value: 22.604
- type: ndcg_at_10
value: 28.518
- type: ndcg_at_100
value: 33.442
- type: ndcg_at_1000
value: 36.691
- type: ndcg_at_20
value: 29.918
- type: ndcg_at_3
value: 25.278
- type: ndcg_at_5
value: 26.647
- type: precision_at_1
value: 22.604
- type: precision_at_10
value: 5.608
- type: precision_at_100
value: 1.0210000000000001
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 3.319
- type: precision_at_3
value: 12.589
- type: precision_at_5
value: 8.984
- type: recall_at_1
value: 17.355999999999998
- type: recall_at_10
value: 36.59
- type: recall_at_100
value: 59.38099999999999
- type: recall_at_1000
value: 81.382
- type: recall_at_20
value: 41.972
- type: recall_at_3
value: 26.183
- type: recall_at_5
value: 30.653000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: main_score
value: 24.698999999999998
- type: map_at_1
value: 16.182
- type: map_at_10
value: 21.187
- type: map_at_100
value: 22.028
- type: map_at_1000
value: 22.147
- type: map_at_20
value: 21.603
- type: map_at_3
value: 19.689999999999998
- type: map_at_5
value: 20.402
- type: mrr_at_1
value: 20.573248407643312
- type: mrr_at_10
value: 25.743301991709615
- type: mrr_at_100
value: 26.466582692758493
- type: mrr_at_1000
value: 26.54213235591294
- type: mrr_at_20
value: 26.116902322631823
- type: mrr_at_3
value: 24.32059447983014
- type: mrr_at_5
value: 24.960721868365162
- type: nauc_map_at_1000_diff1
value: 43.80371326276162
- type: nauc_map_at_1000_max
value: 10.307189223525215
- type: nauc_map_at_1000_std
value: 1.1410206622059031
- type: nauc_map_at_100_diff1
value: 43.80398291664643
- type: nauc_map_at_100_max
value: 10.294039476698776
- type: nauc_map_at_100_std
value: 1.0838400387773035
- type: nauc_map_at_10_diff1
value: 43.987106322737205
- type: nauc_map_at_10_max
value: 10.44970205412866
- type: nauc_map_at_10_std
value: 0.4638949254801207
- type: nauc_map_at_1_diff1
value: 50.262982039499725
- type: nauc_map_at_1_max
value: 11.253389960693605
- type: nauc_map_at_1_std
value: -1.1369036906864514
- type: nauc_map_at_20_diff1
value: 43.86541706002641
- type: nauc_map_at_20_max
value: 10.333426229095483
- type: nauc_map_at_20_std
value: 0.7704746445769103
- type: nauc_map_at_3_diff1
value: 44.96796698986098
- type: nauc_map_at_3_max
value: 10.573187295958576
- type: nauc_map_at_3_std
value: 0.01433549559929614
- type: nauc_map_at_5_diff1
value: 44.245307311061204
- type: nauc_map_at_5_max
value: 10.644568381319045
- type: nauc_map_at_5_std
value: -0.029700274583380155
- type: nauc_mrr_at_1000_diff1
value: 42.327672613522914
- type: nauc_mrr_at_1000_max
value: 11.6999240554554
- type: nauc_mrr_at_1000_std
value: 2.112897885106764
- type: nauc_mrr_at_100_diff1
value: 42.31642286015079
- type: nauc_mrr_at_100_max
value: 11.68787957194085
- type: nauc_mrr_at_100_std
value: 2.105610688222343
- type: nauc_mrr_at_10_diff1
value: 42.467973855007116
- type: nauc_mrr_at_10_max
value: 11.797064798974974
- type: nauc_mrr_at_10_std
value: 1.9779659522730684
- type: nauc_mrr_at_1_diff1
value: 47.71737815016663
- type: nauc_mrr_at_1_max
value: 14.383095652386146
- type: nauc_mrr_at_1_std
value: -0.07474670021285572
- type: nauc_mrr_at_20_diff1
value: 42.3995701621796
- type: nauc_mrr_at_20_max
value: 11.701616710562975
- type: nauc_mrr_at_20_std
value: 2.085148056092746
- type: nauc_mrr_at_3_diff1
value: 42.95240734385427
- type: nauc_mrr_at_3_max
value: 12.039509345325337
- type: nauc_mrr_at_3_std
value: 1.7687962861822382
- type: nauc_mrr_at_5_diff1
value: 42.694804355468115
- type: nauc_mrr_at_5_max
value: 11.929565017206377
- type: nauc_mrr_at_5_std
value: 1.694875246947431
- type: nauc_ndcg_at_1000_diff1
value: 41.00761525475331
- type: nauc_ndcg_at_1000_max
value: 9.858142865194182
- type: nauc_ndcg_at_1000_std
value: 3.670728963648605
- type: nauc_ndcg_at_100_diff1
value: 40.95449329238105
- type: nauc_ndcg_at_100_max
value: 9.326306956218327
- type: nauc_ndcg_at_100_std
value: 2.8868853641438506
- type: nauc_ndcg_at_10_diff1
value: 41.53254984337585
- type: nauc_ndcg_at_10_max
value: 10.057078591477252
- type: nauc_ndcg_at_10_std
value: 1.604308043004992
- type: nauc_ndcg_at_1_diff1
value: 47.71737815016663
- type: nauc_ndcg_at_1_max
value: 14.383095652386146
- type: nauc_ndcg_at_1_std
value: -0.07474670021285572
- type: nauc_ndcg_at_20_diff1
value: 41.440675477881086
- type: nauc_ndcg_at_20_max
value: 9.630011024652227
- type: nauc_ndcg_at_20_std
value: 2.2157732372759256
- type: nauc_ndcg_at_3_diff1
value: 42.46487256960971
- type: nauc_ndcg_at_3_max
value: 11.038048797533829
- type: nauc_ndcg_at_3_std
value: 1.2243654696200774
- type: nauc_ndcg_at_5_diff1
value: 41.83878536100888
- type: nauc_ndcg_at_5_max
value: 10.720801901432624
- type: nauc_ndcg_at_5_std
value: 0.8712149388513847
- type: nauc_precision_at_1000_diff1
value: 1.5865611853545292
- type: nauc_precision_at_1000_max
value: 6.681393322922304
- type: nauc_precision_at_1000_std
value: 14.974673269542507
- type: nauc_precision_at_100_diff1
value: 13.555729326347315
- type: nauc_precision_at_100_max
value: 7.545824391218551
- type: nauc_precision_at_100_std
value: 13.934044415661273
- type: nauc_precision_at_10_diff1
value: 25.53208157998575
- type: nauc_precision_at_10_max
value: 10.861163675534936
- type: nauc_precision_at_10_std
value: 4.879245837329693
- type: nauc_precision_at_1_diff1
value: 47.71737815016663
- type: nauc_precision_at_1_max
value: 14.383095652386146
- type: nauc_precision_at_1_std
value: -0.07474670021285572
- type: nauc_precision_at_20_diff1
value: 22.554580803838196
- type: nauc_precision_at_20_max
value: 9.173222510159171
- type: nauc_precision_at_20_std
value: 8.91005482914735
- type: nauc_precision_at_3_diff1
value: 33.10508327009392
- type: nauc_precision_at_3_max
value: 12.86002329562499
- type: nauc_precision_at_3_std
value: 2.974310102418383
- type: nauc_precision_at_5_diff1
value: 29.21043001216549
- type: nauc_precision_at_5_max
value: 11.911630406472423
- type: nauc_precision_at_5_std
value: 3.0525160145985994
- type: nauc_recall_at_1000_diff1
value: 30.47927917267733
- type: nauc_recall_at_1000_max
value: 7.6799659504807245
- type: nauc_recall_at_1000_std
value: 12.501272715675682
- type: nauc_recall_at_100_diff1
value: 31.37456182815277
- type: nauc_recall_at_100_max
value: 4.3121178276146
- type: nauc_recall_at_100_std
value: 6.610653786295896
- type: nauc_recall_at_10_diff1
value: 35.70919804366768
- type: nauc_recall_at_10_max
value: 7.164595283036483
- type: nauc_recall_at_10_std
value: 2.511197530002145
- type: nauc_recall_at_1_diff1
value: 50.262982039499725
- type: nauc_recall_at_1_max
value: 11.253389960693605
- type: nauc_recall_at_1_std
value: -1.1369036906864514
- type: nauc_recall_at_20_diff1
value: 34.61353209754079
- type: nauc_recall_at_20_max
value: 5.959396627193594
- type: nauc_recall_at_20_std
value: 4.38802472107702
- type: nauc_recall_at_3_diff1
value: 38.54587550067196
- type: nauc_recall_at_3_max
value: 8.303476446370226
- type: nauc_recall_at_3_std
value: 0.918233189682653
- type: nauc_recall_at_5_diff1
value: 36.97453761390672
- type: nauc_recall_at_5_max
value: 8.452744877863443
- type: nauc_recall_at_5_std
value: 0.31182896781455743
- type: ndcg_at_1
value: 20.573
- type: ndcg_at_10
value: 24.698999999999998
- type: ndcg_at_100
value: 28.626
- type: ndcg_at_1000
value: 31.535999999999998
- type: ndcg_at_20
value: 25.971
- type: ndcg_at_3
value: 22.400000000000002
- type: ndcg_at_5
value: 23.153000000000002
- type: precision_at_1
value: 20.573
- type: precision_at_10
value: 4.682
- type: precision_at_100
value: 0.835
- type: precision_at_1000
value: 0.132
- type: precision_at_20
value: 2.806
- type: precision_at_3
value: 10.955
- type: precision_at_5
value: 7.580000000000001
- type: recall_at_1
value: 16.182
- type: recall_at_10
value: 30.410999999999998
- type: recall_at_100
value: 47.94
- type: recall_at_1000
value: 68.073
- type: recall_at_20
value: 35.241
- type: recall_at_3
value: 23.247999999999998
- type: recall_at_5
value: 25.611
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: main_score
value: 34.837
- type: map_at_1
value: 21.804000000000002
- type: map_at_10
value: 30.117
- type: map_at_100
value: 31.022
- type: map_at_1000
value: 31.123
- type: map_at_20
value: 30.592999999999996
- type: map_at_3
value: 27.485
- type: map_at_5
value: 29.015
- type: mrr_at_1
value: 25.391849529780565
- type: mrr_at_10
value: 33.06018311190724
- type: mrr_at_100
value: 33.86542467064614
- type: mrr_at_1000
value: 33.93133191694629
- type: mrr_at_20
value: 33.48454644646544
- type: mrr_at_3
value: 30.700104493207924
- type: mrr_at_5
value: 32.12016718913267
- type: nauc_map_at_1000_diff1
value: 45.5807513160407
- type: nauc_map_at_1000_max
value: 21.915072082554456
- type: nauc_map_at_1000_std
value: -7.325013122158723
- type: nauc_map_at_100_diff1
value: 45.54127845733458
- type: nauc_map_at_100_max
value: 21.90856139725234
- type: nauc_map_at_100_std
value: -7.378234997163831
- type: nauc_map_at_10_diff1
value: 45.56616787985884
- type: nauc_map_at_10_max
value: 21.977377645141427
- type: nauc_map_at_10_std
value: -7.953791461768689
- type: nauc_map_at_1_diff1
value: 50.13523755859727
- type: nauc_map_at_1_max
value: 22.079872106357826
- type: nauc_map_at_1_std
value: -10.517989063520115
- type: nauc_map_at_20_diff1
value: 45.47328572468456
- type: nauc_map_at_20_max
value: 21.907938618532206
- type: nauc_map_at_20_std
value: -7.654370878334637
- type: nauc_map_at_3_diff1
value: 46.64296035971972
- type: nauc_map_at_3_max
value: 21.55745539420763
- type: nauc_map_at_3_std
value: -9.322387704640397
- type: nauc_map_at_5_diff1
value: 45.87814328869891
- type: nauc_map_at_5_max
value: 21.97551177369846
- type: nauc_map_at_5_std
value: -8.442300800960686
- type: nauc_mrr_at_1000_diff1
value: 46.21214184609282
- type: nauc_mrr_at_1000_max
value: 24.121552423232732
- type: nauc_mrr_at_1000_std
value: -5.197081534530456
- type: nauc_mrr_at_100_diff1
value: 46.192209374562324
- type: nauc_mrr_at_100_max
value: 24.117295080133403
- type: nauc_mrr_at_100_std
value: -5.20106321371411
- type: nauc_mrr_at_10_diff1
value: 46.214433219910426
- type: nauc_mrr_at_10_max
value: 24.337609381566494
- type: nauc_mrr_at_10_std
value: -5.539128286307364
- type: nauc_mrr_at_1_diff1
value: 52.2527723494356
- type: nauc_mrr_at_1_max
value: 25.421197106410293
- type: nauc_mrr_at_1_std
value: -7.805349072851469
- type: nauc_mrr_at_20_diff1
value: 46.10135736013422
- type: nauc_mrr_at_20_max
value: 24.17582977429519
- type: nauc_mrr_at_20_std
value: -5.3844233771043255
- type: nauc_mrr_at_3_diff1
value: 47.089100932315574
- type: nauc_mrr_at_3_max
value: 24.589442349183855
- type: nauc_mrr_at_3_std
value: -6.861652459272909
- type: nauc_mrr_at_5_diff1
value: 46.50908152902759
- type: nauc_mrr_at_5_max
value: 24.44902343275474
- type: nauc_mrr_at_5_std
value: -5.90486733129187
- type: nauc_ndcg_at_1000_diff1
value: 44.01232290993056
- type: nauc_ndcg_at_1000_max
value: 21.7547520856293
- type: nauc_ndcg_at_1000_std
value: -2.8320334767530118
- type: nauc_ndcg_at_100_diff1
value: 43.333079641772805
- type: nauc_ndcg_at_100_max
value: 21.696558885860842
- type: nauc_ndcg_at_100_std
value: -3.8168722593708466
- type: nauc_ndcg_at_10_diff1
value: 43.55004080963945
- type: nauc_ndcg_at_10_max
value: 22.437821635174988
- type: nauc_ndcg_at_10_std
value: -6.156552890106106
- type: nauc_ndcg_at_1_diff1
value: 52.2527723494356
- type: nauc_ndcg_at_1_max
value: 25.421197106410293
- type: nauc_ndcg_at_1_std
value: -7.805349072851469
- type: nauc_ndcg_at_20_diff1
value: 43.09035864009835
- type: nauc_ndcg_at_20_max
value: 21.94863122459976
- type: nauc_ndcg_at_20_std
value: -5.4130728717458965
- type: nauc_ndcg_at_3_diff1
value: 45.44710289580689
- type: nauc_ndcg_at_3_max
value: 22.400341906939868
- type: nauc_ndcg_at_3_std
value: -8.619757656107849
- type: nauc_ndcg_at_5_diff1
value: 44.1896655275832
- type: nauc_ndcg_at_5_max
value: 22.587591758610802
- type: nauc_ndcg_at_5_std
value: -7.2269233073063575
- type: nauc_precision_at_1000_diff1
value: 10.365353118490535
- type: nauc_precision_at_1000_max
value: 7.8252547949888545
- type: nauc_precision_at_1000_std
value: 26.55091491372318
- type: nauc_precision_at_100_diff1
value: 21.049854477557055
- type: nauc_precision_at_100_max
value: 16.20485886511922
- type: nauc_precision_at_100_std
value: 15.969890079702717
- type: nauc_precision_at_10_diff1
value: 32.52426180873231
- type: nauc_precision_at_10_max
value: 22.685662047893707
- type: nauc_precision_at_10_std
value: 1.4729404419557324
- type: nauc_precision_at_1_diff1
value: 52.2527723494356
- type: nauc_precision_at_1_max
value: 25.421197106410293
- type: nauc_precision_at_1_std
value: -7.805349072851469
- type: nauc_precision_at_20_diff1
value: 28.090691152210972
- type: nauc_precision_at_20_max
value: 20.90743423717082
- type: nauc_precision_at_20_std
value: 4.817506381512236
- type: nauc_precision_at_3_diff1
value: 40.80538406829336
- type: nauc_precision_at_3_max
value: 23.323105131070363
- type: nauc_precision_at_3_std
value: -5.540716529624683
- type: nauc_precision_at_5_diff1
value: 36.58280618039231
- type: nauc_precision_at_5_max
value: 23.634816479662742
- type: nauc_precision_at_5_std
value: -1.7820384730109589
- type: nauc_recall_at_1000_diff1
value: 34.29190280951983
- type: nauc_recall_at_1000_max
value: 13.798111582798564
- type: nauc_recall_at_1000_std
value: 28.5351988388723
- type: nauc_recall_at_100_diff1
value: 32.064087882086476
- type: nauc_recall_at_100_max
value: 16.090743768333688
- type: nauc_recall_at_100_std
value: 8.307894883910041
- type: nauc_recall_at_10_diff1
value: 35.79378711197085
- type: nauc_recall_at_10_max
value: 20.68575839918982
- type: nauc_recall_at_10_std
value: -2.946830801840792
- type: nauc_recall_at_1_diff1
value: 50.13523755859727
- type: nauc_recall_at_1_max
value: 22.079872106357826
- type: nauc_recall_at_1_std
value: -10.517989063520115
- type: nauc_recall_at_20_diff1
value: 33.44790152149905
- type: nauc_recall_at_20_max
value: 18.594618679781895
- type: nauc_recall_at_20_std
value: -0.31826446038001266
- type: nauc_recall_at_3_diff1
value: 40.94878372307589
- type: nauc_recall_at_3_max
value: 20.42680666854128
- type: nauc_recall_at_3_std
value: -8.903430047857414
- type: nauc_recall_at_5_diff1
value: 37.927274464064844
- type: nauc_recall_at_5_max
value: 21.06930934356292
- type: nauc_recall_at_5_std
value: -5.831090950499156
- type: ndcg_at_1
value: 25.392
- type: ndcg_at_10
value: 34.837
- type: ndcg_at_100
value: 39.291
- type: ndcg_at_1000
value: 41.676
- type: ndcg_at_20
value: 36.416
- type: ndcg_at_3
value: 29.958000000000002
- type: ndcg_at_5
value: 32.435
- type: precision_at_1
value: 25.392
- type: precision_at_10
value: 5.806
- type: precision_at_100
value: 0.8789999999999999
- type: precision_at_1000
value: 0.117
- type: precision_at_20
value: 3.3320000000000003
- type: precision_at_3
value: 13.501
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 21.804000000000002
- type: recall_at_10
value: 46.367999999999995
- type: recall_at_100
value: 66.526
- type: recall_at_1000
value: 83.795
- type: recall_at_20
value: 52.201
- type: recall_at_3
value: 33.351
- type: recall_at_5
value: 39.345
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: main_score
value: 15.889000000000001
- type: map_at_1
value: 9.472999999999999
- type: map_at_10
value: 13.439
- type: map_at_100
value: 14.165
- type: map_at_1000
value: 14.267
- type: map_at_20
value: 13.778000000000002
- type: map_at_3
value: 12.136
- type: map_at_5
value: 12.803
- type: mrr_at_1
value: 10.056497175141244
- type: mrr_at_10
value: 14.27383194332347
- type: mrr_at_100
value: 15.012089041940587
- type: mrr_at_1000
value: 15.104068046441926
- type: mrr_at_20
value: 14.623929801790952
- type: mrr_at_3
value: 12.86252354048964
- type: mrr_at_5
value: 13.55743879472693
- type: nauc_map_at_1000_diff1
value: 30.334633457872854
- type: nauc_map_at_1000_max
value: 16.879524053860088
- type: nauc_map_at_1000_std
value: -11.608379714877143
- type: nauc_map_at_100_diff1
value: 30.315313717026044
- type: nauc_map_at_100_max
value: 16.85237939531867
- type: nauc_map_at_100_std
value: -11.622151859571831
- type: nauc_map_at_10_diff1
value: 30.914146463660085
- type: nauc_map_at_10_max
value: 16.957132658303777
- type: nauc_map_at_10_std
value: -11.731838090023269
- type: nauc_map_at_1_diff1
value: 38.059077642105095
- type: nauc_map_at_1_max
value: 17.258898457644563
- type: nauc_map_at_1_std
value: -15.1141417910556
- type: nauc_map_at_20_diff1
value: 30.657379748220464
- type: nauc_map_at_20_max
value: 16.728415773059652
- type: nauc_map_at_20_std
value: -11.58808790930077
- type: nauc_map_at_3_diff1
value: 33.46033892507575
- type: nauc_map_at_3_max
value: 17.063496859962274
- type: nauc_map_at_3_std
value: -12.540868416387656
- type: nauc_map_at_5_diff1
value: 31.833328131003665
- type: nauc_map_at_5_max
value: 16.85136559752421
- type: nauc_map_at_5_std
value: -12.482629966798948
- type: nauc_mrr_at_1000_diff1
value: 29.41507065744396
- type: nauc_mrr_at_1000_max
value: 18.49824554052624
- type: nauc_mrr_at_1000_std
value: -10.326025120569037
- type: nauc_mrr_at_100_diff1
value: 29.379801930215717
- type: nauc_mrr_at_100_max
value: 18.488234248143247
- type: nauc_mrr_at_100_std
value: -10.335639545339422
- type: nauc_mrr_at_10_diff1
value: 29.91432794618661
- type: nauc_mrr_at_10_max
value: 18.724879448569546
- type: nauc_mrr_at_10_std
value: -10.404101745775053
- type: nauc_mrr_at_1_diff1
value: 37.90615317749033
- type: nauc_mrr_at_1_max
value: 18.93535243576158
- type: nauc_mrr_at_1_std
value: -13.352192729903559
- type: nauc_mrr_at_20_diff1
value: 29.578605690031328
- type: nauc_mrr_at_20_max
value: 18.407726379219987
- type: nauc_mrr_at_20_std
value: -10.298490989990624
- type: nauc_mrr_at_3_diff1
value: 32.02343883506372
- type: nauc_mrr_at_3_max
value: 18.633783635235847
- type: nauc_mrr_at_3_std
value: -11.228435347275935
- type: nauc_mrr_at_5_diff1
value: 30.69962523728713
- type: nauc_mrr_at_5_max
value: 18.72446829188985
- type: nauc_mrr_at_5_std
value: -11.138830180701982
- type: nauc_ndcg_at_1000_diff1
value: 25.382297853226866
- type: nauc_ndcg_at_1000_max
value: 17.43716304218148
- type: nauc_ndcg_at_1000_std
value: -10.190696887337486
- type: nauc_ndcg_at_100_diff1
value: 24.735480242752285
- type: nauc_ndcg_at_100_max
value: 16.71943454741711
- type: nauc_ndcg_at_100_std
value: -9.924909206899162
- type: nauc_ndcg_at_10_diff1
value: 27.358228148721842
- type: nauc_ndcg_at_10_max
value: 16.922883804711265
- type: nauc_ndcg_at_10_std
value: -10.016699536056024
- type: nauc_ndcg_at_1_diff1
value: 37.90615317749033
- type: nauc_ndcg_at_1_max
value: 18.93535243576158
- type: nauc_ndcg_at_1_std
value: -13.352192729903559
- type: nauc_ndcg_at_20_diff1
value: 26.463382227572517
- type: nauc_ndcg_at_20_max
value: 16.22031339406569
- type: nauc_ndcg_at_20_std
value: -9.66724467521929
- type: nauc_ndcg_at_3_diff1
value: 31.53806923827287
- type: nauc_ndcg_at_3_max
value: 17.049495750298107
- type: nauc_ndcg_at_3_std
value: -11.58504512374531
- type: nauc_ndcg_at_5_diff1
value: 29.10131680215961
- type: nauc_ndcg_at_5_max
value: 16.786497467751296
- type: nauc_ndcg_at_5_std
value: -11.594059282963107
- type: nauc_precision_at_1000_diff1
value: 5.724183211042247
- type: nauc_precision_at_1000_max
value: 22.481314169026508
- type: nauc_precision_at_1000_std
value: -2.4780053135041844
- type: nauc_precision_at_100_diff1
value: 8.982535905232872
- type: nauc_precision_at_100_max
value: 19.23627381958997
- type: nauc_precision_at_100_std
value: -6.469375758025859
- type: nauc_precision_at_10_diff1
value: 18.446003934213422
- type: nauc_precision_at_10_max
value: 18.317564090743698
- type: nauc_precision_at_10_std
value: -5.258776187738409
- type: nauc_precision_at_1_diff1
value: 37.90615317749033
- type: nauc_precision_at_1_max
value: 18.93535243576158
- type: nauc_precision_at_1_std
value: -13.352192729903559
- type: nauc_precision_at_20_diff1
value: 16.32313052813914
- type: nauc_precision_at_20_max
value: 16.623118796672443
- type: nauc_precision_at_20_std
value: -5.178876021009233
- type: nauc_precision_at_3_diff1
value: 28.153843298140956
- type: nauc_precision_at_3_max
value: 18.261053599119773
- type: nauc_precision_at_3_std
value: -8.633656740784398
- type: nauc_precision_at_5_diff1
value: 22.30147327973116
- type: nauc_precision_at_5_max
value: 17.724668119940276
- type: nauc_precision_at_5_std
value: -9.147827083942738
- type: nauc_recall_at_1000_diff1
value: 12.936742845571006
- type: nauc_recall_at_1000_max
value: 17.728147389670845
- type: nauc_recall_at_1000_std
value: -10.026543773605697
- type: nauc_recall_at_100_diff1
value: 12.196046010910255
- type: nauc_recall_at_100_max
value: 14.320146451643033
- type: nauc_recall_at_100_std
value: -7.059868030131276
- type: nauc_recall_at_10_diff1
value: 19.81974166368456
- type: nauc_recall_at_10_max
value: 15.137717469839288
- type: nauc_recall_at_10_std
value: -6.894031649742936
- type: nauc_recall_at_1_diff1
value: 38.059077642105095
- type: nauc_recall_at_1_max
value: 17.258898457644563
- type: nauc_recall_at_1_std
value: -15.1141417910556
- type: nauc_recall_at_20_diff1
value: 17.87014099435801
- type: nauc_recall_at_20_max
value: 13.410148544576403
- type: nauc_recall_at_20_std
value: -6.139892629545985
- type: nauc_recall_at_3_diff1
value: 27.941355405054267
- type: nauc_recall_at_3_max
value: 15.300277815129304
- type: nauc_recall_at_3_std
value: -10.440312722587832
- type: nauc_recall_at_5_diff1
value: 23.715987229368274
- type: nauc_recall_at_5_max
value: 15.063760707410282
- type: nauc_recall_at_5_std
value: -10.521011536014003
- type: ndcg_at_1
value: 10.056
- type: ndcg_at_10
value: 15.889000000000001
- type: ndcg_at_100
value: 20.007
- type: ndcg_at_1000
value: 23.324
- type: ndcg_at_20
value: 17.127
- type: ndcg_at_3
value: 13.171
- type: ndcg_at_5
value: 14.358
- type: precision_at_1
value: 10.056
- type: precision_at_10
value: 2.588
- type: precision_at_100
value: 0.49300000000000005
- type: precision_at_1000
value: 0.083
- type: precision_at_20
value: 1.559
- type: precision_at_3
value: 5.612
- type: precision_at_5
value: 4.0680000000000005
- type: recall_at_1
value: 9.472999999999999
- type: recall_at_10
value: 22.676
- type: recall_at_100
value: 42.672
- type: recall_at_1000
value: 68.939
- type: recall_at_20
value: 27.462999999999997
- type: recall_at_3
value: 15.383
- type: recall_at_5
value: 18.174
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: main_score
value: 11.0
- type: map_at_1
value: 5.148
- type: map_at_10
value: 8.469999999999999
- type: map_at_100
value: 9.212
- type: map_at_1000
value: 9.322
- type: map_at_20
value: 8.808
- type: map_at_3
value: 7.131
- type: map_at_5
value: 7.815999999999999
- type: mrr_at_1
value: 6.343283582089552
- type: mrr_at_10
value: 10.370913290689412
- type: mrr_at_100
value: 11.152489765865017
- type: mrr_at_1000
value: 11.240647288895591
- type: mrr_at_20
value: 10.741514212977526
- type: mrr_at_3
value: 8.872305140961858
- type: mrr_at_5
value: 9.631011608623549
- type: nauc_map_at_1000_diff1
value: 23.766626012326586
- type: nauc_map_at_1000_max
value: 12.653376257429583
- type: nauc_map_at_1000_std
value: 8.616529960924888
- type: nauc_map_at_100_diff1
value: 23.738827084996768
- type: nauc_map_at_100_max
value: 12.649650411660854
- type: nauc_map_at_100_std
value: 8.541383664809612
- type: nauc_map_at_10_diff1
value: 23.999578907568026
- type: nauc_map_at_10_max
value: 12.71263636252209
- type: nauc_map_at_10_std
value: 7.591195966672301
- type: nauc_map_at_1_diff1
value: 35.57446018071185
- type: nauc_map_at_1_max
value: 14.079653770667337
- type: nauc_map_at_1_std
value: 11.69336879118923
- type: nauc_map_at_20_diff1
value: 24.160966681198037
- type: nauc_map_at_20_max
value: 12.874042661878926
- type: nauc_map_at_20_std
value: 8.47225999927236
- type: nauc_map_at_3_diff1
value: 26.388037294578943
- type: nauc_map_at_3_max
value: 12.836707260430186
- type: nauc_map_at_3_std
value: 6.661759987628506
- type: nauc_map_at_5_diff1
value: 24.670961314269608
- type: nauc_map_at_5_max
value: 12.93683340709218
- type: nauc_map_at_5_std
value: 6.6199426801021435
- type: nauc_mrr_at_1000_diff1
value: 23.216930411387928
- type: nauc_mrr_at_1000_max
value: 15.19292342533299
- type: nauc_mrr_at_1000_std
value: 8.443837847880454
- type: nauc_mrr_at_100_diff1
value: 23.191640457286802
- type: nauc_mrr_at_100_max
value: 15.176060930237956
- type: nauc_mrr_at_100_std
value: 8.438353759551372
- type: nauc_mrr_at_10_diff1
value: 23.641665699722576
- type: nauc_mrr_at_10_max
value: 15.363771027025361
- type: nauc_mrr_at_10_std
value: 7.6943977364817675
- type: nauc_mrr_at_1_diff1
value: 34.13967231695169
- type: nauc_mrr_at_1_max
value: 18.217995055452356
- type: nauc_mrr_at_1_std
value: 11.691078655411745
- type: nauc_mrr_at_20_diff1
value: 23.584124655747633
- type: nauc_mrr_at_20_max
value: 15.504561511128212
- type: nauc_mrr_at_20_std
value: 8.487309205927613
- type: nauc_mrr_at_3_diff1
value: 26.239880657367205
- type: nauc_mrr_at_3_max
value: 15.653548540177347
- type: nauc_mrr_at_3_std
value: 6.349852805707984
- type: nauc_mrr_at_5_diff1
value: 23.976240360223915
- type: nauc_mrr_at_5_max
value: 15.744338647107542
- type: nauc_mrr_at_5_std
value: 6.487124576469712
- type: nauc_ndcg_at_1000_diff1
value: 19.496197697682945
- type: nauc_ndcg_at_1000_max
value: 12.101852407794244
- type: nauc_ndcg_at_1000_std
value: 12.016860314478954
- type: nauc_ndcg_at_100_diff1
value: 18.9745151618046
- type: nauc_ndcg_at_100_max
value: 11.815079877327287
- type: nauc_ndcg_at_100_std
value: 10.61036714041141
- type: nauc_ndcg_at_10_diff1
value: 20.49507024120394
- type: nauc_ndcg_at_10_max
value: 13.081108599437465
- type: nauc_ndcg_at_10_std
value: 7.930411944011889
- type: nauc_ndcg_at_1_diff1
value: 34.13967231695169
- type: nauc_ndcg_at_1_max
value: 18.217995055452356
- type: nauc_ndcg_at_1_std
value: 11.691078655411745
- type: nauc_ndcg_at_20_diff1
value: 20.839258395401707
- type: nauc_ndcg_at_20_max
value: 13.485012044482616
- type: nauc_ndcg_at_20_std
value: 10.423314754071841
- type: nauc_ndcg_at_3_diff1
value: 24.534248413854158
- type: nauc_ndcg_at_3_max
value: 13.612373481617901
- type: nauc_ndcg_at_3_std
value: 5.122655306518725
- type: nauc_ndcg_at_5_diff1
value: 21.45736115604528
- type: nauc_ndcg_at_5_max
value: 13.50049057414957
- type: nauc_ndcg_at_5_std
value: 5.5599020003710375
- type: nauc_precision_at_1000_diff1
value: 5.214729837045339
- type: nauc_precision_at_1000_max
value: 7.049726610933547
- type: nauc_precision_at_1000_std
value: 10.217710184510343
- type: nauc_precision_at_100_diff1
value: 10.428281377918521
- type: nauc_precision_at_100_max
value: 9.592496174158226
- type: nauc_precision_at_100_std
value: 11.524579687966593
- type: nauc_precision_at_10_diff1
value: 13.144126104006663
- type: nauc_precision_at_10_max
value: 12.791519232802509
- type: nauc_precision_at_10_std
value: 7.117254065134753
- type: nauc_precision_at_1_diff1
value: 34.13967231695169
- type: nauc_precision_at_1_max
value: 18.217995055452356
- type: nauc_precision_at_1_std
value: 11.691078655411745
- type: nauc_precision_at_20_diff1
value: 14.534665391717477
- type: nauc_precision_at_20_max
value: 13.373720011165052
- type: nauc_precision_at_20_std
value: 12.735872233304013
- type: nauc_precision_at_3_diff1
value: 20.050332454808
- type: nauc_precision_at_3_max
value: 14.287141036751699
- type: nauc_precision_at_3_std
value: 2.1412848715847774
- type: nauc_precision_at_5_diff1
value: 16.547335020939435
- type: nauc_precision_at_5_max
value: 14.007790386514285
- type: nauc_precision_at_5_std
value: 2.0821824154130835
- type: nauc_recall_at_1000_diff1
value: 12.811540518810224
- type: nauc_recall_at_1000_max
value: 8.292364898702107
- type: nauc_recall_at_1000_std
value: 21.172583907189164
- type: nauc_recall_at_100_diff1
value: 10.763207100689536
- type: nauc_recall_at_100_max
value: 7.433707421662763
- type: nauc_recall_at_100_std
value: 13.860488374098953
- type: nauc_recall_at_10_diff1
value: 14.171919964914773
- type: nauc_recall_at_10_max
value: 12.3310517183378
- type: nauc_recall_at_10_std
value: 8.627373443421941
- type: nauc_recall_at_1_diff1
value: 35.57446018071185
- type: nauc_recall_at_1_max
value: 14.079653770667337
- type: nauc_recall_at_1_std
value: 11.69336879118923
- type: nauc_recall_at_20_diff1
value: 15.254229786832758
- type: nauc_recall_at_20_max
value: 12.944155764013084
- type: nauc_recall_at_20_std
value: 13.947428525952118
- type: nauc_recall_at_3_diff1
value: 19.723050472865584
- type: nauc_recall_at_3_max
value: 12.208432070640235
- type: nauc_recall_at_3_std
value: 3.2560341221626357
- type: nauc_recall_at_5_diff1
value: 14.200616898717133
- type: nauc_recall_at_5_max
value: 12.262563917077088
- type: nauc_recall_at_5_std
value: 4.115380825048154
- type: ndcg_at_1
value: 6.343
- type: ndcg_at_10
value: 11.0
- type: ndcg_at_100
value: 15.332
- type: ndcg_at_1000
value: 18.505
- type: ndcg_at_20
value: 12.280000000000001
- type: ndcg_at_3
value: 8.297
- type: ndcg_at_5
value: 9.482
- type: precision_at_1
value: 6.343
- type: precision_at_10
value: 2.251
- type: precision_at_100
value: 0.516
- type: precision_at_1000
value: 0.091
- type: precision_at_20
value: 1.437
- type: precision_at_3
value: 4.104
- type: precision_at_5
value: 3.234
- type: recall_at_1
value: 5.148
- type: recall_at_10
value: 16.955000000000002
- type: recall_at_100
value: 37.295
- type: recall_at_1000
value: 60.681
- type: recall_at_20
value: 21.847
- type: recall_at_3
value: 9.735000000000001
- type: recall_at_5
value: 12.595999999999998
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: main_score
value: 22.671
- type: map_at_1
value: 13.99
- type: map_at_10
value: 19.16
- type: map_at_100
value: 20.247999999999998
- type: map_at_1000
value: 20.392
- type: map_at_20
value: 19.741
- type: map_at_3
value: 17.527
- type: map_at_5
value: 18.431
- type: mrr_at_1
value: 17.035611164581326
- type: mrr_at_10
value: 22.920886994515485
- type: mrr_at_100
value: 23.890327247971815
- type: mrr_at_1000
value: 23.98416758924587
- type: mrr_at_20
value: 23.478953217825296
- type: mrr_at_3
value: 21.158164902149515
- type: mrr_at_5
value: 22.154315046519095
- type: nauc_map_at_1000_diff1
value: 40.20942586785694
- type: nauc_map_at_1000_max
value: 19.62019855432636
- type: nauc_map_at_1000_std
value: -6.491186533676609
- type: nauc_map_at_100_diff1
value: 40.20129829669095
- type: nauc_map_at_100_max
value: 19.550525879706164
- type: nauc_map_at_100_std
value: -6.557075399749154
- type: nauc_map_at_10_diff1
value: 40.467281905527244
- type: nauc_map_at_10_max
value: 19.43593214249552
- type: nauc_map_at_10_std
value: -7.194947764095804
- type: nauc_map_at_1_diff1
value: 49.99688096548819
- type: nauc_map_at_1_max
value: 22.94216810488251
- type: nauc_map_at_1_std
value: -8.778905956805103
- type: nauc_map_at_20_diff1
value: 40.23228770570461
- type: nauc_map_at_20_max
value: 19.53074463716011
- type: nauc_map_at_20_std
value: -6.93310286275384
- type: nauc_map_at_3_diff1
value: 42.462368040248364
- type: nauc_map_at_3_max
value: 20.15932725435944
- type: nauc_map_at_3_std
value: -7.524246324724258
- type: nauc_map_at_5_diff1
value: 40.874264936734775
- type: nauc_map_at_5_max
value: 19.741200249921643
- type: nauc_map_at_5_std
value: -7.301832585861893
- type: nauc_mrr_at_1000_diff1
value: 36.93104632204301
- type: nauc_mrr_at_1000_max
value: 22.851961632870285
- type: nauc_mrr_at_1000_std
value: -6.050824088401521
- type: nauc_mrr_at_100_diff1
value: 36.90287005748533
- type: nauc_mrr_at_100_max
value: 22.838209556819866
- type: nauc_mrr_at_100_std
value: -6.064342814003103
- type: nauc_mrr_at_10_diff1
value: 36.93428786395009
- type: nauc_mrr_at_10_max
value: 22.89500409199853
- type: nauc_mrr_at_10_std
value: -6.581360935957288
- type: nauc_mrr_at_1_diff1
value: 46.11618926628157
- type: nauc_mrr_at_1_max
value: 27.154042077346617
- type: nauc_mrr_at_1_std
value: -7.408231463170914
- type: nauc_mrr_at_20_diff1
value: 36.964474819881275
- type: nauc_mrr_at_20_max
value: 22.9072805988528
- type: nauc_mrr_at_20_std
value: -6.306124053032698
- type: nauc_mrr_at_3_diff1
value: 38.9506895551962
- type: nauc_mrr_at_3_max
value: 24.218011709989156
- type: nauc_mrr_at_3_std
value: -6.7973818662665995
- type: nauc_mrr_at_5_diff1
value: 37.42273475691658
- type: nauc_mrr_at_5_max
value: 23.270403975249025
- type: nauc_mrr_at_5_std
value: -6.745230968723559
- type: nauc_ndcg_at_1000_diff1
value: 35.79628671266452
- type: nauc_ndcg_at_1000_max
value: 19.26627785321929
- type: nauc_ndcg_at_1000_std
value: -2.569388520550047
- type: nauc_ndcg_at_100_diff1
value: 35.768798848849585
- type: nauc_ndcg_at_100_max
value: 18.377203611905518
- type: nauc_ndcg_at_100_std
value: -3.3799540521604636
- type: nauc_ndcg_at_10_diff1
value: 36.510770710845314
- type: nauc_ndcg_at_10_max
value: 18.461708026439457
- type: nauc_ndcg_at_10_std
value: -6.491226580238661
- type: nauc_ndcg_at_1_diff1
value: 46.11618926628157
- type: nauc_ndcg_at_1_max
value: 27.154042077346617
- type: nauc_ndcg_at_1_std
value: -7.408231463170914
- type: nauc_ndcg_at_20_diff1
value: 36.070548441535124
- type: nauc_ndcg_at_20_max
value: 18.42396263230167
- type: nauc_ndcg_at_20_std
value: -5.61879907431204
- type: nauc_ndcg_at_3_diff1
value: 39.41782933627965
- type: nauc_ndcg_at_3_max
value: 21.047162846620946
- type: nauc_ndcg_at_3_std
value: -6.840755018811107
- type: nauc_ndcg_at_5_diff1
value: 37.17959347569529
- type: nauc_ndcg_at_5_max
value: 19.680732729842823
- type: nauc_ndcg_at_5_std
value: -6.707637987639474
- type: nauc_precision_at_1000_diff1
value: 0.49247246717968796
- type: nauc_precision_at_1000_max
value: 14.62495465729825
- type: nauc_precision_at_1000_std
value: 9.669209534147573
- type: nauc_precision_at_100_diff1
value: 11.5414175528365
- type: nauc_precision_at_100_max
value: 18.504188333036936
- type: nauc_precision_at_100_std
value: 6.194157348432716
- type: nauc_precision_at_10_diff1
value: 23.453163613392075
- type: nauc_precision_at_10_max
value: 20.06043852181855
- type: nauc_precision_at_10_std
value: -3.1717316064536836
- type: nauc_precision_at_1_diff1
value: 46.11618926628157
- type: nauc_precision_at_1_max
value: 27.154042077346617
- type: nauc_precision_at_1_std
value: -7.408231463170914
- type: nauc_precision_at_20_diff1
value: 20.708737669355788
- type: nauc_precision_at_20_max
value: 20.584185448256555
- type: nauc_precision_at_20_std
value: -0.7112923884678451
- type: nauc_precision_at_3_diff1
value: 31.594155528934703
- type: nauc_precision_at_3_max
value: 21.789282355041912
- type: nauc_precision_at_3_std
value: -3.9339318840163666
- type: nauc_precision_at_5_diff1
value: 26.10899513884069
- type: nauc_precision_at_5_max
value: 21.193775642825518
- type: nauc_precision_at_5_std
value: -4.04371021464142
- type: nauc_recall_at_1000_diff1
value: 19.475747590569128
- type: nauc_recall_at_1000_max
value: 10.531569131631349
- type: nauc_recall_at_1000_std
value: 20.376238758750535
- type: nauc_recall_at_100_diff1
value: 24.539661771959622
- type: nauc_recall_at_100_max
value: 8.849671325401761
- type: nauc_recall_at_100_std
value: 8.155353459396068
- type: nauc_recall_at_10_diff1
value: 27.94562559317398
- type: nauc_recall_at_10_max
value: 12.341122611885497
- type: nauc_recall_at_10_std
value: -4.945672050235199
- type: nauc_recall_at_1_diff1
value: 49.99688096548819
- type: nauc_recall_at_1_max
value: 22.94216810488251
- type: nauc_recall_at_1_std
value: -8.778905956805103
- type: nauc_recall_at_20_diff1
value: 26.721295492823483
- type: nauc_recall_at_20_max
value: 11.354327070591353
- type: nauc_recall_at_20_std
value: -2.0775832506536145
- type: nauc_recall_at_3_diff1
value: 35.18424498331245
- type: nauc_recall_at_3_max
value: 16.737206820951112
- type: nauc_recall_at_3_std
value: -6.362047908804104
- type: nauc_recall_at_5_diff1
value: 30.146390141726233
- type: nauc_recall_at_5_max
value: 14.718619551703243
- type: nauc_recall_at_5_std
value: -5.7544278604675165
- type: ndcg_at_1
value: 17.036
- type: ndcg_at_10
value: 22.671
- type: ndcg_at_100
value: 28.105999999999998
- type: ndcg_at_1000
value: 31.432
- type: ndcg_at_20
value: 24.617
- type: ndcg_at_3
value: 19.787
- type: ndcg_at_5
value: 21.122
- type: precision_at_1
value: 17.036
- type: precision_at_10
value: 4.09
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.131
- type: precision_at_20
value: 2.6470000000000002
- type: precision_at_3
value: 9.208
- type: precision_at_5
value: 6.660000000000001
- type: recall_at_1
value: 13.99
- type: recall_at_10
value: 29.743000000000002
- type: recall_at_100
value: 53.735
- type: recall_at_1000
value: 76.785
- type: recall_at_20
value: 36.624
- type: recall_at_3
value: 21.583
- type: recall_at_5
value: 24.937
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: main_score
value: 16.306
- type: map_at_1
value: 8.802999999999999
- type: map_at_10
value: 13.148000000000001
- type: map_at_100
value: 13.971
- type: map_at_1000
value: 14.105
- type: map_at_20
value: 13.529
- type: map_at_3
value: 11.638
- type: map_at_5
value: 12.356
- type: mrr_at_1
value: 11.073059360730593
- type: mrr_at_10
value: 15.919583967529165
- type: mrr_at_100
value: 16.709279732986573
- type: mrr_at_1000
value: 16.815285605955996
- type: mrr_at_20
value: 16.30432527215681
- type: mrr_at_3
value: 14.23135464231354
- type: mrr_at_5
value: 15.041856925418564
- type: nauc_map_at_1000_diff1
value: 30.659955136068056
- type: nauc_map_at_1000_max
value: 18.44163576415389
- type: nauc_map_at_1000_std
value: -3.8367034295883577
- type: nauc_map_at_100_diff1
value: 30.67476361799846
- type: nauc_map_at_100_max
value: 18.428682857132582
- type: nauc_map_at_100_std
value: -3.8897179777637882
- type: nauc_map_at_10_diff1
value: 30.59247711976844
- type: nauc_map_at_10_max
value: 18.705778597272683
- type: nauc_map_at_10_std
value: -5.022221490794733
- type: nauc_map_at_1_diff1
value: 40.141433107510736
- type: nauc_map_at_1_max
value: 23.026643526851306
- type: nauc_map_at_1_std
value: -5.749563342494851
- type: nauc_map_at_20_diff1
value: 30.68509526178602
- type: nauc_map_at_20_max
value: 18.45627985639005
- type: nauc_map_at_20_std
value: -4.406952661617948
- type: nauc_map_at_3_diff1
value: 31.73558283054405
- type: nauc_map_at_3_max
value: 18.205161864303328
- type: nauc_map_at_3_std
value: -5.435667326361934
- type: nauc_map_at_5_diff1
value: 30.794538196458472
- type: nauc_map_at_5_max
value: 18.500170217691768
- type: nauc_map_at_5_std
value: -5.684418245921586
- type: nauc_mrr_at_1000_diff1
value: 29.43077651539303
- type: nauc_mrr_at_1000_max
value: 20.25130465933273
- type: nauc_mrr_at_1000_std
value: -4.403299701181712
- type: nauc_mrr_at_100_diff1
value: 29.42440095545253
- type: nauc_mrr_at_100_max
value: 20.262024168775454
- type: nauc_mrr_at_100_std
value: -4.46104833589502
- type: nauc_mrr_at_10_diff1
value: 29.557535725132624
- type: nauc_mrr_at_10_max
value: 20.517669578964018
- type: nauc_mrr_at_10_std
value: -4.768947635082991
- type: nauc_mrr_at_1_diff1
value: 37.4774948212758
- type: nauc_mrr_at_1_max
value: 23.439278749784055
- type: nauc_mrr_at_1_std
value: -5.157088191908156
- type: nauc_mrr_at_20_diff1
value: 29.48470932914118
- type: nauc_mrr_at_20_max
value: 20.278594953830762
- type: nauc_mrr_at_20_std
value: -4.705845733248912
- type: nauc_mrr_at_3_diff1
value: 30.77059795240642
- type: nauc_mrr_at_3_max
value: 20.391982151070895
- type: nauc_mrr_at_3_std
value: -5.0478682718453385
- type: nauc_mrr_at_5_diff1
value: 30.028856765971984
- type: nauc_mrr_at_5_max
value: 20.557553687197167
- type: nauc_mrr_at_5_std
value: -5.24319954121192
- type: nauc_ndcg_at_1000_diff1
value: 27.40711483349399
- type: nauc_ndcg_at_1000_max
value: 17.126369493537826
- type: nauc_ndcg_at_1000_std
value: 0.5342836524997823
- type: nauc_ndcg_at_100_diff1
value: 27.711441526870356
- type: nauc_ndcg_at_100_max
value: 17.276247470704032
- type: nauc_ndcg_at_100_std
value: -0.8750376980385484
- type: nauc_ndcg_at_10_diff1
value: 27.720574369240204
- type: nauc_ndcg_at_10_max
value: 18.456829787593097
- type: nauc_ndcg_at_10_std
value: -4.216473937357797
- type: nauc_ndcg_at_1_diff1
value: 37.4774948212758
- type: nauc_ndcg_at_1_max
value: 23.439278749784055
- type: nauc_ndcg_at_1_std
value: -5.157088191908156
- type: nauc_ndcg_at_20_diff1
value: 27.746972988773933
- type: nauc_ndcg_at_20_max
value: 17.52494953980253
- type: nauc_ndcg_at_20_std
value: -2.9781030890977322
- type: nauc_ndcg_at_3_diff1
value: 29.522350537696717
- type: nauc_ndcg_at_3_max
value: 18.011604144671008
- type: nauc_ndcg_at_3_std
value: -4.725546369301677
- type: nauc_ndcg_at_5_diff1
value: 28.15851614794711
- type: nauc_ndcg_at_5_max
value: 18.317965726201184
- type: nauc_ndcg_at_5_std
value: -5.54058686011457
- type: nauc_precision_at_1000_diff1
value: 4.343913518236252
- type: nauc_precision_at_1000_max
value: 7.949664745091711
- type: nauc_precision_at_1000_std
value: 2.986855849342956
- type: nauc_precision_at_100_diff1
value: 15.435700494268618
- type: nauc_precision_at_100_max
value: 15.530490741404742
- type: nauc_precision_at_100_std
value: 4.089210125048146
- type: nauc_precision_at_10_diff1
value: 19.57474708128042
- type: nauc_precision_at_10_max
value: 19.632161038711597
- type: nauc_precision_at_10_std
value: -1.7830580435403458
- type: nauc_precision_at_1_diff1
value: 37.4774948212758
- type: nauc_precision_at_1_max
value: 23.439278749784055
- type: nauc_precision_at_1_std
value: -5.157088191908156
- type: nauc_precision_at_20_diff1
value: 20.568797026407644
- type: nauc_precision_at_20_max
value: 17.15052399771233
- type: nauc_precision_at_20_std
value: 0.6381100303472123
- type: nauc_precision_at_3_diff1
value: 23.53527003948809
- type: nauc_precision_at_3_max
value: 18.260774860471376
- type: nauc_precision_at_3_std
value: -4.277699429606214
- type: nauc_precision_at_5_diff1
value: 20.957492799575085
- type: nauc_precision_at_5_max
value: 20.041536239699173
- type: nauc_precision_at_5_std
value: -5.250189398148323
- type: nauc_recall_at_1000_diff1
value: 19.56836100145482
- type: nauc_recall_at_1000_max
value: 7.776560050916105
- type: nauc_recall_at_1000_std
value: 20.13708584784103
- type: nauc_recall_at_100_diff1
value: 22.16510567224014
- type: nauc_recall_at_100_max
value: 11.397641876417932
- type: nauc_recall_at_100_std
value: 7.58221141431797
- type: nauc_recall_at_10_diff1
value: 21.305911125564595
- type: nauc_recall_at_10_max
value: 15.61442350884527
- type: nauc_recall_at_10_std
value: -2.264275057856056
- type: nauc_recall_at_1_diff1
value: 40.141433107510736
- type: nauc_recall_at_1_max
value: 23.026643526851306
- type: nauc_recall_at_1_std
value: -5.749563342494851
- type: nauc_recall_at_20_diff1
value: 21.33360178111777
- type: nauc_recall_at_20_max
value: 13.007427262980725
- type: nauc_recall_at_20_std
value: 0.8315450930852684
- type: nauc_recall_at_3_diff1
value: 24.26871252397936
- type: nauc_recall_at_3_max
value: 13.78009182310998
- type: nauc_recall_at_3_std
value: -4.427807391785745
- type: nauc_recall_at_5_diff1
value: 22.146386144738443
- type: nauc_recall_at_5_max
value: 14.558261310921718
- type: nauc_recall_at_5_std
value: -5.453171833787222
- type: ndcg_at_1
value: 11.073
- type: ndcg_at_10
value: 16.306
- type: ndcg_at_100
value: 20.605
- type: ndcg_at_1000
value: 24.321
- type: ndcg_at_20
value: 17.605999999999998
- type: ndcg_at_3
value: 13.242
- type: ndcg_at_5
value: 14.424000000000001
- type: precision_at_1
value: 11.073
- type: precision_at_10
value: 3.174
- type: precision_at_100
value: 0.632
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 1.981
- type: precision_at_3
value: 6.317
- type: precision_at_5
value: 4.658
- type: recall_at_1
value: 8.802999999999999
- type: recall_at_10
value: 23.294999999999998
- type: recall_at_100
value: 42.543
- type: recall_at_1000
value: 69.501
- type: recall_at_20
value: 27.788
- type: recall_at_3
value: 14.935
- type: recall_at_5
value: 17.862000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 19.211500000000004
- type: ndcg_at_10
value: 19.211500000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: main_score
value: 13.274
- type: map_at_1
value: 7.514
- type: map_at_10
value: 10.763
- type: map_at_100
value: 11.466
- type: map_at_1000
value: 11.565
- type: map_at_20
value: 11.153
- type: map_at_3
value: 9.489
- type: map_at_5
value: 10.05
- type: mrr_at_1
value: 9.049079754601227
- type: mrr_at_10
value: 12.66140812153082
- type: mrr_at_100
value: 13.34440731558096
- type: mrr_at_1000
value: 13.431250805407094
- type: mrr_at_20
value: 13.015821938908093
- type: mrr_at_3
value: 11.349693251533745
- type: mrr_at_5
value: 11.955521472392643
- type: nauc_map_at_1000_diff1
value: 22.974932209110474
- type: nauc_map_at_1000_max
value: 19.2179493418811
- type: nauc_map_at_1000_std
value: -4.027224925667458
- type: nauc_map_at_100_diff1
value: 23.00306330611636
- type: nauc_map_at_100_max
value: 19.279597737188887
- type: nauc_map_at_100_std
value: -4.054272921846715
- type: nauc_map_at_10_diff1
value: 23.185643422536508
- type: nauc_map_at_10_max
value: 19.620815876636478
- type: nauc_map_at_10_std
value: -4.67640325592363
- type: nauc_map_at_1_diff1
value: 29.800345069729406
- type: nauc_map_at_1_max
value: 23.87910907490326
- type: nauc_map_at_1_std
value: -6.320599828399073
- type: nauc_map_at_20_diff1
value: 23.142569498191413
- type: nauc_map_at_20_max
value: 19.48779289778967
- type: nauc_map_at_20_std
value: -4.111902735804231
- type: nauc_map_at_3_diff1
value: 25.743034910929975
- type: nauc_map_at_3_max
value: 20.90755349054651
- type: nauc_map_at_3_std
value: -5.380592645823912
- type: nauc_map_at_5_diff1
value: 23.42137416675548
- type: nauc_map_at_5_max
value: 19.329228837468158
- type: nauc_map_at_5_std
value: -5.563525004474619
- type: nauc_mrr_at_1000_diff1
value: 24.10086479687415
- type: nauc_mrr_at_1000_max
value: 20.398011792778824
- type: nauc_mrr_at_1000_std
value: -2.1446120511727957
- type: nauc_mrr_at_100_diff1
value: 24.115697677435794
- type: nauc_mrr_at_100_max
value: 20.458646264375886
- type: nauc_mrr_at_100_std
value: -2.151550159504517
- type: nauc_mrr_at_10_diff1
value: 24.293579862933555
- type: nauc_mrr_at_10_max
value: 20.839345603643498
- type: nauc_mrr_at_10_std
value: -2.480503488415708
- type: nauc_mrr_at_1_diff1
value: 31.141124432852486
- type: nauc_mrr_at_1_max
value: 25.3974393459875
- type: nauc_mrr_at_1_std
value: -4.603112328474119
- type: nauc_mrr_at_20_diff1
value: 24.199943135873237
- type: nauc_mrr_at_20_max
value: 20.685578492011537
- type: nauc_mrr_at_20_std
value: -2.216739386860867
- type: nauc_mrr_at_3_diff1
value: 27.18978712305054
- type: nauc_mrr_at_3_max
value: 21.95145492661433
- type: nauc_mrr_at_3_std
value: -3.3010871727045004
- type: nauc_mrr_at_5_diff1
value: 24.55785813047769
- type: nauc_mrr_at_5_max
value: 20.630334122680697
- type: nauc_mrr_at_5_std
value: -3.4751492733475713
- type: nauc_ndcg_at_1000_diff1
value: 18.214182224000904
- type: nauc_ndcg_at_1000_max
value: 15.022677670245125
- type: nauc_ndcg_at_1000_std
value: -1.2757783952996276
- type: nauc_ndcg_at_100_diff1
value: 19.45648169337917
- type: nauc_ndcg_at_100_max
value: 16.160731902664246
- type: nauc_ndcg_at_100_std
value: -1.2021617745185982
- type: nauc_ndcg_at_10_diff1
value: 20.78032928549088
- type: nauc_ndcg_at_10_max
value: 18.37701966895512
- type: nauc_ndcg_at_10_std
value: -2.859756963061105
- type: nauc_ndcg_at_1_diff1
value: 31.141124432852486
- type: nauc_ndcg_at_1_max
value: 25.3974393459875
- type: nauc_ndcg_at_1_std
value: -4.603112328474119
- type: nauc_ndcg_at_20_diff1
value: 20.568804870494365
- type: nauc_ndcg_at_20_max
value: 17.688797629532804
- type: nauc_ndcg_at_20_std
value: -1.601270033947706
- type: nauc_ndcg_at_3_diff1
value: 25.352168775398777
- type: nauc_ndcg_at_3_max
value: 20.42319619108203
- type: nauc_ndcg_at_3_std
value: -4.2521134409577845
- type: nauc_ndcg_at_5_diff1
value: 21.18713014585295
- type: nauc_ndcg_at_5_max
value: 17.939191093215953
- type: nauc_ndcg_at_5_std
value: -4.743032229404275
- type: nauc_precision_at_1000_diff1
value: 4.892829090188313
- type: nauc_precision_at_1000_max
value: 7.933069592889083
- type: nauc_precision_at_1000_std
value: 4.24278581923629
- type: nauc_precision_at_100_diff1
value: 13.066398116495034
- type: nauc_precision_at_100_max
value: 14.384247527346716
- type: nauc_precision_at_100_std
value: 6.056873634302884
- type: nauc_precision_at_10_diff1
value: 16.616656372852148
- type: nauc_precision_at_10_max
value: 18.665616620054436
- type: nauc_precision_at_10_std
value: 1.1124326621912484
- type: nauc_precision_at_1_diff1
value: 31.141124432852486
- type: nauc_precision_at_1_max
value: 25.3974393459875
- type: nauc_precision_at_1_std
value: -4.603112328474119
- type: nauc_precision_at_20_diff1
value: 17.294215780840165
- type: nauc_precision_at_20_max
value: 18.09538722850449
- type: nauc_precision_at_20_std
value: 5.524315844370954
- type: nauc_precision_at_3_diff1
value: 25.1866897673422
- type: nauc_precision_at_3_max
value: 19.72076391537079
- type: nauc_precision_at_3_std
value: -1.6649392928833502
- type: nauc_precision_at_5_diff1
value: 17.254095768389526
- type: nauc_precision_at_5_max
value: 16.94859363403111
- type: nauc_precision_at_5_std
value: -1.9187213027734356
- type: nauc_recall_at_1000_diff1
value: 2.1491291924120404
- type: nauc_recall_at_1000_max
value: -0.6564763388554173
- type: nauc_recall_at_1000_std
value: 2.480520716627822
- type: nauc_recall_at_100_diff1
value: 10.764856128055248
- type: nauc_recall_at_100_max
value: 6.734689971662489
- type: nauc_recall_at_100_std
value: 3.0407690200004334
- type: nauc_recall_at_10_diff1
value: 14.979718773625542
- type: nauc_recall_at_10_max
value: 14.109838347838258
- type: nauc_recall_at_10_std
value: -0.5378433013187329
- type: nauc_recall_at_1_diff1
value: 29.800345069729406
- type: nauc_recall_at_1_max
value: 23.87910907490326
- type: nauc_recall_at_1_std
value: -6.320599828399073
- type: nauc_recall_at_20_diff1
value: 14.511882633459333
- type: nauc_recall_at_20_max
value: 12.011480653201415
- type: nauc_recall_at_20_std
value: 2.0767690218465877
- type: nauc_recall_at_3_diff1
value: 20.6626126323687
- type: nauc_recall_at_3_max
value: 17.25857728630443
- type: nauc_recall_at_3_std
value: -3.7939883071411717
- type: nauc_recall_at_5_diff1
value: 14.1235036082108
- type: nauc_recall_at_5_max
value: 12.727411826064857
- type: nauc_recall_at_5_std
value: -4.60850604165874
- type: ndcg_at_1
value: 9.049
- type: ndcg_at_10
value: 13.274
- type: ndcg_at_100
value: 17.086000000000002
- type: ndcg_at_1000
value: 19.936999999999998
- type: ndcg_at_20
value: 14.582999999999998
- type: ndcg_at_3
value: 10.725999999999999
- type: ndcg_at_5
value: 11.623
- type: precision_at_1
value: 9.049
- type: precision_at_10
value: 2.423
- type: precision_at_100
value: 0.479
- type: precision_at_1000
value: 0.079
- type: precision_at_20
value: 1.526
- type: precision_at_3
value: 4.9590000000000005
- type: precision_at_5
value: 3.62
- type: recall_at_1
value: 7.514
- type: recall_at_10
value: 19.31
- type: recall_at_100
value: 37.413999999999994
- type: recall_at_1000
value: 59.021
- type: recall_at_20
value: 24.21
- type: recall_at_3
value: 12.113999999999999
- type: recall_at_5
value: 14.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 10.994
- type: map_at_1
value: 6.225
- type: map_at_10
value: 8.953999999999999
- type: map_at_100
value: 9.603
- type: map_at_1000
value: 9.712
- type: map_at_20
value: 9.278
- type: map_at_3
value: 8.074
- type: map_at_5
value: 8.547
- type: mrr_at_1
value: 7.708189951823813
- type: mrr_at_10
value: 11.010238805317954
- type: mrr_at_100
value: 11.697852969394127
- type: mrr_at_1000
value: 11.788096222755389
- type: mrr_at_20
value: 11.36125747114887
- type: mrr_at_3
value: 9.967882541867406
- type: mrr_at_5
value: 10.53223216334021
- type: nauc_map_at_1000_diff1
value: 28.62895539988389
- type: nauc_map_at_1000_max
value: 16.242894414293037
- type: nauc_map_at_1000_std
value: -4.569604418870727
- type: nauc_map_at_100_diff1
value: 28.61807781605406
- type: nauc_map_at_100_max
value: 16.21900205663456
- type: nauc_map_at_100_std
value: -4.742228052779668
- type: nauc_map_at_10_diff1
value: 29.55698899178743
- type: nauc_map_at_10_max
value: 16.619065435982105
- type: nauc_map_at_10_std
value: -5.272914850396907
- type: nauc_map_at_1_diff1
value: 38.11099020611636
- type: nauc_map_at_1_max
value: 19.754663729177466
- type: nauc_map_at_1_std
value: -7.100435784719483
- type: nauc_map_at_20_diff1
value: 28.96213016918891
- type: nauc_map_at_20_max
value: 16.40536013245705
- type: nauc_map_at_20_std
value: -5.152060847207817
- type: nauc_map_at_3_diff1
value: 31.518330681088514
- type: nauc_map_at_3_max
value: 17.648594434363673
- type: nauc_map_at_3_std
value: -5.013522244046003
- type: nauc_map_at_5_diff1
value: 30.53555288667588
- type: nauc_map_at_5_max
value: 17.552873944829003
- type: nauc_map_at_5_std
value: -5.459559007946099
- type: nauc_mrr_at_1000_diff1
value: 28.56870451139856
- type: nauc_mrr_at_1000_max
value: 18.199477946334998
- type: nauc_mrr_at_1000_std
value: -3.83210753499382
- type: nauc_mrr_at_100_diff1
value: 28.55289316686771
- type: nauc_mrr_at_100_max
value: 18.190933266659705
- type: nauc_mrr_at_100_std
value: -3.910114024174217
- type: nauc_mrr_at_10_diff1
value: 29.44010525180224
- type: nauc_mrr_at_10_max
value: 18.5618742276953
- type: nauc_mrr_at_10_std
value: -4.318500155132472
- type: nauc_mrr_at_1_diff1
value: 37.756041398612425
- type: nauc_mrr_at_1_max
value: 22.180382124822522
- type: nauc_mrr_at_1_std
value: -6.881985725496932
- type: nauc_mrr_at_20_diff1
value: 28.862633708506863
- type: nauc_mrr_at_20_max
value: 18.368745544312883
- type: nauc_mrr_at_20_std
value: -4.231869471717514
- type: nauc_mrr_at_3_diff1
value: 31.67790485910417
- type: nauc_mrr_at_3_max
value: 20.067426011874694
- type: nauc_mrr_at_3_std
value: -4.35750935851484
- type: nauc_mrr_at_5_diff1
value: 30.3892346503623
- type: nauc_mrr_at_5_max
value: 19.427471974651258
- type: nauc_mrr_at_5_std
value: -4.501090877808792
- type: nauc_ndcg_at_1000_diff1
value: 23.124264919835152
- type: nauc_ndcg_at_1000_max
value: 13.725127541654583
- type: nauc_ndcg_at_1000_std
value: 0.8488267118015322
- type: nauc_ndcg_at_100_diff1
value: 22.931912676541813
- type: nauc_ndcg_at_100_max
value: 13.573133160305714
- type: nauc_ndcg_at_100_std
value: -1.9712575029716004
- type: nauc_ndcg_at_10_diff1
value: 26.49225179330549
- type: nauc_ndcg_at_10_max
value: 15.334589645844614
- type: nauc_ndcg_at_10_std
value: -4.732200420388755
- type: nauc_ndcg_at_1_diff1
value: 37.756041398612425
- type: nauc_ndcg_at_1_max
value: 22.180382124822522
- type: nauc_ndcg_at_1_std
value: -6.881985725496932
- type: nauc_ndcg_at_20_diff1
value: 24.758487984247115
- type: nauc_ndcg_at_20_max
value: 14.685319575357777
- type: nauc_ndcg_at_20_std
value: -4.432729957713687
- type: nauc_ndcg_at_3_diff1
value: 30.04172743163936
- type: nauc_ndcg_at_3_max
value: 17.942422342704166
- type: nauc_ndcg_at_3_std
value: -4.371869609553122
- type: nauc_ndcg_at_5_diff1
value: 28.394597447013364
- type: nauc_ndcg_at_5_max
value: 17.337563726465902
- type: nauc_ndcg_at_5_std
value: -4.979815289974346
- type: nauc_precision_at_1000_diff1
value: 13.358015963281982
- type: nauc_precision_at_1000_max
value: 13.588027398642533
- type: nauc_precision_at_1000_std
value: 16.038391304073617
- type: nauc_precision_at_100_diff1
value: 14.048154067920237
- type: nauc_precision_at_100_max
value: 13.442039272771812
- type: nauc_precision_at_100_std
value: 6.293550136432713
- type: nauc_precision_at_10_diff1
value: 19.7938197345429
- type: nauc_precision_at_10_max
value: 15.498999930693053
- type: nauc_precision_at_10_std
value: -2.820921985501471
- type: nauc_precision_at_1_diff1
value: 37.756041398612425
- type: nauc_precision_at_1_max
value: 22.180382124822522
- type: nauc_precision_at_1_std
value: -6.881985725496932
- type: nauc_precision_at_20_diff1
value: 16.86330177780297
- type: nauc_precision_at_20_max
value: 14.757498925286052
- type: nauc_precision_at_20_std
value: -1.4878113085077458
- type: nauc_precision_at_3_diff1
value: 26.22068335923554
- type: nauc_precision_at_3_max
value: 19.552244504819107
- type: nauc_precision_at_3_std
value: -2.903836612504541
- type: nauc_precision_at_5_diff1
value: 23.01543740291806
- type: nauc_precision_at_5_max
value: 18.976238791156298
- type: nauc_precision_at_5_std
value: -3.772870601995056
- type: nauc_recall_at_1000_diff1
value: 11.344856628291772
- type: nauc_recall_at_1000_max
value: 5.496064714954898
- type: nauc_recall_at_1000_std
value: 14.552915745152944
- type: nauc_recall_at_100_diff1
value: 11.37183345326816
- type: nauc_recall_at_100_max
value: 6.152609534633153
- type: nauc_recall_at_100_std
value: 3.3240506595168617
- type: nauc_recall_at_10_diff1
value: 19.414706457137537
- type: nauc_recall_at_10_max
value: 10.013408222848447
- type: nauc_recall_at_10_std
value: -4.469998335412016
- type: nauc_recall_at_1_diff1
value: 38.11099020611636
- type: nauc_recall_at_1_max
value: 19.754663729177466
- type: nauc_recall_at_1_std
value: -7.100435784719483
- type: nauc_recall_at_20_diff1
value: 15.570619584248163
- type: nauc_recall_at_20_max
value: 8.816676896160281
- type: nauc_recall_at_20_std
value: -3.7706693105174836
- type: nauc_recall_at_3_diff1
value: 25.664091285326485
- type: nauc_recall_at_3_max
value: 14.868700645447488
- type: nauc_recall_at_3_std
value: -3.5813114627791736
- type: nauc_recall_at_5_diff1
value: 22.650699032516435
- type: nauc_recall_at_5_max
value: 14.046776424466485
- type: nauc_recall_at_5_std
value: -5.072422590207594
- type: ndcg_at_1
value: 7.707999999999999
- type: ndcg_at_10
value: 10.994
- type: ndcg_at_100
value: 14.562
- type: ndcg_at_1000
value: 17.738
- type: ndcg_at_20
value: 12.152000000000001
- type: ndcg_at_3
value: 9.286999999999999
- type: ndcg_at_5
value: 10.057
- type: precision_at_1
value: 7.707999999999999
- type: precision_at_10
value: 2.068
- type: precision_at_100
value: 0.466
- type: precision_at_1000
value: 0.08800000000000001
- type: precision_at_20
value: 1.352
- type: precision_at_3
value: 4.508
- type: precision_at_5
value: 3.3169999999999997
- type: recall_at_1
value: 6.225
- type: recall_at_10
value: 15.177999999999999
- type: recall_at_100
value: 31.726
- type: recall_at_1000
value: 55.286
- type: recall_at_20
value: 19.516
- type: recall_at_3
value: 10.381
- type: recall_at_5
value: 12.354999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: main_score
value: 17.415
- type: map_at_1
value: 11.61
- type: map_at_10
value: 14.879000000000001
- type: map_at_100
value: 15.64
- type: map_at_1000
value: 15.744
- type: map_at_20
value: 15.222
- type: map_at_3
value: 13.818
- type: map_at_5
value: 14.221
- type: mrr_at_1
value: 14.085820895522389
- type: mrr_at_10
value: 17.784144752428336
- type: mrr_at_100
value: 18.59055632302295
- type: mrr_at_1000
value: 18.680733729013262
- type: mrr_at_20
value: 18.159102701666594
- type: mrr_at_3
value: 16.68221393034826
- type: mrr_at_5
value: 17.10665422885572
- type: nauc_map_at_1000_diff1
value: 39.56056915227938
- type: nauc_map_at_1000_max
value: 27.13397943596498
- type: nauc_map_at_1000_std
value: -7.0908382945611175
- type: nauc_map_at_100_diff1
value: 39.54030188989168
- type: nauc_map_at_100_max
value: 27.13281562979474
- type: nauc_map_at_100_std
value: -7.165159503138965
- type: nauc_map_at_10_diff1
value: 40.318171341397765
- type: nauc_map_at_10_max
value: 27.535451283580016
- type: nauc_map_at_10_std
value: -7.689737441073707
- type: nauc_map_at_1_diff1
value: 47.05601088674895
- type: nauc_map_at_1_max
value: 30.576608334052853
- type: nauc_map_at_1_std
value: -9.67702524348975
- type: nauc_map_at_20_diff1
value: 39.80136558735939
- type: nauc_map_at_20_max
value: 27.051853945437948
- type: nauc_map_at_20_std
value: -7.409144616339466
- type: nauc_map_at_3_diff1
value: 42.15633029927089
- type: nauc_map_at_3_max
value: 28.386143076096086
- type: nauc_map_at_3_std
value: -9.106105164113686
- type: nauc_map_at_5_diff1
value: 41.46860741828094
- type: nauc_map_at_5_max
value: 28.202178480215373
- type: nauc_map_at_5_std
value: -8.399626801433124
- type: nauc_mrr_at_1000_diff1
value: 37.78472411053756
- type: nauc_mrr_at_1000_max
value: 28.338277069066432
- type: nauc_mrr_at_1000_std
value: -7.391912169514899
- type: nauc_mrr_at_100_diff1
value: 37.74697100045658
- type: nauc_mrr_at_100_max
value: 28.35832528792151
- type: nauc_mrr_at_100_std
value: -7.4298805804754995
- type: nauc_mrr_at_10_diff1
value: 38.428674914285196
- type: nauc_mrr_at_10_max
value: 28.708508212507105
- type: nauc_mrr_at_10_std
value: -7.884064754659524
- type: nauc_mrr_at_1_diff1
value: 45.69997352898185
- type: nauc_mrr_at_1_max
value: 32.47880480030532
- type: nauc_mrr_at_1_std
value: -9.337266605729418
- type: nauc_mrr_at_20_diff1
value: 37.99989625388078
- type: nauc_mrr_at_20_max
value: 28.255616608253824
- type: nauc_mrr_at_20_std
value: -7.614369324242356
- type: nauc_mrr_at_3_diff1
value: 40.126736669268766
- type: nauc_mrr_at_3_max
value: 29.616770044400464
- type: nauc_mrr_at_3_std
value: -9.336882852739908
- type: nauc_mrr_at_5_diff1
value: 39.41517859913304
- type: nauc_mrr_at_5_max
value: 29.312224024493094
- type: nauc_mrr_at_5_std
value: -8.792379282413792
- type: nauc_ndcg_at_1000_diff1
value: 34.318717429678735
- type: nauc_ndcg_at_1000_max
value: 24.57185685965525
- type: nauc_ndcg_at_1000_std
value: -2.367526484055821
- type: nauc_ndcg_at_100_diff1
value: 33.59453283807552
- type: nauc_ndcg_at_100_max
value: 24.73858681825266
- type: nauc_ndcg_at_100_std
value: -4.087141295771279
- type: nauc_ndcg_at_10_diff1
value: 36.635105955522235
- type: nauc_ndcg_at_10_max
value: 25.975386842872318
- type: nauc_ndcg_at_10_std
value: -6.3751364798979315
- type: nauc_ndcg_at_1_diff1
value: 45.69997352898185
- type: nauc_ndcg_at_1_max
value: 32.47880480030532
- type: nauc_ndcg_at_1_std
value: -9.337266605729418
- type: nauc_ndcg_at_20_diff1
value: 35.16876791291799
- type: nauc_ndcg_at_20_max
value: 24.477658044207647
- type: nauc_ndcg_at_20_std
value: -5.555064208738701
- type: nauc_ndcg_at_3_diff1
value: 39.82534185570945
- type: nauc_ndcg_at_3_max
value: 28.139721552476963
- type: nauc_ndcg_at_3_std
value: -9.160710946542384
- type: nauc_ndcg_at_5_diff1
value: 38.98115351105197
- type: nauc_ndcg_at_5_max
value: 27.515452028134202
- type: nauc_ndcg_at_5_std
value: -8.025551102160557
- type: nauc_precision_at_1000_diff1
value: 12.303392079476001
- type: nauc_precision_at_1000_max
value: 15.521101561430214
- type: nauc_precision_at_1000_std
value: 13.875729823362349
- type: nauc_precision_at_100_diff1
value: 15.718813920537666
- type: nauc_precision_at_100_max
value: 20.036566730817615
- type: nauc_precision_at_100_std
value: 5.068608226979542
- type: nauc_precision_at_10_diff1
value: 25.3121404066982
- type: nauc_precision_at_10_max
value: 24.190797754465372
- type: nauc_precision_at_10_std
value: -3.28815407741081
- type: nauc_precision_at_1_diff1
value: 45.69997352898185
- type: nauc_precision_at_1_max
value: 32.47880480030532
- type: nauc_precision_at_1_std
value: -9.337266605729418
- type: nauc_precision_at_20_diff1
value: 21.370193752136633
- type: nauc_precision_at_20_max
value: 19.74829392747058
- type: nauc_precision_at_20_std
value: -1.1434647531180093
- type: nauc_precision_at_3_diff1
value: 33.27263719269652
- type: nauc_precision_at_3_max
value: 27.28958835327579
- type: nauc_precision_at_3_std
value: -9.03699952848916
- type: nauc_precision_at_5_diff1
value: 31.109130426292463
- type: nauc_precision_at_5_max
value: 26.959336149040137
- type: nauc_precision_at_5_std
value: -6.946474296738139
- type: nauc_recall_at_1000_diff1
value: 17.923508430691957
- type: nauc_recall_at_1000_max
value: 10.80984639138324
- type: nauc_recall_at_1000_std
value: 17.38699739341662
- type: nauc_recall_at_100_diff1
value: 17.188512794168755
- type: nauc_recall_at_100_max
value: 15.470956979815659
- type: nauc_recall_at_100_std
value: 4.263468796063786
- type: nauc_recall_at_10_diff1
value: 27.628371666732892
- type: nauc_recall_at_10_max
value: 19.847290125705662
- type: nauc_recall_at_10_std
value: -2.718782096589473
- type: nauc_recall_at_1_diff1
value: 47.05601088674895
- type: nauc_recall_at_1_max
value: 30.576608334052853
- type: nauc_recall_at_1_std
value: -9.67702524348975
- type: nauc_recall_at_20_diff1
value: 23.787114240920214
- type: nauc_recall_at_20_max
value: 15.65621275614017
- type: nauc_recall_at_20_std
value: -0.6996887505536454
- type: nauc_recall_at_3_diff1
value: 37.16605995449111
- type: nauc_recall_at_3_max
value: 24.971735910807293
- type: nauc_recall_at_3_std
value: -8.874845333377282
- type: nauc_recall_at_5_diff1
value: 34.15194539098878
- type: nauc_recall_at_5_max
value: 23.788685123818407
- type: nauc_recall_at_5_std
value: -6.520745742182325
- type: ndcg_at_1
value: 14.086000000000002
- type: ndcg_at_10
value: 17.415
- type: ndcg_at_100
value: 21.705
- type: ndcg_at_1000
value: 24.851
- type: ndcg_at_20
value: 18.674
- type: ndcg_at_3
value: 15.369
- type: ndcg_at_5
value: 15.903
- type: precision_at_1
value: 14.086000000000002
- type: precision_at_10
value: 2.9010000000000002
- type: precision_at_100
value: 0.567
- type: precision_at_1000
value: 0.093
- type: precision_at_20
value: 1.754
- type: precision_at_3
value: 6.903
- type: precision_at_5
value: 4.571
- type: recall_at_1
value: 11.61
- type: recall_at_10
value: 22.543
- type: recall_at_100
value: 42.586
- type: recall_at_1000
value: 66.3
- type: recall_at_20
value: 27.296
- type: recall_at_3
value: 16.458000000000002
- type: recall_at_5
value: 18.087
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: main_score
value: 21.398
- type: map_at_1
value: 12.418
- type: map_at_10
value: 17.634
- type: map_at_100
value: 18.427
- type: map_at_1000
value: 18.601
- type: map_at_20
value: 17.949
- type: map_at_3
value: 16.070999999999998
- type: map_at_5
value: 16.909
- type: mrr_at_1
value: 16.007905138339922
- type: mrr_at_10
value: 21.244275048622875
- type: mrr_at_100
value: 21.913675154893422
- type: mrr_at_1000
value: 22.00394675539023
- type: mrr_at_20
value: 21.484105638892164
- type: mrr_at_3
value: 19.729907773386028
- type: mrr_at_5
value: 20.579710144927535
- type: nauc_map_at_1000_diff1
value: 33.276058954347164
- type: nauc_map_at_1000_max
value: 22.686785676254438
- type: nauc_map_at_1000_std
value: -15.623983007245663
- type: nauc_map_at_100_diff1
value: 33.277163035857754
- type: nauc_map_at_100_max
value: 22.79483533389435
- type: nauc_map_at_100_std
value: -15.806523169464585
- type: nauc_map_at_10_diff1
value: 33.31349011893446
- type: nauc_map_at_10_max
value: 23.16070733276047
- type: nauc_map_at_10_std
value: -16.557456309767332
- type: nauc_map_at_1_diff1
value: 43.560854870215444
- type: nauc_map_at_1_max
value: 22.785972852704127
- type: nauc_map_at_1_std
value: -17.629946377144794
- type: nauc_map_at_20_diff1
value: 33.570999449061176
- type: nauc_map_at_20_max
value: 22.993901876226587
- type: nauc_map_at_20_std
value: -16.272939675166977
- type: nauc_map_at_3_diff1
value: 35.03763295449743
- type: nauc_map_at_3_max
value: 22.445582103531297
- type: nauc_map_at_3_std
value: -16.560038144492275
- type: nauc_map_at_5_diff1
value: 34.27964006257987
- type: nauc_map_at_5_max
value: 23.332248714244795
- type: nauc_map_at_5_std
value: -16.57243447707981
- type: nauc_mrr_at_1000_diff1
value: 32.944240054080296
- type: nauc_mrr_at_1000_max
value: 21.812793329305745
- type: nauc_mrr_at_1000_std
value: -13.642145832181225
- type: nauc_mrr_at_100_diff1
value: 32.92776460042595
- type: nauc_mrr_at_100_max
value: 21.791203022888052
- type: nauc_mrr_at_100_std
value: -13.640560468524749
- type: nauc_mrr_at_10_diff1
value: 32.9752685024834
- type: nauc_mrr_at_10_max
value: 22.104988021339146
- type: nauc_mrr_at_10_std
value: -14.271356854639786
- type: nauc_mrr_at_1_diff1
value: 42.51316330983356
- type: nauc_mrr_at_1_max
value: 23.297138888078976
- type: nauc_mrr_at_1_std
value: -14.903606813837882
- type: nauc_mrr_at_20_diff1
value: 33.22223363073958
- type: nauc_mrr_at_20_max
value: 21.974295331873055
- type: nauc_mrr_at_20_std
value: -13.88205443342369
- type: nauc_mrr_at_3_diff1
value: 33.993832814261395
- type: nauc_mrr_at_3_max
value: 21.556945052605887
- type: nauc_mrr_at_3_std
value: -13.797171517214505
- type: nauc_mrr_at_5_diff1
value: 33.35409476101201
- type: nauc_mrr_at_5_max
value: 21.981426511175837
- type: nauc_mrr_at_5_std
value: -14.09531063812787
- type: nauc_ndcg_at_1000_diff1
value: 29.438860831545004
- type: nauc_ndcg_at_1000_max
value: 21.25973393436945
- type: nauc_ndcg_at_1000_std
value: -11.16393916502227
- type: nauc_ndcg_at_100_diff1
value: 28.444184419510172
- type: nauc_ndcg_at_100_max
value: 21.18616561891909
- type: nauc_ndcg_at_100_std
value: -12.037980607459001
- type: nauc_ndcg_at_10_diff1
value: 29.271087139678205
- type: nauc_ndcg_at_10_max
value: 22.032768110468098
- type: nauc_ndcg_at_10_std
value: -15.467782849927971
- type: nauc_ndcg_at_1_diff1
value: 42.51316330983356
- type: nauc_ndcg_at_1_max
value: 23.297138888078976
- type: nauc_ndcg_at_1_std
value: -14.903606813837882
- type: nauc_ndcg_at_20_diff1
value: 30.46132048728029
- type: nauc_ndcg_at_20_max
value: 21.81477297472493
- type: nauc_ndcg_at_20_std
value: -14.218418166481491
- type: nauc_ndcg_at_3_diff1
value: 32.0153358591922
- type: nauc_ndcg_at_3_max
value: 20.770546204709458
- type: nauc_ndcg_at_3_std
value: -14.747432002736549
- type: nauc_ndcg_at_5_diff1
value: 30.981699893250898
- type: nauc_ndcg_at_5_max
value: 22.090548813686304
- type: nauc_ndcg_at_5_std
value: -15.09612387707668
- type: nauc_precision_at_1000_diff1
value: 7.2014592078746125
- type: nauc_precision_at_1000_max
value: -5.678465880888778
- type: nauc_precision_at_1000_std
value: 22.430084503019
- type: nauc_precision_at_100_diff1
value: 7.47376139946301
- type: nauc_precision_at_100_max
value: 2.300260757829557
- type: nauc_precision_at_100_std
value: 13.810673946221709
- type: nauc_precision_at_10_diff1
value: 15.542740121996912
- type: nauc_precision_at_10_max
value: 15.807667200751279
- type: nauc_precision_at_10_std
value: -9.58878382311598
- type: nauc_precision_at_1_diff1
value: 42.51316330983356
- type: nauc_precision_at_1_max
value: 23.297138888078976
- type: nauc_precision_at_1_std
value: -14.903606813837882
- type: nauc_precision_at_20_diff1
value: 17.44141625096109
- type: nauc_precision_at_20_max
value: 12.987380515646793
- type: nauc_precision_at_20_std
value: -3.3241327401895018
- type: nauc_precision_at_3_diff1
value: 24.31306633873876
- type: nauc_precision_at_3_max
value: 20.59991114197874
- type: nauc_precision_at_3_std
value: -12.702555430555881
- type: nauc_precision_at_5_diff1
value: 21.113937977245538
- type: nauc_precision_at_5_max
value: 19.40330569402618
- type: nauc_precision_at_5_std
value: -11.001297546039366
- type: nauc_recall_at_1000_diff1
value: 14.316639289353503
- type: nauc_recall_at_1000_max
value: 14.663280590084184
- type: nauc_recall_at_1000_std
value: 10.373834237194783
- type: nauc_recall_at_100_diff1
value: 14.159748016577145
- type: nauc_recall_at_100_max
value: 15.266942159548291
- type: nauc_recall_at_100_std
value: 0.09898266158022606
- type: nauc_recall_at_10_diff1
value: 19.311511962157848
- type: nauc_recall_at_10_max
value: 21.086642659351444
- type: nauc_recall_at_10_std
value: -15.03280805118371
- type: nauc_recall_at_1_diff1
value: 43.560854870215444
- type: nauc_recall_at_1_max
value: 22.785972852704127
- type: nauc_recall_at_1_std
value: -17.629946377144794
- type: nauc_recall_at_20_diff1
value: 22.84188696362324
- type: nauc_recall_at_20_max
value: 19.255833980651115
- type: nauc_recall_at_20_std
value: -10.769401250685878
- type: nauc_recall_at_3_diff1
value: 25.289776971942963
- type: nauc_recall_at_3_max
value: 19.495340268606647
- type: nauc_recall_at_3_std
value: -14.682485696338162
- type: nauc_recall_at_5_diff1
value: 23.28267489764339
- type: nauc_recall_at_5_max
value: 21.90368937976734
- type: nauc_recall_at_5_std
value: -15.19826645274188
- type: ndcg_at_1
value: 16.008
- type: ndcg_at_10
value: 21.398
- type: ndcg_at_100
value: 25.241999999999997
- type: ndcg_at_1000
value: 28.833
- type: ndcg_at_20
value: 22.234
- type: ndcg_at_3
value: 18.86
- type: ndcg_at_5
value: 20.037
- type: precision_at_1
value: 16.008
- type: precision_at_10
value: 4.328
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_20
value: 2.579
- type: precision_at_3
value: 9.157
- type: precision_at_5
value: 6.837999999999999
- type: recall_at_1
value: 12.418
- type: recall_at_10
value: 27.935
- type: recall_at_100
value: 47.525
- type: recall_at_1000
value: 72.146
- type: recall_at_20
value: 31.861
- type: recall_at_3
value: 20.148
- type: recall_at_5
value: 23.296
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 13.536999999999999
- type: map_at_1
value: 7.468
- type: map_at_10
value: 10.972999999999999
- type: map_at_100
value: 11.744
- type: map_at_1000
value: 11.854000000000001
- type: map_at_20
value: 11.336
- type: map_at_3
value: 9.618
- type: map_at_5
value: 10.205
- type: mrr_at_1
value: 8.317929759704251
- type: mrr_at_10
value: 12.179752369216331
- type: mrr_at_100
value: 12.980085498763907
- type: mrr_at_1000
value: 13.075701345231755
- type: mrr_at_20
value: 12.550195110376356
- type: mrr_at_3
value: 10.659272951324708
- type: mrr_at_5
value: 11.30622304374615
- type: nauc_map_at_1000_diff1
value: 25.499689183541758
- type: nauc_map_at_1000_max
value: 26.492088085006486
- type: nauc_map_at_1000_std
value: -10.29049248054652
- type: nauc_map_at_100_diff1
value: 25.573124155292685
- type: nauc_map_at_100_max
value: 26.56159077339433
- type: nauc_map_at_100_std
value: -10.400824123310946
- type: nauc_map_at_10_diff1
value: 25.485224554587006
- type: nauc_map_at_10_max
value: 26.83491339438951
- type: nauc_map_at_10_std
value: -11.212653836584204
- type: nauc_map_at_1_diff1
value: 33.63991109177576
- type: nauc_map_at_1_max
value: 34.23354700535017
- type: nauc_map_at_1_std
value: -13.602316051776613
- type: nauc_map_at_20_diff1
value: 25.401091624302076
- type: nauc_map_at_20_max
value: 26.619190203647534
- type: nauc_map_at_20_std
value: -10.956292541627727
- type: nauc_map_at_3_diff1
value: 26.825203283397762
- type: nauc_map_at_3_max
value: 27.86659163589406
- type: nauc_map_at_3_std
value: -11.12760272108276
- type: nauc_map_at_5_diff1
value: 25.95917424438333
- type: nauc_map_at_5_max
value: 26.96719585977185
- type: nauc_map_at_5_std
value: -12.304191598798255
- type: nauc_mrr_at_1000_diff1
value: 26.058089211778814
- type: nauc_mrr_at_1000_max
value: 25.715522107102462
- type: nauc_mrr_at_1000_std
value: -9.26865979619022
- type: nauc_mrr_at_100_diff1
value: 26.098211857983944
- type: nauc_mrr_at_100_max
value: 25.751358106929445
- type: nauc_mrr_at_100_std
value: -9.348646640329418
- type: nauc_mrr_at_10_diff1
value: 26.245525532384857
- type: nauc_mrr_at_10_max
value: 25.751651308654733
- type: nauc_mrr_at_10_std
value: -10.162612510927444
- type: nauc_mrr_at_1_diff1
value: 33.74283305857714
- type: nauc_mrr_at_1_max
value: 33.58837545702206
- type: nauc_mrr_at_1_std
value: -11.623065310526266
- type: nauc_mrr_at_20_diff1
value: 25.889783688319756
- type: nauc_mrr_at_20_max
value: 25.752118615901914
- type: nauc_mrr_at_20_std
value: -9.822357050457521
- type: nauc_mrr_at_3_diff1
value: 27.564445527656073
- type: nauc_mrr_at_3_max
value: 27.360005995543013
- type: nauc_mrr_at_3_std
value: -9.833890331593217
- type: nauc_mrr_at_5_diff1
value: 26.822524992606787
- type: nauc_mrr_at_5_max
value: 26.284478920424583
- type: nauc_mrr_at_5_std
value: -11.036920037435278
- type: nauc_ndcg_at_1000_diff1
value: 22.865864500824603
- type: nauc_ndcg_at_1000_max
value: 22.771334973757252
- type: nauc_ndcg_at_1000_std
value: -4.391248945624055
- type: nauc_ndcg_at_100_diff1
value: 24.137939988386144
- type: nauc_ndcg_at_100_max
value: 23.87513301750976
- type: nauc_ndcg_at_100_std
value: -6.566673889142541
- type: nauc_ndcg_at_10_diff1
value: 23.28670973899235
- type: nauc_ndcg_at_10_max
value: 24.466850763499494
- type: nauc_ndcg_at_10_std
value: -10.258177551014816
- type: nauc_ndcg_at_1_diff1
value: 33.74283305857714
- type: nauc_ndcg_at_1_max
value: 33.58837545702206
- type: nauc_ndcg_at_1_std
value: -11.623065310526266
- type: nauc_ndcg_at_20_diff1
value: 22.989442500386524
- type: nauc_ndcg_at_20_max
value: 24.104082915814125
- type: nauc_ndcg_at_20_std
value: -9.45785928337488
- type: nauc_ndcg_at_3_diff1
value: 25.178014460273445
- type: nauc_ndcg_at_3_max
value: 25.942767533173754
- type: nauc_ndcg_at_3_std
value: -9.91363038933204
- type: nauc_ndcg_at_5_diff1
value: 23.991757042799776
- type: nauc_ndcg_at_5_max
value: 24.67696954394957
- type: nauc_ndcg_at_5_std
value: -12.31985800626722
- type: nauc_precision_at_1000_diff1
value: 8.73756056198236
- type: nauc_precision_at_1000_max
value: -2.2039393198217896
- type: nauc_precision_at_1000_std
value: 11.030221537933079
- type: nauc_precision_at_100_diff1
value: 20.215172391403144
- type: nauc_precision_at_100_max
value: 17.018645260191438
- type: nauc_precision_at_100_std
value: 3.767328710045164
- type: nauc_precision_at_10_diff1
value: 17.587454651591
- type: nauc_precision_at_10_max
value: 18.519756223864587
- type: nauc_precision_at_10_std
value: -7.57980264597448
- type: nauc_precision_at_1_diff1
value: 33.74283305857714
- type: nauc_precision_at_1_max
value: 33.58837545702206
- type: nauc_precision_at_1_std
value: -11.623065310526266
- type: nauc_precision_at_20_diff1
value: 16.8264764027673
- type: nauc_precision_at_20_max
value: 17.684383034724306
- type: nauc_precision_at_20_std
value: -4.715192266545397
- type: nauc_precision_at_3_diff1
value: 21.074816828033445
- type: nauc_precision_at_3_max
value: 21.203608983260384
- type: nauc_precision_at_3_std
value: -7.0598567996303165
- type: nauc_precision_at_5_diff1
value: 19.232226617012476
- type: nauc_precision_at_5_max
value: 18.21464537199811
- type: nauc_precision_at_5_std
value: -11.192063817701081
- type: nauc_recall_at_1000_diff1
value: 13.682126336330219
- type: nauc_recall_at_1000_max
value: 11.290148994929623
- type: nauc_recall_at_1000_std
value: 15.234970859087472
- type: nauc_recall_at_100_diff1
value: 21.54257810474028
- type: nauc_recall_at_100_max
value: 18.319728481896473
- type: nauc_recall_at_100_std
value: 1.8896944275133083
- type: nauc_recall_at_10_diff1
value: 18.303586564099813
- type: nauc_recall_at_10_max
value: 20.31707691425135
- type: nauc_recall_at_10_std
value: -8.56717254223721
- type: nauc_recall_at_1_diff1
value: 33.63991109177576
- type: nauc_recall_at_1_max
value: 34.23354700535017
- type: nauc_recall_at_1_std
value: -13.602316051776613
- type: nauc_recall_at_20_diff1
value: 18.133732998590617
- type: nauc_recall_at_20_max
value: 19.491824859679376
- type: nauc_recall_at_20_std
value: -6.958404447908455
- type: nauc_recall_at_3_diff1
value: 20.923379689287973
- type: nauc_recall_at_3_max
value: 22.305262469725605
- type: nauc_recall_at_3_std
value: -9.33545310798814
- type: nauc_recall_at_5_diff1
value: 18.697534927162877
- type: nauc_recall_at_5_max
value: 19.872464448638226
- type: nauc_recall_at_5_std
value: -13.201942499761413
- type: ndcg_at_1
value: 8.318
- type: ndcg_at_10
value: 13.536999999999999
- type: ndcg_at_100
value: 17.814
- type: ndcg_at_1000
value: 21.037
- type: ndcg_at_20
value: 14.795
- type: ndcg_at_3
value: 10.584
- type: ndcg_at_5
value: 11.631
- type: precision_at_1
value: 8.318
- type: precision_at_10
value: 2.348
- type: precision_at_100
value: 0.488
- type: precision_at_1000
value: 0.084
- type: precision_at_20
value: 1.4789999999999999
- type: precision_at_3
value: 4.559
- type: precision_at_5
value: 3.327
- type: recall_at_1
value: 7.468
- type: recall_at_10
value: 20.508000000000003
- type: recall_at_100
value: 40.969
- type: recall_at_1000
value: 66.01
- type: recall_at_20
value: 25.151
- type: recall_at_3
value: 12.187000000000001
- type: recall_at_5
value: 14.868
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 14.015
- type: map_at_1
value: 5.794
- type: map_at_10
value: 9.467
- type: map_at_100
value: 10.583
- type: map_at_1000
value: 10.738
- type: map_at_20
value: 10.019
- type: map_at_3
value: 7.800999999999999
- type: map_at_5
value: 8.530999999999999
- type: mrr_at_1
value: 12.37785016286645
- type: mrr_at_10
value: 19.195232924874603
- type: mrr_at_100
value: 20.36171753911915
- type: mrr_at_1000
value: 20.43422170175313
- type: mrr_at_20
value: 19.925433949052078
- type: mrr_at_3
value: 16.612377850162883
- type: mrr_at_5
value: 17.928338762214977
- type: nauc_map_at_1000_diff1
value: 30.77100530113992
- type: nauc_map_at_1000_max
value: 3.930399825338355
- type: nauc_map_at_1000_std
value: 19.339256296860647
- type: nauc_map_at_100_diff1
value: 30.731834293026033
- type: nauc_map_at_100_max
value: 3.9391965871824577
- type: nauc_map_at_100_std
value: 18.994224188430934
- type: nauc_map_at_10_diff1
value: 30.52002817023447
- type: nauc_map_at_10_max
value: 4.047355652304053
- type: nauc_map_at_10_std
value: 16.271456948493867
- type: nauc_map_at_1_diff1
value: 40.78221783055125
- type: nauc_map_at_1_max
value: 6.03643489529247
- type: nauc_map_at_1_std
value: 10.164994264153364
- type: nauc_map_at_20_diff1
value: 30.667265850525062
- type: nauc_map_at_20_max
value: 3.808011497380771
- type: nauc_map_at_20_std
value: 17.64597024700993
- type: nauc_map_at_3_diff1
value: 32.9882945525325
- type: nauc_map_at_3_max
value: 4.81442279492956
- type: nauc_map_at_3_std
value: 11.72899701083213
- type: nauc_map_at_5_diff1
value: 31.319747944398486
- type: nauc_map_at_5_max
value: 4.789346536725522
- type: nauc_map_at_5_std
value: 13.280932876910251
- type: nauc_mrr_at_1000_diff1
value: 28.72974681423866
- type: nauc_mrr_at_1000_max
value: 5.334428633833756
- type: nauc_mrr_at_1000_std
value: 21.94603472046183
- type: nauc_mrr_at_100_diff1
value: 28.71022403484308
- type: nauc_mrr_at_100_max
value: 5.333420382518744
- type: nauc_mrr_at_100_std
value: 21.95720361127466
- type: nauc_mrr_at_10_diff1
value: 28.123142846152966
- type: nauc_mrr_at_10_max
value: 5.476579464822251
- type: nauc_mrr_at_10_std
value: 20.85306394069719
- type: nauc_mrr_at_1_diff1
value: 34.81794628491484
- type: nauc_mrr_at_1_max
value: 6.5806430588232905
- type: nauc_mrr_at_1_std
value: 14.459527094653325
- type: nauc_mrr_at_20_diff1
value: 28.439259242098213
- type: nauc_mrr_at_20_max
value: 5.357148444191085
- type: nauc_mrr_at_20_std
value: 21.61419717452997
- type: nauc_mrr_at_3_diff1
value: 29.687849776616204
- type: nauc_mrr_at_3_max
value: 5.740633779727121
- type: nauc_mrr_at_3_std
value: 17.8879483888456
- type: nauc_mrr_at_5_diff1
value: 28.47430129361797
- type: nauc_mrr_at_5_max
value: 5.630703322113187
- type: nauc_mrr_at_5_std
value: 19.229576158387964
- type: nauc_ndcg_at_1000_diff1
value: 29.601902706390376
- type: nauc_ndcg_at_1000_max
value: 2.953924251677932
- type: nauc_ndcg_at_1000_std
value: 33.43699716309924
- type: nauc_ndcg_at_100_diff1
value: 28.61050534370323
- type: nauc_ndcg_at_100_max
value: 3.4205261114094623
- type: nauc_ndcg_at_100_std
value: 29.71705615290654
- type: nauc_ndcg_at_10_diff1
value: 27.08320442286844
- type: nauc_ndcg_at_10_max
value: 3.7887194412304863
- type: nauc_ndcg_at_10_std
value: 21.676623605562256
- type: nauc_ndcg_at_1_diff1
value: 34.81794628491484
- type: nauc_ndcg_at_1_max
value: 6.5806430588232905
- type: nauc_ndcg_at_1_std
value: 14.459527094653325
- type: nauc_ndcg_at_20_diff1
value: 27.787198576453758
- type: nauc_ndcg_at_20_max
value: 3.1540397427527713
- type: nauc_ndcg_at_20_std
value: 24.886749384694483
- type: nauc_ndcg_at_3_diff1
value: 29.951818040541088
- type: nauc_ndcg_at_3_max
value: 5.01579970046346
- type: nauc_ndcg_at_3_std
value: 15.279492475081327
- type: nauc_ndcg_at_5_diff1
value: 28.06492691727927
- type: nauc_ndcg_at_5_max
value: 4.89933436886099
- type: nauc_ndcg_at_5_std
value: 16.918642834035854
- type: nauc_precision_at_1000_diff1
value: 15.771733257364474
- type: nauc_precision_at_1000_max
value: 1.823845951487625
- type: nauc_precision_at_1000_std
value: 49.1852294234272
- type: nauc_precision_at_100_diff1
value: 18.265609570523985
- type: nauc_precision_at_100_max
value: 4.2756221878446885
- type: nauc_precision_at_100_std
value: 44.777126764828196
- type: nauc_precision_at_10_diff1
value: 17.001368989158973
- type: nauc_precision_at_10_max
value: 3.567699919296151
- type: nauc_precision_at_10_std
value: 32.23622509514423
- type: nauc_precision_at_1_diff1
value: 34.81794628491484
- type: nauc_precision_at_1_max
value: 6.5806430588232905
- type: nauc_precision_at_1_std
value: 14.459527094653325
- type: nauc_precision_at_20_diff1
value: 17.635731357627552
- type: nauc_precision_at_20_max
value: 3.034597543962715
- type: nauc_precision_at_20_std
value: 37.444737258116376
- type: nauc_precision_at_3_diff1
value: 22.582871559622486
- type: nauc_precision_at_3_max
value: 6.018578205165446
- type: nauc_precision_at_3_std
value: 19.760719025296815
- type: nauc_precision_at_5_diff1
value: 18.665624106588705
- type: nauc_precision_at_5_max
value: 5.618829486159042
- type: nauc_precision_at_5_std
value: 24.487192977269594
- type: nauc_recall_at_1000_diff1
value: 26.313094272841823
- type: nauc_recall_at_1000_max
value: -3.0358409209748767
- type: nauc_recall_at_1000_std
value: 52.23483909347241
- type: nauc_recall_at_100_diff1
value: 22.619825448361848
- type: nauc_recall_at_100_max
value: -0.48782855898636057
- type: nauc_recall_at_100_std
value: 39.456946722540245
- type: nauc_recall_at_10_diff1
value: 21.248191636390427
- type: nauc_recall_at_10_max
value: 1.057162598023577
- type: nauc_recall_at_10_std
value: 26.28529915222162
- type: nauc_recall_at_1_diff1
value: 40.78221783055125
- type: nauc_recall_at_1_max
value: 6.03643489529247
- type: nauc_recall_at_1_std
value: 10.164994264153364
- type: nauc_recall_at_20_diff1
value: 22.329681015763143
- type: nauc_recall_at_20_max
value: -0.9021963926705002
- type: nauc_recall_at_20_std
value: 31.423263430139137
- type: nauc_recall_at_3_diff1
value: 27.367759082174025
- type: nauc_recall_at_3_max
value: 3.9289202004328527
- type: nauc_recall_at_3_std
value: 13.622863131134919
- type: nauc_recall_at_5_diff1
value: 22.76288213235621
- type: nauc_recall_at_5_max
value: 3.471221773429057
- type: nauc_recall_at_5_std
value: 17.585600220417064
- type: ndcg_at_1
value: 12.378
- type: ndcg_at_10
value: 14.015
- type: ndcg_at_100
value: 19.555
- type: ndcg_at_1000
value: 22.979
- type: ndcg_at_20
value: 16.019
- type: ndcg_at_3
value: 10.780000000000001
- type: ndcg_at_5
value: 11.773
- type: precision_at_1
value: 12.378
- type: precision_at_10
value: 4.567
- type: precision_at_100
value: 1.035
- type: precision_at_1000
value: 0.166
- type: precision_at_20
value: 3.114
- type: precision_at_3
value: 7.926
- type: precision_at_5
value: 6.215
- type: recall_at_1
value: 5.794
- type: recall_at_10
value: 17.407
- type: recall_at_100
value: 37.191
- type: recall_at_1000
value: 56.851
- type: recall_at_20
value: 23.165
- type: recall_at_3
value: 9.713
- type: recall_at_5
value: 12.415
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 19.899
- type: map_at_1
value: 3.465
- type: map_at_10
value: 7.794
- type: map_at_100
value: 10.933
- type: map_at_1000
value: 11.752
- type: map_at_20
value: 9.016
- type: map_at_3
value: 5.427
- type: map_at_5
value: 6.502
- type: mrr_at_1
value: 34.75
- type: mrr_at_10
value: 45.200793650793656
- type: mrr_at_100
value: 46.05239344037991
- type: mrr_at_1000
value: 46.0856684337964
- type: mrr_at_20
value: 45.710684362077565
- type: mrr_at_3
value: 42.208333333333336
- type: mrr_at_5
value: 43.808333333333344
- type: nauc_map_at_1000_diff1
value: 18.86972613270399
- type: nauc_map_at_1000_max
value: 20.274156189253244
- type: nauc_map_at_1000_std
value: 22.191040122589133
- type: nauc_map_at_100_diff1
value: 18.788504382797093
- type: nauc_map_at_100_max
value: 18.991259275904696
- type: nauc_map_at_100_std
value: 19.224470200905856
- type: nauc_map_at_10_diff1
value: 18.750083550817912
- type: nauc_map_at_10_max
value: 10.317804767409177
- type: nauc_map_at_10_std
value: 4.146780937716071
- type: nauc_map_at_1_diff1
value: 24.593368387483753
- type: nauc_map_at_1_max
value: 4.589639725353537
- type: nauc_map_at_1_std
value: -8.92237341364795
- type: nauc_map_at_20_diff1
value: 18.991788660584362
- type: nauc_map_at_20_max
value: 13.525701435829877
- type: nauc_map_at_20_std
value: 10.505788067068151
- type: nauc_map_at_3_diff1
value: 18.3208401615434
- type: nauc_map_at_3_max
value: 9.337037518676164
- type: nauc_map_at_3_std
value: -3.652233530159517
- type: nauc_map_at_5_diff1
value: 18.092639410476284
- type: nauc_map_at_5_max
value: 10.092917720641017
- type: nauc_map_at_5_std
value: 0.17001723577182712
- type: nauc_mrr_at_1000_diff1
value: 29.78358698105705
- type: nauc_mrr_at_1000_max
value: 28.715621788566008
- type: nauc_mrr_at_1000_std
value: 22.028656730472925
- type: nauc_mrr_at_100_diff1
value: 29.790252324106998
- type: nauc_mrr_at_100_max
value: 28.742783310038494
- type: nauc_mrr_at_100_std
value: 22.03968708083945
- type: nauc_mrr_at_10_diff1
value: 29.438930345540236
- type: nauc_mrr_at_10_max
value: 28.65369065827219
- type: nauc_mrr_at_10_std
value: 21.78750467411176
- type: nauc_mrr_at_1_diff1
value: 35.330827390243996
- type: nauc_mrr_at_1_max
value: 26.56882708002626
- type: nauc_mrr_at_1_std
value: 21.623824720391546
- type: nauc_mrr_at_20_diff1
value: 29.738885034343433
- type: nauc_mrr_at_20_max
value: 28.757633233697227
- type: nauc_mrr_at_20_std
value: 21.94206110931751
- type: nauc_mrr_at_3_diff1
value: 30.084883512926936
- type: nauc_mrr_at_3_max
value: 28.504733195949854
- type: nauc_mrr_at_3_std
value: 21.343105616755405
- type: nauc_mrr_at_5_diff1
value: 29.162370505723974
- type: nauc_mrr_at_5_max
value: 28.302134300102317
- type: nauc_mrr_at_5_std
value: 21.967069891186686
- type: nauc_ndcg_at_1000_diff1
value: 21.5599701482179
- type: nauc_ndcg_at_1000_max
value: 19.60442562497246
- type: nauc_ndcg_at_1000_std
value: 38.57803059971978
- type: nauc_ndcg_at_100_diff1
value: 20.869754081262034
- type: nauc_ndcg_at_100_max
value: 17.061854693160267
- type: nauc_ndcg_at_100_std
value: 28.495912815567348
- type: nauc_ndcg_at_10_diff1
value: 21.68424149188379
- type: nauc_ndcg_at_10_max
value: 17.7957499268384
- type: nauc_ndcg_at_10_std
value: 20.329697185043177
- type: nauc_ndcg_at_1_diff1
value: 33.15797652004303
- type: nauc_ndcg_at_1_max
value: 19.169777835934728
- type: nauc_ndcg_at_1_std
value: 16.460300389696954
- type: nauc_ndcg_at_20_diff1
value: 20.980003079381408
- type: nauc_ndcg_at_20_max
value: 16.31240132872873
- type: nauc_ndcg_at_20_std
value: 21.336530494236147
- type: nauc_ndcg_at_3_diff1
value: 23.747010783899103
- type: nauc_ndcg_at_3_max
value: 20.514543159699503
- type: nauc_ndcg_at_3_std
value: 19.913679184651535
- type: nauc_ndcg_at_5_diff1
value: 21.811506356457578
- type: nauc_ndcg_at_5_max
value: 19.600228375339086
- type: nauc_ndcg_at_5_std
value: 20.80223119600392
- type: nauc_precision_at_1000_diff1
value: 7.616167380395875
- type: nauc_precision_at_1000_max
value: 24.36987688613695
- type: nauc_precision_at_1000_std
value: 28.517709442088883
- type: nauc_precision_at_100_diff1
value: 10.899372478558005
- type: nauc_precision_at_100_max
value: 32.52543047557354
- type: nauc_precision_at_100_std
value: 40.418143841067725
- type: nauc_precision_at_10_diff1
value: 12.454659530883022
- type: nauc_precision_at_10_max
value: 26.633347275996822
- type: nauc_precision_at_10_std
value: 31.766535462628333
- type: nauc_precision_at_1_diff1
value: 35.330827390243996
- type: nauc_precision_at_1_max
value: 26.56882708002626
- type: nauc_precision_at_1_std
value: 21.623824720391546
- type: nauc_precision_at_20_diff1
value: 13.710148345557894
- type: nauc_precision_at_20_max
value: 30.06641352798287
- type: nauc_precision_at_20_std
value: 37.51642649937503
- type: nauc_precision_at_3_diff1
value: 19.379905126167277
- type: nauc_precision_at_3_max
value: 29.474064921517996
- type: nauc_precision_at_3_std
value: 24.324769024438673
- type: nauc_precision_at_5_diff1
value: 14.983583546795229
- type: nauc_precision_at_5_max
value: 29.377923800204137
- type: nauc_precision_at_5_std
value: 28.792665620205433
- type: nauc_recall_at_1000_diff1
value: 9.420323994147108
- type: nauc_recall_at_1000_max
value: 1.716458858147155
- type: nauc_recall_at_1000_std
value: 42.675208969537806
- type: nauc_recall_at_100_diff1
value: 10.524089820623148
- type: nauc_recall_at_100_max
value: 4.847393922578022
- type: nauc_recall_at_100_std
value: 25.881256479477425
- type: nauc_recall_at_10_diff1
value: 10.405559854705523
- type: nauc_recall_at_10_max
value: -0.7229949712397538
- type: nauc_recall_at_10_std
value: 1.2453684953323285
- type: nauc_recall_at_1_diff1
value: 24.593368387483753
- type: nauc_recall_at_1_max
value: 4.589639725353537
- type: nauc_recall_at_1_std
value: -8.92237341364795
- type: nauc_recall_at_20_diff1
value: 9.153545675349667
- type: nauc_recall_at_20_max
value: 1.0523663509920702
- type: nauc_recall_at_20_std
value: 9.617722656364721
- type: nauc_recall_at_3_diff1
value: 11.453608857041628
- type: nauc_recall_at_3_max
value: 6.541125581241787
- type: nauc_recall_at_3_std
value: -6.374588849217941
- type: nauc_recall_at_5_diff1
value: 10.747977942968255
- type: nauc_recall_at_5_max
value: 3.2154611210290445
- type: nauc_recall_at_5_std
value: -1.2652013924076986
- type: ndcg_at_1
value: 24.25
- type: ndcg_at_10
value: 19.899
- type: ndcg_at_100
value: 23.204
- type: ndcg_at_1000
value: 29.658
- type: ndcg_at_20
value: 19.583000000000002
- type: ndcg_at_3
value: 21.335
- type: ndcg_at_5
value: 20.413999999999998
- type: precision_at_1
value: 34.75
- type: precision_at_10
value: 18.075
- type: precision_at_100
value: 5.897
- type: precision_at_1000
value: 1.22
- type: precision_at_20
value: 13.55
- type: precision_at_3
value: 26.833000000000002
- type: precision_at_5
value: 22.6
- type: recall_at_1
value: 3.465
- type: recall_at_10
value: 12.606
- type: recall_at_100
value: 29.843999999999998
- type: recall_at_1000
value: 52.242999999999995
- type: recall_at_20
value: 16.930999999999997
- type: recall_at_3
value: 6.425
- type: recall_at_5
value: 8.818
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 38.339999999999996
- type: f1
value: 34.598741976118816
- type: f1_weighted
value: 40.51989104726522
- type: main_score
value: 38.339999999999996
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 25.006
- type: map_at_1
value: 13.943
- type: map_at_10
value: 20.706
- type: map_at_100
value: 21.740000000000002
- type: map_at_1000
value: 21.822
- type: map_at_20
value: 21.267
- type: map_at_3
value: 18.35
- type: map_at_5
value: 19.636
- type: mrr_at_1
value: 14.79147914791479
- type: mrr_at_10
value: 21.939967806304423
- type: mrr_at_100
value: 22.991772526136195
- type: mrr_at_1000
value: 23.068306121221312
- type: mrr_at_20
value: 22.521146379622163
- type: mrr_at_3
value: 19.484448444844478
- type: mrr_at_5
value: 20.817331733173358
- type: nauc_map_at_1000_diff1
value: 19.35822964414219
- type: nauc_map_at_1000_max
value: 8.897124191699918
- type: nauc_map_at_1000_std
value: -14.004128494439424
- type: nauc_map_at_100_diff1
value: 19.34567869663468
- type: nauc_map_at_100_max
value: 8.8745190516295
- type: nauc_map_at_100_std
value: -14.025946762212236
- type: nauc_map_at_10_diff1
value: 19.478894508723158
- type: nauc_map_at_10_max
value: 8.614136366133858
- type: nauc_map_at_10_std
value: -14.636265322683597
- type: nauc_map_at_1_diff1
value: 23.688109743445253
- type: nauc_map_at_1_max
value: 10.721419669570178
- type: nauc_map_at_1_std
value: -17.00198995751755
- type: nauc_map_at_20_diff1
value: 19.40994853288039
- type: nauc_map_at_20_max
value: 8.788561538894676
- type: nauc_map_at_20_std
value: -14.287595480928521
- type: nauc_map_at_3_diff1
value: 20.019246737479236
- type: nauc_map_at_3_max
value: 8.530000749651693
- type: nauc_map_at_3_std
value: -16.31053852110094
- type: nauc_map_at_5_diff1
value: 19.574801722611753
- type: nauc_map_at_5_max
value: 8.431256040109632
- type: nauc_map_at_5_std
value: -15.42991927435635
- type: nauc_mrr_at_1000_diff1
value: 19.199456594864415
- type: nauc_mrr_at_1000_max
value: 9.053366261880821
- type: nauc_mrr_at_1000_std
value: -14.325311358790312
- type: nauc_mrr_at_100_diff1
value: 19.183968461336264
- type: nauc_mrr_at_100_max
value: 9.0406708211084
- type: nauc_mrr_at_100_std
value: -14.333168371749
- type: nauc_mrr_at_10_diff1
value: 19.286280952658004
- type: nauc_mrr_at_10_max
value: 8.786679451075301
- type: nauc_mrr_at_10_std
value: -14.85433165190137
- type: nauc_mrr_at_1_diff1
value: 23.372945217632637
- type: nauc_mrr_at_1_max
value: 10.757009456320713
- type: nauc_mrr_at_1_std
value: -17.37470573558239
- type: nauc_mrr_at_20_diff1
value: 19.204260097760162
- type: nauc_mrr_at_20_max
value: 8.967269936629057
- type: nauc_mrr_at_20_std
value: -14.556203577633491
- type: nauc_mrr_at_3_diff1
value: 19.802237510569196
- type: nauc_mrr_at_3_max
value: 8.660412322072549
- type: nauc_mrr_at_3_std
value: -16.483667365878983
- type: nauc_mrr_at_5_diff1
value: 19.417190218500963
- type: nauc_mrr_at_5_max
value: 8.592050482160923
- type: nauc_mrr_at_5_std
value: -15.666970940052721
- type: nauc_ndcg_at_1000_diff1
value: 17.770326257033936
- type: nauc_ndcg_at_1000_max
value: 9.986868282212038
- type: nauc_ndcg_at_1000_std
value: -9.378246687942493
- type: nauc_ndcg_at_100_diff1
value: 17.57851695979306
- type: nauc_ndcg_at_100_max
value: 9.516456101829059
- type: nauc_ndcg_at_100_std
value: -9.92852108588332
- type: nauc_ndcg_at_10_diff1
value: 18.211042534939516
- type: nauc_ndcg_at_10_max
value: 8.263500593038305
- type: nauc_ndcg_at_10_std
value: -12.860334730832001
- type: nauc_ndcg_at_1_diff1
value: 23.372945217632637
- type: nauc_ndcg_at_1_max
value: 10.757009456320713
- type: nauc_ndcg_at_1_std
value: -17.37470573558239
- type: nauc_ndcg_at_20_diff1
value: 17.910709608958474
- type: nauc_ndcg_at_20_max
value: 8.893940446709529
- type: nauc_ndcg_at_20_std
value: -11.689263799945813
- type: nauc_ndcg_at_3_diff1
value: 19.09880112910806
- type: nauc_ndcg_at_3_max
value: 8.023263463318175
- type: nauc_ndcg_at_3_std
value: -16.092374418892373
- type: nauc_ndcg_at_5_diff1
value: 18.42900402442049
- type: nauc_ndcg_at_5_max
value: 7.8858287226066235
- type: nauc_ndcg_at_5_std
value: -14.661280178399608
- type: nauc_precision_at_1000_diff1
value: 3.642347466781283
- type: nauc_precision_at_1000_max
value: 16.952404316587614
- type: nauc_precision_at_1000_std
value: 21.40131424089912
- type: nauc_precision_at_100_diff1
value: 9.750805732461842
- type: nauc_precision_at_100_max
value: 13.757879488937125
- type: nauc_precision_at_100_std
value: 8.039378982280406
- type: nauc_precision_at_10_diff1
value: 14.7918457440186
- type: nauc_precision_at_10_max
value: 8.123251440844076
- type: nauc_precision_at_10_std
value: -7.766522118292242
- type: nauc_precision_at_1_diff1
value: 23.372945217632637
- type: nauc_precision_at_1_max
value: 10.757009456320713
- type: nauc_precision_at_1_std
value: -17.37470573558239
- type: nauc_precision_at_20_diff1
value: 13.317651277911787
- type: nauc_precision_at_20_max
value: 10.204911801413331
- type: nauc_precision_at_20_std
value: -3.322012947463638
- type: nauc_precision_at_3_diff1
value: 16.938989829945534
- type: nauc_precision_at_3_max
value: 7.007727368306191
- type: nauc_precision_at_3_std
value: -15.264146253300096
- type: nauc_precision_at_5_diff1
value: 15.595830777905029
- type: nauc_precision_at_5_max
value: 6.87438645405223
- type: nauc_precision_at_5_std
value: -12.548740115098678
- type: nauc_recall_at_1000_diff1
value: 9.009543867034727
- type: nauc_recall_at_1000_max
value: 18.305044258577915
- type: nauc_recall_at_1000_std
value: 23.009148418514425
- type: nauc_recall_at_100_diff1
value: 11.15850015080056
- type: nauc_recall_at_100_max
value: 11.780408791390519
- type: nauc_recall_at_100_std
value: 6.246652097817795
- type: nauc_recall_at_10_diff1
value: 15.099829144415247
- type: nauc_recall_at_10_max
value: 7.075068492864811
- type: nauc_recall_at_10_std
value: -7.878092251138417
- type: nauc_recall_at_1_diff1
value: 23.688109743445253
- type: nauc_recall_at_1_max
value: 10.721419669570178
- type: nauc_recall_at_1_std
value: -17.00198995751755
- type: nauc_recall_at_20_diff1
value: 13.85704310580134
- type: nauc_recall_at_20_max
value: 9.007426388276338
- type: nauc_recall_at_20_std
value: -3.9997271157444843
- type: nauc_recall_at_3_diff1
value: 16.851129797737183
- type: nauc_recall_at_3_max
value: 6.616028659229676
- type: nauc_recall_at_3_std
value: -15.286301162412613
- type: nauc_recall_at_5_diff1
value: 15.671635716227339
- type: nauc_recall_at_5_max
value: 6.342388043913686
- type: nauc_recall_at_5_std
value: -12.39987752967968
- type: ndcg_at_1
value: 14.791000000000002
- type: ndcg_at_10
value: 25.006
- type: ndcg_at_100
value: 30.471999999999998
- type: ndcg_at_1000
value: 32.806000000000004
- type: ndcg_at_20
value: 27.058
- type: ndcg_at_3
value: 20.112
- type: ndcg_at_5
value: 22.413
- type: precision_at_1
value: 14.791000000000002
- type: precision_at_10
value: 4.055000000000001
- type: precision_at_100
value: 0.697
- type: precision_at_1000
value: 0.092
- type: precision_at_20
value: 2.465
- type: precision_at_3
value: 8.626000000000001
- type: precision_at_5
value: 6.382000000000001
- type: recall_at_1
value: 13.943
- type: recall_at_10
value: 37.397000000000006
- type: recall_at_100
value: 63.334999999999994
- type: recall_at_1000
value: 81.428
- type: recall_at_20
value: 45.358
- type: recall_at_3
value: 24.082
- type: recall_at_5
value: 29.563
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 11.167
- type: map_at_1
value: 5.055
- type: map_at_10
value: 7.974
- type: map_at_100
value: 8.738
- type: map_at_1000
value: 8.916
- type: map_at_20
value: 8.341
- type: map_at_3
value: 6.857
- type: map_at_5
value: 7.5009999999999994
- type: mrr_at_1
value: 10.030864197530864
- type: mrr_at_10
value: 14.756087105624141
- type: mrr_at_100
value: 15.562190249516133
- type: mrr_at_1000
value: 15.69044643307793
- type: mrr_at_20
value: 15.164252290155286
- type: mrr_at_3
value: 13.297325102880658
- type: mrr_at_5
value: 14.130658436213992
- type: nauc_map_at_1000_diff1
value: 21.581584639641356
- type: nauc_map_at_1000_max
value: -3.591350057991658
- type: nauc_map_at_1000_std
value: 2.2450733180258466
- type: nauc_map_at_100_diff1
value: 21.678068750484663
- type: nauc_map_at_100_max
value: -3.754793884673454
- type: nauc_map_at_100_std
value: 2.1134125512643034
- type: nauc_map_at_10_diff1
value: 22.267707890250872
- type: nauc_map_at_10_max
value: -4.109027667129512
- type: nauc_map_at_10_std
value: 1.7397026170215282
- type: nauc_map_at_1_diff1
value: 24.393602819317127
- type: nauc_map_at_1_max
value: -5.463161484041758
- type: nauc_map_at_1_std
value: 3.4527844717330898
- type: nauc_map_at_20_diff1
value: 22.16603827194384
- type: nauc_map_at_20_max
value: -3.829133240985351
- type: nauc_map_at_20_std
value: 2.273305218017184
- type: nauc_map_at_3_diff1
value: 25.550971234557217
- type: nauc_map_at_3_max
value: -5.912131631375139
- type: nauc_map_at_3_std
value: 2.6270431833752226
- type: nauc_map_at_5_diff1
value: 23.693227817850918
- type: nauc_map_at_5_max
value: -4.430117256044587
- type: nauc_map_at_5_std
value: 1.90476330618582
- type: nauc_mrr_at_1000_diff1
value: 18.407848757651383
- type: nauc_mrr_at_1000_max
value: 1.4692643101259266
- type: nauc_mrr_at_1000_std
value: -1.4737021198395484
- type: nauc_mrr_at_100_diff1
value: 18.373936364611946
- type: nauc_mrr_at_100_max
value: 1.4600491055347338
- type: nauc_mrr_at_100_std
value: -1.5315816773226647
- type: nauc_mrr_at_10_diff1
value: 18.812075225359994
- type: nauc_mrr_at_10_max
value: 1.1423422260007967
- type: nauc_mrr_at_10_std
value: -1.4331421942145333
- type: nauc_mrr_at_1_diff1
value: 21.042020105537055
- type: nauc_mrr_at_1_max
value: -1.8286330117738627
- type: nauc_mrr_at_1_std
value: 0.6107108684145417
- type: nauc_mrr_at_20_diff1
value: 18.67480478225173
- type: nauc_mrr_at_20_max
value: 1.262037517477333
- type: nauc_mrr_at_20_std
value: -1.3030974525400356
- type: nauc_mrr_at_3_diff1
value: 20.263359986054837
- type: nauc_mrr_at_3_max
value: -0.3775317483949404
- type: nauc_mrr_at_3_std
value: -1.365236958935102
- type: nauc_mrr_at_5_diff1
value: 19.555216165143772
- type: nauc_mrr_at_5_max
value: 0.364621169263337
- type: nauc_mrr_at_5_std
value: -1.0513020604553038
- type: nauc_ndcg_at_1000_diff1
value: 15.768274611971735
- type: nauc_ndcg_at_1000_max
value: 2.0520976478520327
- type: nauc_ndcg_at_1000_std
value: 2.877627036243521
- type: nauc_ndcg_at_100_diff1
value: 16.128663871942763
- type: nauc_ndcg_at_100_max
value: -0.34227560585178396
- type: nauc_ndcg_at_100_std
value: 0.8164780238765409
- type: nauc_ndcg_at_10_diff1
value: 19.282198569420846
- type: nauc_ndcg_at_10_max
value: -1.3250908207898342
- type: nauc_ndcg_at_10_std
value: 0.28825143098016265
- type: nauc_ndcg_at_1_diff1
value: 21.042020105537055
- type: nauc_ndcg_at_1_max
value: -1.8286330117738627
- type: nauc_ndcg_at_1_std
value: 0.6107108684145417
- type: nauc_ndcg_at_20_diff1
value: 19.028654575882847
- type: nauc_ndcg_at_20_max
value: -0.9325610304848784
- type: nauc_ndcg_at_20_std
value: 1.5749962746078057
- type: nauc_ndcg_at_3_diff1
value: 21.864688221213875
- type: nauc_ndcg_at_3_max
value: -2.6883486751081693
- type: nauc_ndcg_at_3_std
value: 0.17632918486246743
- type: nauc_ndcg_at_5_diff1
value: 21.280319590515656
- type: nauc_ndcg_at_5_max
value: -1.7628672417522795
- type: nauc_ndcg_at_5_std
value: 0.35504411508050127
- type: nauc_precision_at_1000_diff1
value: -5.134118935123325
- type: nauc_precision_at_1000_max
value: 22.854317653101646
- type: nauc_precision_at_1000_std
value: -5.519945670535999
- type: nauc_precision_at_100_diff1
value: 2.410623305126647
- type: nauc_precision_at_100_max
value: 11.323949150994391
- type: nauc_precision_at_100_std
value: -4.4400164174748395
- type: nauc_precision_at_10_diff1
value: 11.14562925123435
- type: nauc_precision_at_10_max
value: 6.701684471603129
- type: nauc_precision_at_10_std
value: -3.507090397196342
- type: nauc_precision_at_1_diff1
value: 21.042020105537055
- type: nauc_precision_at_1_max
value: -1.8286330117738627
- type: nauc_precision_at_1_std
value: 0.6107108684145417
- type: nauc_precision_at_20_diff1
value: 10.58098788224169
- type: nauc_precision_at_20_max
value: 7.5107799297769935
- type: nauc_precision_at_20_std
value: -1.5100106529478114
- type: nauc_precision_at_3_diff1
value: 19.795198818057667
- type: nauc_precision_at_3_max
value: 0.4713854827815967
- type: nauc_precision_at_3_std
value: -3.125924766538086
- type: nauc_precision_at_5_diff1
value: 16.907379789095696
- type: nauc_precision_at_5_max
value: 4.140243156305644
- type: nauc_precision_at_5_std
value: -1.8178346354290582
- type: nauc_recall_at_1000_diff1
value: 4.711761259530349
- type: nauc_recall_at_1000_max
value: 3.897303116005553
- type: nauc_recall_at_1000_std
value: 14.259168849028104
- type: nauc_recall_at_100_diff1
value: 4.811342813866857
- type: nauc_recall_at_100_max
value: -0.46422331209391143
- type: nauc_recall_at_100_std
value: 1.702190380676355
- type: nauc_recall_at_10_diff1
value: 14.112982578958079
- type: nauc_recall_at_10_max
value: -0.6934250965951679
- type: nauc_recall_at_10_std
value: -0.19882683954238423
- type: nauc_recall_at_1_diff1
value: 24.393602819317127
- type: nauc_recall_at_1_max
value: -5.463161484041758
- type: nauc_recall_at_1_std
value: 3.4527844717330898
- type: nauc_recall_at_20_diff1
value: 13.19557557901834
- type: nauc_recall_at_20_max
value: 0.1538644708778628
- type: nauc_recall_at_20_std
value: 3.0492797001932974
- type: nauc_recall_at_3_diff1
value: 24.182210704492558
- type: nauc_recall_at_3_max
value: -6.034324229051654
- type: nauc_recall_at_3_std
value: 2.8490090980023637
- type: nauc_recall_at_5_diff1
value: 19.011063131073744
- type: nauc_recall_at_5_max
value: -2.119359618883548
- type: nauc_recall_at_5_std
value: 0.8198903805407032
- type: ndcg_at_1
value: 10.030999999999999
- type: ndcg_at_10
value: 11.167
- type: ndcg_at_100
value: 15.409
- type: ndcg_at_1000
value: 19.947
- type: ndcg_at_20
value: 12.483
- type: ndcg_at_3
value: 9.532
- type: ndcg_at_5
value: 10.184
- type: precision_at_1
value: 10.030999999999999
- type: precision_at_10
value: 3.1329999999999996
- type: precision_at_100
value: 0.7270000000000001
- type: precision_at_1000
value: 0.15
- type: precision_at_20
value: 2.06
- type: precision_at_3
value: 6.481000000000001
- type: precision_at_5
value: 4.877
- type: recall_at_1
value: 5.055
- type: recall_at_10
value: 14.193
- type: recall_at_100
value: 31.47
- type: recall_at_1000
value: 60.007
- type: recall_at_20
value: 18.532
- type: recall_at_3
value: 8.863999999999999
- type: recall_at_5
value: 11.354000000000001
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 30.837999999999997
- type: map_at_1
value: 17.535
- type: map_at_10
value: 24.127000000000002
- type: map_at_100
value: 24.897
- type: map_at_1000
value: 24.991
- type: map_at_20
value: 24.537
- type: map_at_3
value: 22.314
- type: map_at_5
value: 23.369
- type: mrr_at_1
value: 35.07089804186361
- type: mrr_at_10
value: 41.84109835696607
- type: mrr_at_100
value: 42.50312939357189
- type: mrr_at_1000
value: 42.557192847100204
- type: mrr_at_20
value: 42.23392771922393
- type: mrr_at_3
value: 40.0540175557057
- type: mrr_at_5
value: 41.09723160027011
- type: nauc_map_at_1000_diff1
value: 53.405765033756104
- type: nauc_map_at_1000_max
value: 7.122736293690594
- type: nauc_map_at_1000_std
value: 25.154222353909706
- type: nauc_map_at_100_diff1
value: 53.424105025391235
- type: nauc_map_at_100_max
value: 7.127661247301736
- type: nauc_map_at_100_std
value: 25.080306702030054
- type: nauc_map_at_10_diff1
value: 53.83507469889932
- type: nauc_map_at_10_max
value: 7.239978390454264
- type: nauc_map_at_10_std
value: 24.216110502987867
- type: nauc_map_at_1_diff1
value: 64.45610830977103
- type: nauc_map_at_1_max
value: 10.831236114417758
- type: nauc_map_at_1_std
value: 18.282463736681766
- type: nauc_map_at_20_diff1
value: 53.50246555744542
- type: nauc_map_at_20_max
value: 7.1666672586766085
- type: nauc_map_at_20_std
value: 24.648695320801803
- type: nauc_map_at_3_diff1
value: 55.467529631560474
- type: nauc_map_at_3_max
value: 8.281275214726968
- type: nauc_map_at_3_std
value: 22.436972833181386
- type: nauc_map_at_5_diff1
value: 54.2596974292177
- type: nauc_map_at_5_max
value: 7.5791705198322585
- type: nauc_map_at_5_std
value: 23.272036332669295
- type: nauc_mrr_at_1000_diff1
value: 60.01986079158693
- type: nauc_mrr_at_1000_max
value: 9.046571417308733
- type: nauc_mrr_at_1000_std
value: 22.078576232724707
- type: nauc_mrr_at_100_diff1
value: 60.01145860886984
- type: nauc_mrr_at_100_max
value: 9.036448042324515
- type: nauc_mrr_at_100_std
value: 22.073613864801413
- type: nauc_mrr_at_10_diff1
value: 60.138490480821595
- type: nauc_mrr_at_10_max
value: 9.09851806151594
- type: nauc_mrr_at_10_std
value: 21.871816692853095
- type: nauc_mrr_at_1_diff1
value: 64.45610830977103
- type: nauc_mrr_at_1_max
value: 10.831236114417758
- type: nauc_mrr_at_1_std
value: 18.282463736681766
- type: nauc_mrr_at_20_diff1
value: 60.020756965348596
- type: nauc_mrr_at_20_max
value: 9.067384772615947
- type: nauc_mrr_at_20_std
value: 22.007284296200602
- type: nauc_mrr_at_3_diff1
value: 60.848848858927965
- type: nauc_mrr_at_3_max
value: 9.77819590832476
- type: nauc_mrr_at_3_std
value: 20.7857772481929
- type: nauc_mrr_at_5_diff1
value: 60.23023654313581
- type: nauc_mrr_at_5_max
value: 9.297697720996952
- type: nauc_mrr_at_5_std
value: 21.305246554366864
- type: nauc_ndcg_at_1000_diff1
value: 51.9050817941371
- type: nauc_ndcg_at_1000_max
value: 6.253060051785559
- type: nauc_ndcg_at_1000_std
value: 29.724428357103015
- type: nauc_ndcg_at_100_diff1
value: 52.197825295468256
- type: nauc_ndcg_at_100_max
value: 6.212784383093877
- type: nauc_ndcg_at_100_std
value: 28.65006820758606
- type: nauc_ndcg_at_10_diff1
value: 53.6117173506942
- type: nauc_ndcg_at_10_max
value: 6.6792682572264646
- type: nauc_ndcg_at_10_std
value: 25.56356291488488
- type: nauc_ndcg_at_1_diff1
value: 64.45610830977103
- type: nauc_ndcg_at_1_max
value: 10.831236114417758
- type: nauc_ndcg_at_1_std
value: 18.282463736681766
- type: nauc_ndcg_at_20_diff1
value: 52.725481130189465
- type: nauc_ndcg_at_20_max
value: 6.443880761918098
- type: nauc_ndcg_at_20_std
value: 26.623544659694815
- type: nauc_ndcg_at_3_diff1
value: 56.087927881432066
- type: nauc_ndcg_at_3_max
value: 8.38309550543212
- type: nauc_ndcg_at_3_std
value: 22.573762514655623
- type: nauc_ndcg_at_5_diff1
value: 54.351073912334144
- type: nauc_ndcg_at_5_max
value: 7.325834612406898
- type: nauc_ndcg_at_5_std
value: 23.7625099537027
- type: nauc_precision_at_1000_diff1
value: 24.555760070632065
- type: nauc_precision_at_1000_max
value: -0.030378364610462727
- type: nauc_precision_at_1000_std
value: 43.44197980424529
- type: nauc_precision_at_100_diff1
value: 31.89263750680818
- type: nauc_precision_at_100_max
value: 0.5967214311073074
- type: nauc_precision_at_100_std
value: 38.028330866223165
- type: nauc_precision_at_10_diff1
value: 42.72001946616996
- type: nauc_precision_at_10_max
value: 2.759405409849438
- type: nauc_precision_at_10_std
value: 29.948179807406504
- type: nauc_precision_at_1_diff1
value: 64.45610830977103
- type: nauc_precision_at_1_max
value: 10.831236114417758
- type: nauc_precision_at_1_std
value: 18.282463736681766
- type: nauc_precision_at_20_diff1
value: 38.77807631886789
- type: nauc_precision_at_20_max
value: 1.8720818516278552
- type: nauc_precision_at_20_std
value: 32.59464097769524
- type: nauc_precision_at_3_diff1
value: 50.84352281110305
- type: nauc_precision_at_3_max
value: 6.8098905022703455
- type: nauc_precision_at_3_std
value: 24.54656806570455
- type: nauc_precision_at_5_diff1
value: 46.09980845642094
- type: nauc_precision_at_5_max
value: 4.489864393832119
- type: nauc_precision_at_5_std
value: 26.34146412719015
- type: nauc_recall_at_1000_diff1
value: 24.55576007063215
- type: nauc_recall_at_1000_max
value: -0.030378364610333563
- type: nauc_recall_at_1000_std
value: 43.441979804245264
- type: nauc_recall_at_100_diff1
value: 31.892637506808146
- type: nauc_recall_at_100_max
value: 0.5967214311073054
- type: nauc_recall_at_100_std
value: 38.02833086622307
- type: nauc_recall_at_10_diff1
value: 42.72001946616998
- type: nauc_recall_at_10_max
value: 2.7594054098494403
- type: nauc_recall_at_10_std
value: 29.94817980740652
- type: nauc_recall_at_1_diff1
value: 64.45610830977103
- type: nauc_recall_at_1_max
value: 10.831236114417758
- type: nauc_recall_at_1_std
value: 18.282463736681766
- type: nauc_recall_at_20_diff1
value: 38.77807631886782
- type: nauc_recall_at_20_max
value: 1.872081851627872
- type: nauc_recall_at_20_std
value: 32.594640977695256
- type: nauc_recall_at_3_diff1
value: 50.843522811103036
- type: nauc_recall_at_3_max
value: 6.809890502270356
- type: nauc_recall_at_3_std
value: 24.546568065704555
- type: nauc_recall_at_5_diff1
value: 46.09980845642094
- type: nauc_recall_at_5_max
value: 4.48986439383211
- type: nauc_recall_at_5_std
value: 26.341464127190157
- type: ndcg_at_1
value: 35.071000000000005
- type: ndcg_at_10
value: 30.837999999999997
- type: ndcg_at_100
value: 34.473
- type: ndcg_at_1000
value: 36.788
- type: ndcg_at_20
value: 32.193
- type: ndcg_at_3
value: 27.412999999999997
- type: ndcg_at_5
value: 29.160999999999998
- type: precision_at_1
value: 35.071000000000005
- type: precision_at_10
value: 6.694999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.127
- type: precision_at_20
value: 3.785
- type: precision_at_3
value: 17.187
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 17.535
- type: recall_at_10
value: 33.477000000000004
- type: recall_at_100
value: 48.015
- type: recall_at_1000
value: 63.483999999999995
- type: recall_at_20
value: 37.846000000000004
- type: recall_at_3
value: 25.779999999999998
- type: recall_at_5
value: 29.250999999999998
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 66.5616
- type: ap
value: 61.38581579080602
- type: ap_weighted
value: 61.38581579080602
- type: f1
value: 66.15361405073979
- type: f1_weighted
value: 66.15361405073978
- type: main_score
value: 66.5616
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: test
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 28.034
- type: map_at_1
value: 0.66
- type: map_at_10
value: 4.3709999999999996
- type: map_at_100
value: 12.02
- type: map_at_1000
value: 15.081
- type: map_at_20
value: 6.718
- type: map_at_3
value: 1.7389999999999999
- type: map_at_5
value: 2.5919999999999996
- type: mrr_at_1
value: 41.86046511627907
- type: mrr_at_10
value: 54.15651531930602
- type: mrr_at_100
value: 54.68712248786739
- type: mrr_at_1000
value: 54.68712248786739
- type: mrr_at_20
value: 54.272794389073454
- type: mrr_at_3
value: 51.937984496124024
- type: mrr_at_5
value: 52.40310077519379
- type: nauc_map_at_1000_diff1
value: 8.067177552562086
- type: nauc_map_at_1000_max
value: 50.80997888655191
- type: nauc_map_at_1000_std
value: 55.48450092063327
- type: nauc_map_at_100_diff1
value: 11.852088152898117
- type: nauc_map_at_100_max
value: 48.192262801076275
- type: nauc_map_at_100_std
value: 46.99716861803027
- type: nauc_map_at_10_diff1
value: 12.440097979884552
- type: nauc_map_at_10_max
value: 29.873253516213786
- type: nauc_map_at_10_std
value: 30.42960299808594
- type: nauc_map_at_1_diff1
value: 34.552395254431445
- type: nauc_map_at_1_max
value: 38.69572501766299
- type: nauc_map_at_1_std
value: 23.493916737503017
- type: nauc_map_at_20_diff1
value: 13.785974512045621
- type: nauc_map_at_20_max
value: 34.54060954861762
- type: nauc_map_at_20_std
value: 36.78361062739522
- type: nauc_map_at_3_diff1
value: 25.396598443628488
- type: nauc_map_at_3_max
value: 40.38715214284343
- type: nauc_map_at_3_std
value: 25.366480567034372
- type: nauc_map_at_5_diff1
value: 21.758905499107037
- type: nauc_map_at_5_max
value: 35.664518863717646
- type: nauc_map_at_5_std
value: 27.149202253810024
- type: nauc_mrr_at_1000_diff1
value: 17.603886573367394
- type: nauc_mrr_at_1000_max
value: 58.66874119428572
- type: nauc_mrr_at_1000_std
value: 42.279175325006555
- type: nauc_mrr_at_100_diff1
value: 17.603886573367394
- type: nauc_mrr_at_100_max
value: 58.66874119428572
- type: nauc_mrr_at_100_std
value: 42.279175325006555
- type: nauc_mrr_at_10_diff1
value: 17.323803643197643
- type: nauc_mrr_at_10_max
value: 58.762972566248315
- type: nauc_mrr_at_10_std
value: 42.56956515834332
- type: nauc_mrr_at_1_diff1
value: 27.861672627434668
- type: nauc_mrr_at_1_max
value: 62.257123563504756
- type: nauc_mrr_at_1_std
value: 44.379176486800986
- type: nauc_mrr_at_20_diff1
value: 17.44644565955209
- type: nauc_mrr_at_20_max
value: 58.58190663195971
- type: nauc_mrr_at_20_std
value: 42.33627290946193
- type: nauc_mrr_at_3_diff1
value: 17.262663278109798
- type: nauc_mrr_at_3_max
value: 56.454793834736094
- type: nauc_mrr_at_3_std
value: 41.08451346276091
- type: nauc_mrr_at_5_diff1
value: 16.613650570034434
- type: nauc_mrr_at_5_max
value: 55.66285623344173
- type: nauc_mrr_at_5_std
value: 40.38311275408144
- type: nauc_ndcg_at_1000_diff1
value: 10.174068866047635
- type: nauc_ndcg_at_1000_max
value: 51.73192889106936
- type: nauc_ndcg_at_1000_std
value: 59.65401111712334
- type: nauc_ndcg_at_100_diff1
value: 7.828653579924433
- type: nauc_ndcg_at_100_max
value: 54.36206806281852
- type: nauc_ndcg_at_100_std
value: 44.08756682730974
- type: nauc_ndcg_at_10_diff1
value: 3.1020204706672807
- type: nauc_ndcg_at_10_max
value: 49.25209127878138
- type: nauc_ndcg_at_10_std
value: 39.03800796651823
- type: nauc_ndcg_at_1_diff1
value: 31.384674368521292
- type: nauc_ndcg_at_1_max
value: 46.68691593258891
- type: nauc_ndcg_at_1_std
value: 23.497422044367447
- type: nauc_ndcg_at_20_diff1
value: 2.1223938698830445
- type: nauc_ndcg_at_20_max
value: 52.82778912003725
- type: nauc_ndcg_at_20_std
value: 40.85957147213028
- type: nauc_ndcg_at_3_diff1
value: 15.620541244360142
- type: nauc_ndcg_at_3_max
value: 53.11313758866487
- type: nauc_ndcg_at_3_std
value: 30.214636563641196
- type: nauc_ndcg_at_5_diff1
value: 11.094092047013888
- type: nauc_ndcg_at_5_max
value: 50.15717166769855
- type: nauc_ndcg_at_5_std
value: 32.63549193285381
- type: nauc_precision_at_1000_diff1
value: -18.87788252321529
- type: nauc_precision_at_1000_max
value: 47.752842936932964
- type: nauc_precision_at_1000_std
value: 46.53172081645067
- type: nauc_precision_at_100_diff1
value: -11.675608943686981
- type: nauc_precision_at_100_max
value: 57.37789290450161
- type: nauc_precision_at_100_std
value: 45.99043825302317
- type: nauc_precision_at_10_diff1
value: -5.316480906785367
- type: nauc_precision_at_10_max
value: 50.9022661670284
- type: nauc_precision_at_10_std
value: 41.249198804648444
- type: nauc_precision_at_1_diff1
value: 27.861672627434668
- type: nauc_precision_at_1_max
value: 62.257123563504756
- type: nauc_precision_at_1_std
value: 44.379176486800986
- type: nauc_precision_at_20_diff1
value: -4.546893782120849
- type: nauc_precision_at_20_max
value: 54.59631672833982
- type: nauc_precision_at_20_std
value: 42.784497023294186
- type: nauc_precision_at_3_diff1
value: 9.61605571022061
- type: nauc_precision_at_3_max
value: 58.49382945748053
- type: nauc_precision_at_3_std
value: 36.589164698407316
- type: nauc_precision_at_5_diff1
value: 4.337255192132767
- type: nauc_precision_at_5_max
value: 51.9951147484678
- type: nauc_precision_at_5_std
value: 34.468467294436486
- type: nauc_recall_at_1000_diff1
value: 12.99503296673786
- type: nauc_recall_at_1000_max
value: 40.71962531328987
- type: nauc_recall_at_1000_std
value: 61.64030151991186
- type: nauc_recall_at_100_diff1
value: 10.859337421704575
- type: nauc_recall_at_100_max
value: 38.842397587549044
- type: nauc_recall_at_100_std
value: 44.123802055364514
- type: nauc_recall_at_10_diff1
value: 5.054631656084283
- type: nauc_recall_at_10_max
value: 16.616637058750165
- type: nauc_recall_at_10_std
value: 23.85056756316223
- type: nauc_recall_at_1_diff1
value: 34.552395254431445
- type: nauc_recall_at_1_max
value: 38.69572501766299
- type: nauc_recall_at_1_std
value: 23.493916737503017
- type: nauc_recall_at_20_diff1
value: 11.266581564744333
- type: nauc_recall_at_20_max
value: 20.205268245387963
- type: nauc_recall_at_20_std
value: 25.000674179475464
- type: nauc_recall_at_3_diff1
value: 23.716522929925635
- type: nauc_recall_at_3_max
value: 33.675409791018915
- type: nauc_recall_at_3_std
value: 23.659590089606255
- type: nauc_recall_at_5_diff1
value: 13.826629690116377
- type: nauc_recall_at_5_max
value: 21.450396058089545
- type: nauc_recall_at_5_std
value: 21.053365906790678
- type: ndcg_at_1
value: 27.907
- type: ndcg_at_10
value: 28.034
- type: ndcg_at_100
value: 28.166000000000004
- type: ndcg_at_1000
value: 36.361
- type: ndcg_at_20
value: 28.047
- type: ndcg_at_3
value: 28.388999999999996
- type: ndcg_at_5
value: 28.307
- type: precision_at_1
value: 41.86
- type: precision_at_10
value: 37.208999999999996
- type: precision_at_100
value: 18.093
- type: precision_at_1000
value: 3.995
- type: precision_at_20
value: 33.372
- type: precision_at_3
value: 42.636
- type: precision_at_5
value: 40.0
- type: recall_at_1
value: 0.66
- type: recall_at_10
value: 6.287
- type: recall_at_100
value: 24.134
- type: recall_at_1000
value: 48.431999999999995
- type: recall_at_20
value: 10.897
- type: recall_at_3
value: 2.138
- type: recall_at_5
value: 3.3770000000000002
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 84.81988144094848
- type: f1
value: 84.06333895718355
- type: f1_weighted
value: 84.95181538630469
- type: main_score
value: 84.81988144094848
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 62.41222070223438
- type: f1
value: 46.156097858146175
- type: f1_weighted
value: 66.23266420473301
- type: main_score
value: 62.41222070223438
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 62.50168123739073
- type: f1
value: 60.72805496384179
- type: f1_weighted
value: 62.787680759907204
- type: main_score
value: 62.50168123739073
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 66.09280430396772
- type: f1
value: 65.36448769357172
- type: f1_weighted
value: 66.15203456480924
- type: main_score
value: 66.09280430396772
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 26.932942933622616
- type: v_measure
value: 26.932942933622616
- type: v_measure_std
value: 1.593124055965666
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 22.9594415386389
- type: v_measure
value: 22.9594415386389
- type: v_measure_std
value: 1.2719806552652395
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 28.527234738258063
- type: map
value: 28.527234738258063
- type: mrr
value: 29.001137590751057
- type: nAUC_map_diff1
value: 17.894640005397015
- type: nAUC_map_max
value: -32.33772009018379
- type: nAUC_map_std
value: -13.932018270818118
- type: nAUC_mrr_diff1
value: 16.6645956799536
- type: nAUC_mrr_max
value: -26.591327847291947
- type: nAUC_mrr_std
value: -11.52072949105865
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 23.318
- type: map_at_1
value: 3.9739999999999998
- type: map_at_10
value: 7.636
- type: map_at_100
value: 9.565999999999999
- type: map_at_1000
value: 10.731
- type: map_at_20
value: 8.389000000000001
- type: map_at_3
value: 5.836
- type: map_at_5
value: 6.6339999999999995
- type: mrr_at_1
value: 31.57894736842105
- type: mrr_at_10
value: 41.40436876504987
- type: mrr_at_100
value: 42.171381521810616
- type: mrr_at_1000
value: 42.21952740910268
- type: mrr_at_20
value: 41.75160733542153
- type: mrr_at_3
value: 38.544891640866865
- type: mrr_at_5
value: 40.495356037151694
- type: nauc_map_at_1000_diff1
value: 36.856779722587405
- type: nauc_map_at_1000_max
value: 1.0732856849015824
- type: nauc_map_at_1000_std
value: 9.651983758926798
- type: nauc_map_at_100_diff1
value: 37.7388774830525
- type: nauc_map_at_100_max
value: 0.5350831297890865
- type: nauc_map_at_100_std
value: 5.572219889903966
- type: nauc_map_at_10_diff1
value: 41.10439950831827
- type: nauc_map_at_10_max
value: -1.9365518645162703
- type: nauc_map_at_10_std
value: -0.14823142437775177
- type: nauc_map_at_1_diff1
value: 45.5844553027814
- type: nauc_map_at_1_max
value: -8.272551322248038
- type: nauc_map_at_1_std
value: -5.988582518897944
- type: nauc_map_at_20_diff1
value: 38.99926603388708
- type: nauc_map_at_20_max
value: -0.8765984795564569
- type: nauc_map_at_20_std
value: 1.8427808317285952
- type: nauc_map_at_3_diff1
value: 44.541009820342296
- type: nauc_map_at_3_max
value: -5.314865046137034
- type: nauc_map_at_3_std
value: -4.401240111896542
- type: nauc_map_at_5_diff1
value: 43.93142627220787
- type: nauc_map_at_5_max
value: -4.452186699937273
- type: nauc_map_at_5_std
value: -1.926768039888005
- type: nauc_mrr_at_1000_diff1
value: 31.753283629515227
- type: nauc_mrr_at_1000_max
value: 9.689948388217696
- type: nauc_mrr_at_1000_std
value: 22.70267321039036
- type: nauc_mrr_at_100_diff1
value: 31.729775359589773
- type: nauc_mrr_at_100_max
value: 9.729637548794349
- type: nauc_mrr_at_100_std
value: 22.680656825829267
- type: nauc_mrr_at_10_diff1
value: 31.725910736285666
- type: nauc_mrr_at_10_max
value: 9.676299619743284
- type: nauc_mrr_at_10_std
value: 22.987975982720496
- type: nauc_mrr_at_1_diff1
value: 33.222931085618626
- type: nauc_mrr_at_1_max
value: 3.484453564278958
- type: nauc_mrr_at_1_std
value: 14.566253883401012
- type: nauc_mrr_at_20_diff1
value: 31.70316773246007
- type: nauc_mrr_at_20_max
value: 9.857726052213023
- type: nauc_mrr_at_20_std
value: 22.691706596582133
- type: nauc_mrr_at_3_diff1
value: 33.123605268114545
- type: nauc_mrr_at_3_max
value: 7.595554226164336
- type: nauc_mrr_at_3_std
value: 22.833951307229185
- type: nauc_mrr_at_5_diff1
value: 32.33356989096538
- type: nauc_mrr_at_5_max
value: 8.78887950599465
- type: nauc_mrr_at_5_std
value: 23.75577044154664
- type: nauc_ndcg_at_1000_diff1
value: 29.06381153030341
- type: nauc_ndcg_at_1000_max
value: 12.496787837448844
- type: nauc_ndcg_at_1000_std
value: 21.957810402478064
- type: nauc_ndcg_at_100_diff1
value: 30.705847017840128
- type: nauc_ndcg_at_100_max
value: 7.14809714223451
- type: nauc_ndcg_at_100_std
value: 17.218742555337656
- type: nauc_ndcg_at_10_diff1
value: 28.03996243029464
- type: nauc_ndcg_at_10_max
value: 4.699374701730214
- type: nauc_ndcg_at_10_std
value: 24.227816808454218
- type: nauc_ndcg_at_1_diff1
value: 33.51847942809358
- type: nauc_ndcg_at_1_max
value: -0.15139755316818274
- type: nauc_ndcg_at_1_std
value: 17.16967561523347
- type: nauc_ndcg_at_20_diff1
value: 28.20952557682163
- type: nauc_ndcg_at_20_max
value: 4.145398659710493
- type: nauc_ndcg_at_20_std
value: 22.993088607717066
- type: nauc_ndcg_at_3_diff1
value: 27.613082038987592
- type: nauc_ndcg_at_3_max
value: 1.4593269064387369
- type: nauc_ndcg_at_3_std
value: 23.50820643331994
- type: nauc_ndcg_at_5_diff1
value: 28.240414065564686
- type: nauc_ndcg_at_5_max
value: 3.5129825777351504
- type: nauc_ndcg_at_5_std
value: 25.518429908335165
- type: nauc_precision_at_1000_diff1
value: 3.744031922083433
- type: nauc_precision_at_1000_max
value: -0.5091331293991512
- type: nauc_precision_at_1000_std
value: 44.81402869309276
- type: nauc_precision_at_100_diff1
value: 6.830797386827996
- type: nauc_precision_at_100_max
value: 4.0810548509653755
- type: nauc_precision_at_100_std
value: 42.7474662572479
- type: nauc_precision_at_10_diff1
value: 12.394335511926892
- type: nauc_precision_at_10_max
value: 10.49971612535947
- type: nauc_precision_at_10_std
value: 34.03347850666832
- type: nauc_precision_at_1_diff1
value: 33.222931085618626
- type: nauc_precision_at_1_max
value: 3.484453564278958
- type: nauc_precision_at_1_std
value: 14.566253883401012
- type: nauc_precision_at_20_diff1
value: 9.64344422081397
- type: nauc_precision_at_20_max
value: 6.621958244946981
- type: nauc_precision_at_20_std
value: 37.86581516903579
- type: nauc_precision_at_3_diff1
value: 20.278708738039267
- type: nauc_precision_at_3_max
value: 7.392289389157268
- type: nauc_precision_at_3_std
value: 27.036426818980896
- type: nauc_precision_at_5_diff1
value: 18.449282750023514
- type: nauc_precision_at_5_max
value: 9.979980772916283
- type: nauc_precision_at_5_std
value: 33.01802732071948
- type: nauc_recall_at_1000_diff1
value: 16.342561945689592
- type: nauc_recall_at_1000_max
value: 5.937671266428497
- type: nauc_recall_at_1000_std
value: 10.42918010425554
- type: nauc_recall_at_100_diff1
value: 19.13895811746396
- type: nauc_recall_at_100_max
value: 3.153899391811738
- type: nauc_recall_at_100_std
value: 1.04689826072118
- type: nauc_recall_at_10_diff1
value: 30.635745816653586
- type: nauc_recall_at_10_max
value: 1.5673249988390006
- type: nauc_recall_at_10_std
value: -3.6633108112395276
- type: nauc_recall_at_1_diff1
value: 45.5844553027814
- type: nauc_recall_at_1_max
value: -8.272551322248038
- type: nauc_recall_at_1_std
value: -5.988582518897944
- type: nauc_recall_at_20_diff1
value: 24.449469640898666
- type: nauc_recall_at_20_max
value: 3.6319822015373404
- type: nauc_recall_at_20_std
value: -3.460880541269202
- type: nauc_recall_at_3_diff1
value: 40.57120118352399
- type: nauc_recall_at_3_max
value: -6.4276251434173135
- type: nauc_recall_at_3_std
value: -5.987479062691147
- type: nauc_recall_at_5_diff1
value: 36.21768314516704
- type: nauc_recall_at_5_max
value: -4.847092890211095
- type: nauc_recall_at_5_std
value: -3.0514943484880144
- type: ndcg_at_1
value: 29.876
- type: ndcg_at_10
value: 23.318
- type: ndcg_at_100
value: 22.178
- type: ndcg_at_1000
value: 31.543
- type: ndcg_at_20
value: 21.718
- type: ndcg_at_3
value: 26.625
- type: ndcg_at_5
value: 25.412000000000003
- type: precision_at_1
value: 31.579
- type: precision_at_10
value: 17.244999999999997
- type: precision_at_100
value: 5.82
- type: precision_at_1000
value: 1.857
- type: precision_at_20
value: 12.709000000000001
- type: precision_at_3
value: 24.974
- type: precision_at_5
value: 21.981
- type: recall_at_1
value: 3.9739999999999998
- type: recall_at_10
value: 11.433
- type: recall_at_100
value: 24.861
- type: recall_at_1000
value: 57.75900000000001
- type: recall_at_20
value: 14.167
- type: recall_at_3
value: 6.773999999999999
- type: recall_at_5
value: 8.713
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 17.682000000000002
- type: map_at_1
value: 7.968999999999999
- type: map_at_10
value: 13.828
- type: map_at_100
value: 14.881
- type: map_at_1000
value: 14.979999999999999
- type: map_at_20
value: 14.421999999999999
- type: map_at_3
value: 11.681999999999999
- type: map_at_5
value: 12.837000000000002
- type: mrr_at_1
value: 9.096176129779836
- type: mrr_at_10
value: 15.333772462248707
- type: mrr_at_100
value: 16.309634922879194
- type: mrr_at_1000
value: 16.39475249150789
- type: mrr_at_20
value: 15.891392914358688
- type: mrr_at_3
value: 13.064889918887577
- type: mrr_at_5
value: 14.311993047508642
- type: nauc_map_at_1000_diff1
value: 19.775928600522615
- type: nauc_map_at_1000_max
value: 6.286282728873767
- type: nauc_map_at_1000_std
value: 10.433091988799701
- type: nauc_map_at_100_diff1
value: 19.76472010726201
- type: nauc_map_at_100_max
value: 6.3000520043276245
- type: nauc_map_at_100_std
value: 10.369742430725108
- type: nauc_map_at_10_diff1
value: 19.717104003612306
- type: nauc_map_at_10_max
value: 5.9416407746652915
- type: nauc_map_at_10_std
value: 9.269462518525886
- type: nauc_map_at_1_diff1
value: 22.577309259900126
- type: nauc_map_at_1_max
value: 4.4722142164380605
- type: nauc_map_at_1_std
value: 3.7899645702785345
- type: nauc_map_at_20_diff1
value: 19.71861462412693
- type: nauc_map_at_20_max
value: 6.104405666589615
- type: nauc_map_at_20_std
value: 9.774250304834347
- type: nauc_map_at_3_diff1
value: 20.745180167104174
- type: nauc_map_at_3_max
value: 4.726336508000744
- type: nauc_map_at_3_std
value: 7.012706580698335
- type: nauc_map_at_5_diff1
value: 20.401667911889596
- type: nauc_map_at_5_max
value: 5.021580992513943
- type: nauc_map_at_5_std
value: 8.232301301005908
- type: nauc_mrr_at_1000_diff1
value: 19.876105574468276
- type: nauc_mrr_at_1000_max
value: 5.92950987632599
- type: nauc_mrr_at_1000_std
value: 10.422385358307675
- type: nauc_mrr_at_100_diff1
value: 19.864601593092164
- type: nauc_mrr_at_100_max
value: 5.937364432461887
- type: nauc_mrr_at_100_std
value: 10.372545373358479
- type: nauc_mrr_at_10_diff1
value: 19.8074129108612
- type: nauc_mrr_at_10_max
value: 5.583608572112338
- type: nauc_mrr_at_10_std
value: 9.660933453553797
- type: nauc_mrr_at_1_diff1
value: 22.771833118893053
- type: nauc_mrr_at_1_max
value: 4.270593166778219
- type: nauc_mrr_at_1_std
value: 4.72067370933128
- type: nauc_mrr_at_20_diff1
value: 19.816299723557
- type: nauc_mrr_at_20_max
value: 5.803282270363233
- type: nauc_mrr_at_20_std
value: 9.982388740482714
- type: nauc_mrr_at_3_diff1
value: 20.764352672106014
- type: nauc_mrr_at_3_max
value: 4.308188794966225
- type: nauc_mrr_at_3_std
value: 7.424575450681196
- type: nauc_mrr_at_5_diff1
value: 20.468124439169884
- type: nauc_mrr_at_5_max
value: 4.717164145352797
- type: nauc_mrr_at_5_std
value: 8.75784949698527
- type: nauc_ndcg_at_1000_diff1
value: 18.988627444499162
- type: nauc_ndcg_at_1000_max
value: 8.336437983015612
- type: nauc_ndcg_at_1000_std
value: 17.785235937443314
- type: nauc_ndcg_at_100_diff1
value: 18.72435211905066
- type: nauc_ndcg_at_100_max
value: 8.509559844610813
- type: nauc_ndcg_at_100_std
value: 16.272027197158785
- type: nauc_ndcg_at_10_diff1
value: 18.50083720860625
- type: nauc_ndcg_at_10_max
value: 6.816989264362351
- type: nauc_ndcg_at_10_std
value: 11.70379688056292
- type: nauc_ndcg_at_1_diff1
value: 23.028151500845926
- type: nauc_ndcg_at_1_max
value: 4.252790790979486
- type: nauc_ndcg_at_1_std
value: 4.919320655470863
- type: nauc_ndcg_at_20_diff1
value: 18.61317480699593
- type: nauc_ndcg_at_20_max
value: 7.400038137531198
- type: nauc_ndcg_at_20_std
value: 12.975329660907905
- type: nauc_ndcg_at_3_diff1
value: 20.331305466487297
- type: nauc_ndcg_at_3_max
value: 4.451813547010051
- type: nauc_ndcg_at_3_std
value: 7.835866814473613
- type: nauc_ndcg_at_5_diff1
value: 19.933475062151903
- type: nauc_ndcg_at_5_max
value: 5.0523614629035
- type: nauc_ndcg_at_5_std
value: 9.763459907678518
- type: nauc_precision_at_1000_diff1
value: 10.24793761705778
- type: nauc_precision_at_1000_max
value: 10.459646580367272
- type: nauc_precision_at_1000_std
value: 35.19560755022326
- type: nauc_precision_at_100_diff1
value: 14.032733274764734
- type: nauc_precision_at_100_max
value: 12.582877921585014
- type: nauc_precision_at_100_std
value: 30.56446230218432
- type: nauc_precision_at_10_diff1
value: 15.46863641183508
- type: nauc_precision_at_10_max
value: 8.026206096826051
- type: nauc_precision_at_10_std
value: 17.580067448009732
- type: nauc_precision_at_1_diff1
value: 23.028151500845926
- type: nauc_precision_at_1_max
value: 4.252790790979486
- type: nauc_precision_at_1_std
value: 4.919320655470863
- type: nauc_precision_at_20_diff1
value: 15.577209585349616
- type: nauc_precision_at_20_max
value: 9.37176988371138
- type: nauc_precision_at_20_std
value: 20.825242862847972
- type: nauc_precision_at_3_diff1
value: 19.697434012748303
- type: nauc_precision_at_3_max
value: 3.817741628018302
- type: nauc_precision_at_3_std
value: 9.855204198464552
- type: nauc_precision_at_5_diff1
value: 18.757352510786994
- type: nauc_precision_at_5_max
value: 4.78932962761337
- type: nauc_precision_at_5_std
value: 13.485110478478058
- type: nauc_recall_at_1000_diff1
value: 16.784291464246394
- type: nauc_recall_at_1000_max
value: 15.357886220356304
- type: nauc_recall_at_1000_std
value: 47.3266711354422
- type: nauc_recall_at_100_diff1
value: 15.651366556591528
- type: nauc_recall_at_100_max
value: 14.108369717831499
- type: nauc_recall_at_100_std
value: 30.26307437972032
- type: nauc_recall_at_10_diff1
value: 15.332913342892315
- type: nauc_recall_at_10_max
value: 8.769293510819189
- type: nauc_recall_at_10_std
value: 15.625436932641975
- type: nauc_recall_at_1_diff1
value: 22.577309259900126
- type: nauc_recall_at_1_max
value: 4.4722142164380605
- type: nauc_recall_at_1_std
value: 3.7899645702785345
- type: nauc_recall_at_20_diff1
value: 15.760837708226655
- type: nauc_recall_at_20_max
value: 10.11729976512556
- type: nauc_recall_at_20_std
value: 18.300935029131725
- type: nauc_recall_at_3_diff1
value: 19.039476605698372
- type: nauc_recall_at_3_max
value: 4.107922037298003
- type: nauc_recall_at_3_std
value: 9.115412171303978
- type: nauc_recall_at_5_diff1
value: 18.363415603635758
- type: nauc_recall_at_5_max
value: 5.241253574533175
- type: nauc_recall_at_5_std
value: 12.124948884672802
- type: ndcg_at_1
value: 9.067
- type: ndcg_at_10
value: 17.682000000000002
- type: ndcg_at_100
value: 22.982
- type: ndcg_at_1000
value: 25.692999999999998
- type: ndcg_at_20
value: 19.747
- type: ndcg_at_3
value: 13.219
- type: ndcg_at_5
value: 15.312999999999999
- type: precision_at_1
value: 9.067
- type: precision_at_10
value: 3.3000000000000003
- type: precision_at_100
value: 0.631
- type: precision_at_1000
value: 0.089
- type: precision_at_20
value: 2.136
- type: precision_at_3
value: 6.228
- type: precision_at_5
value: 4.925
- type: recall_at_1
value: 7.968999999999999
- type: recall_at_10
value: 28.208
- type: recall_at_100
value: 52.776
- type: recall_at_1000
value: 73.571
- type: recall_at_20
value: 35.941
- type: recall_at_3
value: 16.338
- type: recall_at_5
value: 21.217
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 74.323
- type: map_at_1
value: 57.30800000000001
- type: map_at_10
value: 69.32000000000001
- type: map_at_100
value: 70.106
- type: map_at_1000
value: 70.149
- type: map_at_20
value: 69.807
- type: map_at_3
value: 66.418
- type: map_at_5
value: 68.184
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 73.97885714285673
- type: mrr_at_100
value: 74.29274218615109
- type: mrr_at_1000
value: 74.3051429938558
- type: mrr_at_20
value: 74.18544015014858
- type: mrr_at_3
value: 72.26666666666631
- type: mrr_at_5
value: 73.37966666666605
- type: nauc_map_at_1000_diff1
value: 69.18960163699573
- type: nauc_map_at_1000_max
value: 37.38136640005
- type: nauc_map_at_1000_std
value: -2.570923100785111
- type: nauc_map_at_100_diff1
value: 69.18751629878942
- type: nauc_map_at_100_max
value: 37.36952143443813
- type: nauc_map_at_100_std
value: -2.5886077139396027
- type: nauc_map_at_10_diff1
value: 69.09406013156409
- type: nauc_map_at_10_max
value: 36.877436974500775
- type: nauc_map_at_10_std
value: -3.3540620889292203
- type: nauc_map_at_1_diff1
value: 70.93951368121674
- type: nauc_map_at_1_max
value: 32.233487451612305
- type: nauc_map_at_1_std
value: -7.055750788201864
- type: nauc_map_at_20_diff1
value: 69.14097261555858
- type: nauc_map_at_20_max
value: 37.18308654380657
- type: nauc_map_at_20_std
value: -2.912685185426714
- type: nauc_map_at_3_diff1
value: 69.01140661964882
- type: nauc_map_at_3_max
value: 35.56708493366717
- type: nauc_map_at_3_std
value: -5.47958763916843
- type: nauc_map_at_5_diff1
value: 68.97841901572657
- type: nauc_map_at_5_max
value: 36.356674331191265
- type: nauc_map_at_5_std
value: -4.271166648670905
- type: nauc_mrr_at_1000_diff1
value: 70.61597700848178
- type: nauc_mrr_at_1000_max
value: 40.41208966087904
- type: nauc_mrr_at_1000_std
value: -0.15890737609620642
- type: nauc_mrr_at_100_diff1
value: 70.61360632996228
- type: nauc_mrr_at_100_max
value: 40.41568433400612
- type: nauc_mrr_at_100_std
value: -0.1448505595676874
- type: nauc_mrr_at_10_diff1
value: 70.5233993892019
- type: nauc_mrr_at_10_max
value: 40.36230785474746
- type: nauc_mrr_at_10_std
value: -0.22757815568658987
- type: nauc_mrr_at_1_diff1
value: 72.6747651764081
- type: nauc_mrr_at_1_max
value: 40.02178963789037
- type: nauc_mrr_at_1_std
value: -2.575126954097418
- type: nauc_mrr_at_20_diff1
value: 70.58326373490296
- type: nauc_mrr_at_20_max
value: 40.41333734338905
- type: nauc_mrr_at_20_std
value: -0.1345473571856357
- type: nauc_mrr_at_3_diff1
value: 70.37817581234762
- type: nauc_mrr_at_3_max
value: 40.203366387087705
- type: nauc_mrr_at_3_std
value: -1.2261489082901087
- type: nauc_mrr_at_5_diff1
value: 70.45626657672184
- type: nauc_mrr_at_5_max
value: 40.3234615411654
- type: nauc_mrr_at_5_std
value: -0.3805672716488398
- type: nauc_ndcg_at_1000_diff1
value: 69.21984468258341
- type: nauc_ndcg_at_1000_max
value: 39.0253925541956
- type: nauc_ndcg_at_1000_std
value: 0.8160264523775477
- type: nauc_ndcg_at_100_diff1
value: 69.15328478391302
- type: nauc_ndcg_at_100_max
value: 38.96655324359319
- type: nauc_ndcg_at_100_std
value: 1.1256651981311283
- type: nauc_ndcg_at_10_diff1
value: 68.53510190998198
- type: nauc_ndcg_at_10_max
value: 37.91208417950795
- type: nauc_ndcg_at_10_std
value: -0.7377655073302805
- type: nauc_ndcg_at_1_diff1
value: 72.63228601131651
- type: nauc_ndcg_at_1_max
value: 40.16828628757125
- type: nauc_ndcg_at_1_std
value: -2.528909627178983
- type: nauc_ndcg_at_20_diff1
value: 68.822583729052
- type: nauc_ndcg_at_20_max
value: 38.41592366520079
- type: nauc_ndcg_at_20_std
value: 0.06798311113755548
- type: nauc_ndcg_at_3_diff1
value: 68.1481692592636
- type: nauc_ndcg_at_3_max
value: 37.31206796055115
- type: nauc_ndcg_at_3_std
value: -3.254883595992796
- type: nauc_ndcg_at_5_diff1
value: 68.24715917081343
- type: nauc_ndcg_at_5_max
value: 37.56264948769021
- type: nauc_ndcg_at_5_std
value: -1.8709773297999994
- type: nauc_precision_at_1000_diff1
value: -27.810948267157137
- type: nauc_precision_at_1000_max
value: -0.24668486328059996
- type: nauc_precision_at_1000_std
value: 20.580820056804715
- type: nauc_precision_at_100_diff1
value: -22.061161829256797
- type: nauc_precision_at_100_max
value: 4.679165403717356
- type: nauc_precision_at_100_std
value: 21.989059211475855
- type: nauc_precision_at_10_diff1
value: -3.9320543024872556
- type: nauc_precision_at_10_max
value: 14.010070678201766
- type: nauc_precision_at_10_std
value: 16.669492507338155
- type: nauc_precision_at_1_diff1
value: 72.63228601131651
- type: nauc_precision_at_1_max
value: 40.16828628757125
- type: nauc_precision_at_1_std
value: -2.528909627178983
- type: nauc_precision_at_20_diff1
value: -12.164765481707331
- type: nauc_precision_at_20_max
value: 10.511899418907312
- type: nauc_precision_at_20_std
value: 19.320026937145183
- type: nauc_precision_at_3_diff1
value: 22.621554858906986
- type: nauc_precision_at_3_max
value: 24.326914902507287
- type: nauc_precision_at_3_std
value: 6.099411862597304
- type: nauc_precision_at_5_diff1
value: 8.981227790660293
- type: nauc_precision_at_5_max
value: 19.916827592062745
- type: nauc_precision_at_5_std
value: 11.93677912655441
- type: nauc_recall_at_1000_diff1
value: 60.79128240819883
- type: nauc_recall_at_1000_max
value: 44.80906309211301
- type: nauc_recall_at_1000_std
value: 56.54768589270181
- type: nauc_recall_at_100_diff1
value: 61.18835279218082
- type: nauc_recall_at_100_max
value: 39.61329094249297
- type: nauc_recall_at_100_std
value: 31.736658564346342
- type: nauc_recall_at_10_diff1
value: 61.3639032751697
- type: nauc_recall_at_10_max
value: 34.510711243051375
- type: nauc_recall_at_10_std
value: 4.855117542870995
- type: nauc_recall_at_1_diff1
value: 70.93951368121674
- type: nauc_recall_at_1_max
value: 32.233487451612305
- type: nauc_recall_at_1_std
value: -7.055750788201864
- type: nauc_recall_at_20_diff1
value: 61.27124485304799
- type: nauc_recall_at_20_max
value: 36.11805010411244
- type: nauc_recall_at_20_std
value: 11.38763207684191
- type: nauc_recall_at_3_diff1
value: 63.91101210841338
- type: nauc_recall_at_3_max
value: 33.23862328274836
- type: nauc_recall_at_3_std
value: -4.857791490570391
- type: nauc_recall_at_5_diff1
value: 62.37552817951354
- type: nauc_recall_at_5_max
value: 33.86753069930419
- type: nauc_recall_at_5_std
value: -0.4857746420435554
- type: ndcg_at_1
value: 66.02
- type: ndcg_at_10
value: 74.323
- type: ndcg_at_100
value: 76.806
- type: ndcg_at_1000
value: 77.436
- type: ndcg_at_20
value: 75.47500000000001
- type: ndcg_at_3
value: 70.44500000000001
- type: ndcg_at_5
value: 72.48
- type: precision_at_1
value: 66.02
- type: precision_at_10
value: 11.273
- type: precision_at_100
value: 1.373
- type: precision_at_1000
value: 0.149
- type: precision_at_20
value: 6.101
- type: precision_at_3
value: 30.5
- type: precision_at_5
value: 20.31
- type: recall_at_1
value: 57.30800000000001
- type: recall_at_10
value: 84.152
- type: recall_at_100
value: 93.989
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_20
value: 88.138
- type: recall_at_3
value: 73.137
- type: recall_at_5
value: 78.655
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 28.89014544508522
- type: v_measure
value: 28.89014544508522
- type: v_measure_std
value: 4.477854992673074
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 41.588064041506414
- type: v_measure
value: 41.588064041506414
- type: v_measure_std
value: 12.234957713539355
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 9.923
- type: map_at_1
value: 2.15
- type: map_at_10
value: 5.379
- type: map_at_100
value: 6.487
- type: map_at_1000
value: 6.726999999999999
- type: map_at_20
value: 5.845000000000001
- type: map_at_3
value: 3.943
- type: map_at_5
value: 4.642
- type: mrr_at_1
value: 10.6
- type: mrr_at_10
value: 17.65234126984126
- type: mrr_at_100
value: 18.72231260720679
- type: mrr_at_1000
value: 18.83457574677834
- type: mrr_at_20
value: 18.178004510968904
- type: mrr_at_3
value: 14.96666666666667
- type: mrr_at_5
value: 16.426666666666666
- type: nauc_map_at_1000_diff1
value: 11.904585832905996
- type: nauc_map_at_1000_max
value: 13.966912689458244
- type: nauc_map_at_1000_std
value: 14.274562318051975
- type: nauc_map_at_100_diff1
value: 11.914962635425084
- type: nauc_map_at_100_max
value: 13.792005445505046
- type: nauc_map_at_100_std
value: 13.688572560422358
- type: nauc_map_at_10_diff1
value: 12.924485348386265
- type: nauc_map_at_10_max
value: 12.924904365030008
- type: nauc_map_at_10_std
value: 11.028226417787405
- type: nauc_map_at_1_diff1
value: 17.278503151293908
- type: nauc_map_at_1_max
value: 7.878679954463645
- type: nauc_map_at_1_std
value: 5.787632681875146
- type: nauc_map_at_20_diff1
value: 12.361611976516448
- type: nauc_map_at_20_max
value: 13.430602876791497
- type: nauc_map_at_20_std
value: 11.626342360129135
- type: nauc_map_at_3_diff1
value: 13.25103680109857
- type: nauc_map_at_3_max
value: 11.851782553996365
- type: nauc_map_at_3_std
value: 7.429469629304992
- type: nauc_map_at_5_diff1
value: 13.800025735259355
- type: nauc_map_at_5_max
value: 12.565449305066048
- type: nauc_map_at_5_std
value: 9.75302950224773
- type: nauc_mrr_at_1000_diff1
value: 12.268595456055587
- type: nauc_mrr_at_1000_max
value: 9.25353359860505
- type: nauc_mrr_at_1000_std
value: 9.108487924061626
- type: nauc_mrr_at_100_diff1
value: 12.221030310338321
- type: nauc_mrr_at_100_max
value: 9.25521408834954
- type: nauc_mrr_at_100_std
value: 9.138330201368367
- type: nauc_mrr_at_10_diff1
value: 12.574921954053705
- type: nauc_mrr_at_10_max
value: 9.022771164246922
- type: nauc_mrr_at_10_std
value: 8.72904050693386
- type: nauc_mrr_at_1_diff1
value: 17.46158729503331
- type: nauc_mrr_at_1_max
value: 7.638928315208697
- type: nauc_mrr_at_1_std
value: 6.095710473752395
- type: nauc_mrr_at_20_diff1
value: 12.138920051010647
- type: nauc_mrr_at_20_max
value: 9.276258507402064
- type: nauc_mrr_at_20_std
value: 8.886687014526801
- type: nauc_mrr_at_3_diff1
value: 14.193338999133834
- type: nauc_mrr_at_3_max
value: 8.299120353947483
- type: nauc_mrr_at_3_std
value: 7.8035097667232005
- type: nauc_mrr_at_5_diff1
value: 13.111703855187907
- type: nauc_mrr_at_5_max
value: 9.120679964295672
- type: nauc_mrr_at_5_std
value: 8.32132668626495
- type: nauc_ndcg_at_1000_diff1
value: 8.86999972791066
- type: nauc_ndcg_at_1000_max
value: 15.310859480575436
- type: nauc_ndcg_at_1000_std
value: 21.250542726021116
- type: nauc_ndcg_at_100_diff1
value: 8.721788996698756
- type: nauc_ndcg_at_100_max
value: 13.753927264089416
- type: nauc_ndcg_at_100_std
value: 17.83014109593192
- type: nauc_ndcg_at_10_diff1
value: 10.851214040795984
- type: nauc_ndcg_at_10_max
value: 11.754038261909226
- type: nauc_ndcg_at_10_std
value: 11.732493442071242
- type: nauc_ndcg_at_1_diff1
value: 17.46158729503331
- type: nauc_ndcg_at_1_max
value: 7.638928315208697
- type: nauc_ndcg_at_1_std
value: 6.095710473752395
- type: nauc_ndcg_at_20_diff1
value: 9.76180043441647
- type: nauc_ndcg_at_20_max
value: 12.820709997321758
- type: nauc_ndcg_at_20_std
value: 12.721916889128632
- type: nauc_ndcg_at_3_diff1
value: 12.839313795789275
- type: nauc_ndcg_at_3_max
value: 10.610706825785767
- type: nauc_ndcg_at_3_std
value: 8.204558555180421
- type: nauc_ndcg_at_5_diff1
value: 12.406813811698386
- type: nauc_ndcg_at_5_max
value: 11.878799458897053
- type: nauc_ndcg_at_5_std
value: 10.186784386212949
- type: nauc_precision_at_1000_diff1
value: 2.8398170540614176
- type: nauc_precision_at_1000_max
value: 16.99931587707156
- type: nauc_precision_at_1000_std
value: 31.86724716316765
- type: nauc_precision_at_100_diff1
value: 3.4160417262207297
- type: nauc_precision_at_100_max
value: 14.437629378775577
- type: nauc_precision_at_100_std
value: 24.60677482735814
- type: nauc_precision_at_10_diff1
value: 7.433603751797789
- type: nauc_precision_at_10_max
value: 12.127707014834115
- type: nauc_precision_at_10_std
value: 14.347141705378737
- type: nauc_precision_at_1_diff1
value: 17.46158729503331
- type: nauc_precision_at_1_max
value: 7.638928315208697
- type: nauc_precision_at_1_std
value: 6.095710473752395
- type: nauc_precision_at_20_diff1
value: 5.555321803900292
- type: nauc_precision_at_20_max
value: 13.975730968140612
- type: nauc_precision_at_20_std
value: 15.701599582613069
- type: nauc_precision_at_3_diff1
value: 10.570021043882896
- type: nauc_precision_at_3_max
value: 11.640698048065092
- type: nauc_precision_at_3_std
value: 8.880832670930209
- type: nauc_precision_at_5_diff1
value: 10.192070602011636
- type: nauc_precision_at_5_max
value: 12.979688593338693
- type: nauc_precision_at_5_std
value: 12.116013499683467
- type: nauc_recall_at_1000_diff1
value: 2.883533640208864
- type: nauc_recall_at_1000_max
value: 18.09724738913881
- type: nauc_recall_at_1000_std
value: 32.15747757955521
- type: nauc_recall_at_100_diff1
value: 3.6040687535563998
- type: nauc_recall_at_100_max
value: 14.732664182141772
- type: nauc_recall_at_100_std
value: 24.427986607748
- type: nauc_recall_at_10_diff1
value: 7.587316953732061
- type: nauc_recall_at_10_max
value: 12.334929718954289
- type: nauc_recall_at_10_std
value: 14.094286673978088
- type: nauc_recall_at_1_diff1
value: 17.278503151293908
- type: nauc_recall_at_1_max
value: 7.878679954463645
- type: nauc_recall_at_1_std
value: 5.787632681875146
- type: nauc_recall_at_20_diff1
value: 5.706170516654628
- type: nauc_recall_at_20_max
value: 14.095625029855203
- type: nauc_recall_at_20_std
value: 15.241931131705527
- type: nauc_recall_at_3_diff1
value: 10.574961375800127
- type: nauc_recall_at_3_max
value: 11.733105660119586
- type: nauc_recall_at_3_std
value: 8.540340847563677
- type: nauc_recall_at_5_diff1
value: 10.158076693596577
- type: nauc_recall_at_5_max
value: 13.152816873926534
- type: nauc_recall_at_5_std
value: 11.843127888328391
- type: ndcg_at_1
value: 10.6
- type: ndcg_at_10
value: 9.923
- type: ndcg_at_100
value: 15.463
- type: ndcg_at_1000
value: 20.673
- type: ndcg_at_20
value: 11.468
- type: ndcg_at_3
value: 9.120000000000001
- type: ndcg_at_5
value: 8.08
- type: precision_at_1
value: 10.6
- type: precision_at_10
value: 5.319999999999999
- type: precision_at_100
value: 1.357
- type: precision_at_1000
value: 0.262
- type: precision_at_20
value: 3.56
- type: precision_at_3
value: 8.733
- type: precision_at_5
value: 7.3
- type: recall_at_1
value: 2.15
- type: recall_at_10
value: 10.745000000000001
- type: recall_at_100
value: 27.478
- type: recall_at_1000
value: 53.067
- type: recall_at_20
value: 14.432
- type: recall_at_3
value: 5.295
- type: recall_at_5
value: 7.37
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 75.0950047498747
- type: cosine_spearman
value: 66.17240782538595
- type: euclidean_pearson
value: 67.00770252295281
- type: euclidean_spearman
value: 60.910363132843514
- type: main_score
value: 66.17240782538595
- type: manhattan_pearson
value: 67.05219198532856
- type: manhattan_spearman
value: 61.09670227979067
- type: pearson
value: 75.0950047498747
- type: spearman
value: 66.17240782538595
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 70.27191745166907
- type: cosine_spearman
value: 61.89139464648924
- type: euclidean_pearson
value: 54.34524146536028
- type: euclidean_spearman
value: 50.72726514543895
- type: main_score
value: 61.89139464648924
- type: manhattan_pearson
value: 54.0517351204108
- type: manhattan_spearman
value: 50.62237885284486
- type: pearson
value: 70.27191745166907
- type: spearman
value: 61.89139464648924
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 70.19582039979868
- type: cosine_spearman
value: 71.66792475528088
- type: euclidean_pearson
value: 55.582203822685486
- type: euclidean_spearman
value: 56.20322977297382
- type: main_score
value: 71.66792475528088
- type: manhattan_pearson
value: 55.95799094895162
- type: manhattan_spearman
value: 56.588522991206325
- type: pearson
value: 70.19582039979868
- type: spearman
value: 71.66792475528088
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 69.52140108419252
- type: cosine_spearman
value: 67.82634222687376
- type: euclidean_pearson
value: 56.45640217254015
- type: euclidean_spearman
value: 56.232462674683994
- type: main_score
value: 67.82634222687376
- type: manhattan_pearson
value: 56.71095067060834
- type: manhattan_spearman
value: 56.419654300835596
- type: pearson
value: 69.52140108419252
- type: spearman
value: 67.82634222687376
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 73.66221619412464
- type: cosine_spearman
value: 75.48765072240437
- type: euclidean_pearson
value: 56.971989853952046
- type: euclidean_spearman
value: 59.57242983168428
- type: main_score
value: 75.48765072240437
- type: manhattan_pearson
value: 57.292670731862025
- type: manhattan_spearman
value: 59.64547291104911
- type: pearson
value: 73.66221619412464
- type: spearman
value: 75.48765072240437
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 62.328630460915925
- type: cosine_spearman
value: 66.48155706668948
- type: euclidean_pearson
value: 48.85087938485013
- type: euclidean_spearman
value: 51.58756922385477
- type: main_score
value: 66.48155706668948
- type: manhattan_pearson
value: 49.02650798849104
- type: manhattan_spearman
value: 51.597849334470936
- type: pearson
value: 62.328630460915925
- type: spearman
value: 66.48155706668948
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 21.344883409729785
- type: cosine_spearman
value: 19.492480027372526
- type: euclidean_pearson
value: -8.605176891549817
- type: euclidean_spearman
value: -7.528098935541785
- type: main_score
value: 19.492480027372526
- type: manhattan_pearson
value: -10.120526712428015
- type: manhattan_spearman
value: -8.968202174485103
- type: pearson
value: 21.344883409729785
- type: spearman
value: 19.492480027372526
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 14.966581838953037
- type: cosine_spearman
value: 13.24509138766898
- type: euclidean_pearson
value: -6.690226814122847
- type: euclidean_spearman
value: -11.282875560023765
- type: main_score
value: 13.24509138766898
- type: manhattan_pearson
value: -7.476797502897139
- type: manhattan_spearman
value: -11.92841312081328
- type: pearson
value: 14.966581838953037
- type: spearman
value: 13.24509138766898
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 18.309414985775234
- type: cosine_spearman
value: 14.341489363671842
- type: euclidean_pearson
value: -12.122888971186411
- type: euclidean_spearman
value: -16.469354911796607
- type: main_score
value: 14.341489363671842
- type: manhattan_pearson
value: -10.903411096507561
- type: manhattan_spearman
value: -13.076094357191614
- type: pearson
value: 18.309414985775234
- type: spearman
value: 14.341489363671842
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 21.301586456013037
- type: cosine_spearman
value: 22.571419522164376
- type: euclidean_pearson
value: -6.367176828477704
- type: euclidean_spearman
value: -9.877915052256634
- type: main_score
value: 22.571419522164376
- type: manhattan_pearson
value: -4.676449796672262
- type: manhattan_spearman
value: -7.3330561255268805
- type: pearson
value: 21.301586456013037
- type: spearman
value: 22.571419522164376
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 16.140292893693204
- type: cosine_spearman
value: 10.216376215477217
- type: euclidean_pearson
value: -15.27866395332899
- type: euclidean_spearman
value: -14.09405330374556
- type: main_score
value: 10.216376215477217
- type: manhattan_pearson
value: -14.968016143069224
- type: manhattan_spearman
value: -12.871979788571364
- type: pearson
value: 16.140292893693204
- type: spearman
value: 10.216376215477217
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 78.42242639560023
- type: cosine_spearman
value: 80.2472005970173
- type: euclidean_pearson
value: 66.28797094299918
- type: euclidean_spearman
value: 67.13581863643712
- type: main_score
value: 80.2472005970173
- type: manhattan_pearson
value: 66.02431023839748
- type: manhattan_spearman
value: 67.15538442088678
- type: pearson
value: 78.42242639560023
- type: spearman
value: 80.2472005970173
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: -5.762967943082491
- type: cosine_spearman
value: -6.184248227377756
- type: euclidean_pearson
value: -12.170911062337659
- type: euclidean_spearman
value: -9.846378276134612
- type: main_score
value: -6.184248227377756
- type: manhattan_pearson
value: -13.126030597269658
- type: manhattan_spearman
value: -11.320163726484019
- type: pearson
value: -5.762967943082491
- type: spearman
value: -6.184248227377756
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: -8.666319610669559
- type: cosine_spearman
value: -10.0877070299522
- type: euclidean_pearson
value: -21.16722886445997
- type: euclidean_spearman
value: -25.725365743898504
- type: main_score
value: -10.0877070299522
- type: manhattan_pearson
value: -22.03289222804741
- type: manhattan_spearman
value: -26.785390252425533
- type: pearson
value: -8.666319610669559
- type: spearman
value: -10.0877070299522
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 16.880423266497427
- type: cosine_spearman
value: 18.497107178067477
- type: euclidean_pearson
value: 14.33062698609246
- type: euclidean_spearman
value: 16.623349996837863
- type: main_score
value: 18.497107178067477
- type: manhattan_pearson
value: 21.024602299309286
- type: manhattan_spearman
value: 24.281840448539402
- type: pearson
value: 16.880423266497427
- type: spearman
value: 18.497107178067477
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 44.98861387948161
- type: cosine_spearman
value: 59.04270974068145
- type: euclidean_pearson
value: 49.574894395857484
- type: euclidean_spearman
value: 58.827686687567805
- type: main_score
value: 59.04270974068145
- type: manhattan_pearson
value: 48.65094961023066
- type: manhattan_spearman
value: 58.3204048215355
- type: pearson
value: 44.98861387948161
- type: spearman
value: 59.04270974068145
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 26.505168004689462
- type: cosine_spearman
value: 28.591720613248732
- type: euclidean_pearson
value: 24.74526273753091
- type: euclidean_spearman
value: 28.416241187559642
- type: main_score
value: 28.591720613248732
- type: manhattan_pearson
value: 23.527990703124505
- type: manhattan_spearman
value: 33.434031878984136
- type: pearson
value: 26.505168004689462
- type: spearman
value: 28.591720613248732
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 11.552622364692777
- type: cosine_spearman
value: 10.973019756392695
- type: euclidean_pearson
value: 2.373117729670719
- type: euclidean_spearman
value: 1.961823192174414
- type: main_score
value: 10.973019756392695
- type: manhattan_pearson
value: 2.4552310228655108
- type: manhattan_spearman
value: 2.9778196586898273
- type: pearson
value: 11.552622364692777
- type: spearman
value: 10.973019756392695
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 10.466988163502029
- type: cosine_spearman
value: -0.21879166839686814
- type: euclidean_pearson
value: 22.096342233944544
- type: euclidean_spearman
value: 3.010990103175947
- type: main_score
value: -0.21879166839686814
- type: manhattan_pearson
value: 27.847325418935775
- type: manhattan_spearman
value: 4.74569547403683
- type: pearson
value: 10.466988163502029
- type: spearman
value: -0.21879166839686814
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 66.80057012864974
- type: cosine_spearman
value: 66.52235871936412
- type: euclidean_pearson
value: 55.372109895942536
- type: euclidean_spearman
value: 56.04078716357898
- type: main_score
value: 66.52235871936412
- type: manhattan_pearson
value: 55.58797025494765
- type: manhattan_spearman
value: 56.179959581772266
- type: pearson
value: 66.80057012864974
- type: spearman
value: 66.52235871936412
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 71.11074203128574
- type: map
value: 71.11074203128574
- type: mrr
value: 89.77809499868323
- type: nAUC_map_diff1
value: 11.228330835325687
- type: nAUC_map_max
value: 54.45812469406701
- type: nAUC_map_std
value: 63.051723849534525
- type: nAUC_mrr_diff1
value: 47.94323704040123
- type: nAUC_mrr_max
value: 72.52180244204617
- type: nAUC_mrr_std
value: 64.6185657337566
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 50.663000000000004
- type: map_at_1
value: 34.9
- type: map_at_10
value: 45.591
- type: map_at_100
value: 46.478
- type: map_at_1000
value: 46.544000000000004
- type: map_at_20
value: 45.999
- type: map_at_3
value: 43.354
- type: map_at_5
value: 44.733000000000004
- type: mrr_at_1
value: 37.0
- type: mrr_at_10
value: 47.36547619047619
- type: mrr_at_100
value: 48.09705728333796
- type: mrr_at_1000
value: 48.152949244883104
- type: mrr_at_20
value: 47.69512736718619
- type: mrr_at_3
value: 45.388888888888886
- type: mrr_at_5
value: 46.605555555555554
- type: nauc_map_at_1000_diff1
value: 52.100145151741394
- type: nauc_map_at_1000_max
value: 27.410237212009648
- type: nauc_map_at_1000_std
value: 2.9904718168509814
- type: nauc_map_at_100_diff1
value: 52.078009501467115
- type: nauc_map_at_100_max
value: 27.388902536377337
- type: nauc_map_at_100_std
value: 2.9956426758632553
- type: nauc_map_at_10_diff1
value: 52.22446655004901
- type: nauc_map_at_10_max
value: 27.537880755428052
- type: nauc_map_at_10_std
value: 2.5329635707923672
- type: nauc_map_at_1_diff1
value: 56.87947977552147
- type: nauc_map_at_1_max
value: 26.992163127256497
- type: nauc_map_at_1_std
value: -0.9440039327267877
- type: nauc_map_at_20_diff1
value: 52.106371246476826
- type: nauc_map_at_20_max
value: 27.32862929056924
- type: nauc_map_at_20_std
value: 2.7349113689801996
- type: nauc_map_at_3_diff1
value: 53.35317860724047
- type: nauc_map_at_3_max
value: 26.25510463708658
- type: nauc_map_at_3_std
value: 2.289593280073433
- type: nauc_map_at_5_diff1
value: 51.678047431193974
- type: nauc_map_at_5_max
value: 27.418395689002818
- type: nauc_map_at_5_std
value: 2.1245361198440267
- type: nauc_mrr_at_1000_diff1
value: 49.98301669091194
- type: nauc_mrr_at_1000_max
value: 29.333209267321198
- type: nauc_mrr_at_1000_std
value: 5.252782451549811
- type: nauc_mrr_at_100_diff1
value: 49.967980336744034
- type: nauc_mrr_at_100_max
value: 29.331397088810657
- type: nauc_mrr_at_100_std
value: 5.261178047875302
- type: nauc_mrr_at_10_diff1
value: 50.02865512004594
- type: nauc_mrr_at_10_max
value: 29.665247088988096
- type: nauc_mrr_at_10_std
value: 5.105677188444364
- type: nauc_mrr_at_1_diff1
value: 55.219664224743944
- type: nauc_mrr_at_1_max
value: 29.369235255966586
- type: nauc_mrr_at_1_std
value: 1.294523738013475
- type: nauc_mrr_at_20_diff1
value: 49.98301552378738
- type: nauc_mrr_at_20_max
value: 29.388470718856922
- type: nauc_mrr_at_20_std
value: 5.178678395201041
- type: nauc_mrr_at_3_diff1
value: 51.00229122885918
- type: nauc_mrr_at_3_max
value: 28.064602643242907
- type: nauc_mrr_at_3_std
value: 4.744718855685464
- type: nauc_mrr_at_5_diff1
value: 49.20787956974137
- type: nauc_mrr_at_5_max
value: 29.663856377950655
- type: nauc_mrr_at_5_std
value: 4.889452630825029
- type: nauc_ndcg_at_1000_diff1
value: 50.26524611758448
- type: nauc_ndcg_at_1000_max
value: 28.816092638532105
- type: nauc_ndcg_at_1000_std
value: 5.777693934805941
- type: nauc_ndcg_at_100_diff1
value: 49.810321964883876
- type: nauc_ndcg_at_100_max
value: 28.85200497094049
- type: nauc_ndcg_at_100_std
value: 6.4161665223690445
- type: nauc_ndcg_at_10_diff1
value: 50.31987402674788
- type: nauc_ndcg_at_10_max
value: 29.1957589259604
- type: nauc_ndcg_at_10_std
value: 4.249172262339034
- type: nauc_ndcg_at_1_diff1
value: 55.219664224743944
- type: nauc_ndcg_at_1_max
value: 29.369235255966586
- type: nauc_ndcg_at_1_std
value: 1.294523738013475
- type: nauc_ndcg_at_20_diff1
value: 49.95117201846568
- type: nauc_ndcg_at_20_max
value: 28.252381258706883
- type: nauc_ndcg_at_20_std
value: 4.799900939787535
- type: nauc_ndcg_at_3_diff1
value: 51.81554260088138
- type: nauc_ndcg_at_3_max
value: 27.121304990834222
- type: nauc_ndcg_at_3_std
value: 3.720528057690934
- type: nauc_ndcg_at_5_diff1
value: 48.77973374919412
- type: nauc_ndcg_at_5_max
value: 29.131535344710002
- type: nauc_ndcg_at_5_std
value: 3.565095958368389
- type: nauc_precision_at_1000_diff1
value: -7.462742973759457
- type: nauc_precision_at_1000_max
value: 21.45790554414784
- type: nauc_precision_at_1000_std
value: 24.38429850971904
- type: nauc_precision_at_100_diff1
value: 10.210409634704046
- type: nauc_precision_at_100_max
value: 27.700772933352024
- type: nauc_precision_at_100_std
value: 27.80962272064547
- type: nauc_precision_at_10_diff1
value: 34.576585797430766
- type: nauc_precision_at_10_max
value: 33.364848337655786
- type: nauc_precision_at_10_std
value: 14.448906660652794
- type: nauc_precision_at_1_diff1
value: 55.219664224743944
- type: nauc_precision_at_1_max
value: 29.369235255966586
- type: nauc_precision_at_1_std
value: 1.294523738013475
- type: nauc_precision_at_20_diff1
value: 28.759871255957847
- type: nauc_precision_at_20_max
value: 28.756353659179982
- type: nauc_precision_at_20_std
value: 17.539177234113616
- type: nauc_precision_at_3_diff1
value: 44.99876896761731
- type: nauc_precision_at_3_max
value: 28.597098219106442
- type: nauc_precision_at_3_std
value: 9.21762492818973
- type: nauc_precision_at_5_diff1
value: 34.186850914452485
- type: nauc_precision_at_5_max
value: 33.954540973558686
- type: nauc_precision_at_5_std
value: 10.546528423678431
- type: nauc_recall_at_1000_diff1
value: 23.83001981280335
- type: nauc_recall_at_1000_max
value: 43.846644348796225
- type: nauc_recall_at_1000_std
value: 60.408553665368835
- type: nauc_recall_at_100_diff1
value: 38.4746907480832
- type: nauc_recall_at_100_max
value: 33.882306484150135
- type: nauc_recall_at_100_std
value: 27.750836673176565
- type: nauc_recall_at_10_diff1
value: 44.98978983013661
- type: nauc_recall_at_10_max
value: 31.241708340662296
- type: nauc_recall_at_10_std
value: 6.026684637828198
- type: nauc_recall_at_1_diff1
value: 56.87947977552147
- type: nauc_recall_at_1_max
value: 26.992163127256497
- type: nauc_recall_at_1_std
value: -0.9440039327267877
- type: nauc_recall_at_20_diff1
value: 43.253384002784074
- type: nauc_recall_at_20_max
value: 26.89815696422301
- type: nauc_recall_at_20_std
value: 8.446980210355042
- type: nauc_recall_at_3_diff1
value: 48.89792955260931
- type: nauc_recall_at_3_max
value: 26.765492965973237
- type: nauc_recall_at_3_std
value: 5.600856860068723
- type: nauc_recall_at_5_diff1
value: 40.79334879234603
- type: nauc_recall_at_5_max
value: 31.676509416439163
- type: nauc_recall_at_5_std
value: 4.7055724522242
- type: ndcg_at_1
value: 37.0
- type: ndcg_at_10
value: 50.663000000000004
- type: ndcg_at_100
value: 55.022999999999996
- type: ndcg_at_1000
value: 56.643
- type: ndcg_at_20
value: 52.001
- type: ndcg_at_3
value: 46.424
- type: ndcg_at_5
value: 48.653999999999996
- type: precision_at_1
value: 37.0
- type: precision_at_10
value: 7.133000000000001
- type: precision_at_100
value: 0.9570000000000001
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 3.8670000000000004
- type: precision_at_3
value: 19.0
- type: precision_at_5
value: 12.733
- type: recall_at_1
value: 34.9
- type: recall_at_10
value: 64.372
- type: recall_at_100
value: 84.806
- type: recall_at_1000
value: 97.26700000000001
- type: recall_at_20
value: 69.428
- type: recall_at_3
value: 52.983000000000004
- type: recall_at_5
value: 58.428000000000004
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.6029702970297
- type: cosine_accuracy_threshold
value: 78.96339297294617
- type: cosine_ap
value: 85.09945680365945
- type: cosine_f1
value: 79.00249376558605
- type: cosine_f1_threshold
value: 77.54697799682617
- type: cosine_precision
value: 78.80597014925374
- type: cosine_recall
value: 79.2
- type: dot_accuracy
value: 99.07128712871287
- type: dot_accuracy_threshold
value: 113537.78076171875
- type: dot_ap
value: 32.974014883183614
- type: dot_f1
value: 38.70665417057169
- type: dot_f1_threshold
value: 82395.60546875
- type: dot_precision
value: 36.41975308641975
- type: dot_recall
value: 41.3
- type: euclidean_accuracy
value: 99.35742574257425
- type: euclidean_accuracy_threshold
value: 1716.6461944580078
- type: euclidean_ap
value: 60.79241641393818
- type: euclidean_f1
value: 61.254199328107504
- type: euclidean_f1_threshold
value: 1787.368392944336
- type: euclidean_precision
value: 69.59287531806616
- type: euclidean_recall
value: 54.7
- type: main_score
value: 85.09945680365945
- type: manhattan_accuracy
value: 99.35544554455446
- type: manhattan_accuracy_threshold
value: 21216.224670410156
- type: manhattan_ap
value: 60.67247165482485
- type: manhattan_f1
value: 61.16876024030584
- type: manhattan_f1_threshold
value: 22668.411254882812
- type: manhattan_precision
value: 67.38868832731649
- type: manhattan_recall
value: 56.00000000000001
- type: max_accuracy
value: 99.6029702970297
- type: max_ap
value: 85.09945680365945
- type: max_f1
value: 79.00249376558605
- type: max_precision
value: 78.80597014925374
- type: max_recall
value: 79.2
- type: similarity_accuracy
value: 99.6029702970297
- type: similarity_accuracy_threshold
value: 78.96339297294617
- type: similarity_ap
value: 85.09945680365945
- type: similarity_f1
value: 79.00249376558605
- type: similarity_f1_threshold
value: 77.54697799682617
- type: similarity_precision
value: 78.80597014925374
- type: similarity_recall
value: 79.2
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 40.01875953666112
- type: v_measure
value: 40.01875953666112
- type: v_measure_std
value: 4.519991014119391
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 28.81354037080584
- type: v_measure
value: 28.81354037080584
- type: v_measure_std
value: 1.4144350664362755
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 44.09716409649705
- type: map
value: 44.09716409649705
- type: mrr
value: 44.662380103556565
- type: nAUC_map_diff1
value: 35.29255607823797
- type: nAUC_map_max
value: 16.421837723462147
- type: nAUC_map_std
value: 6.1302069782322315
- type: nAUC_mrr_diff1
value: 34.559928528154806
- type: nAUC_mrr_max
value: 17.207604918830953
- type: nAUC_mrr_std
value: 6.664790258906265
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 29.294245469087553
- type: cosine_spearman
value: 30.080488918284974
- type: dot_pearson
value: 18.322393003009722
- type: dot_spearman
value: 20.941469677129597
- type: main_score
value: 30.080488918284974
- type: pearson
value: 29.294245469087553
- type: spearman
value: 30.080488918284974
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 39.983999999999995
- type: map_at_1
value: 0.106
- type: map_at_10
value: 0.644
- type: map_at_100
value: 3.021
- type: map_at_1000
value: 7.86
- type: map_at_20
value: 1.0959999999999999
- type: map_at_3
value: 0.26
- type: map_at_5
value: 0.383
- type: mrr_at_1
value: 52.0
- type: mrr_at_10
value: 63.62142857142856
- type: mrr_at_100
value: 64.14120879120878
- type: mrr_at_1000
value: 64.15196147938082
- type: mrr_at_20
value: 64.06428571428572
- type: mrr_at_3
value: 60.33333333333333
- type: mrr_at_5
value: 62.133333333333326
- type: nauc_map_at_1000_diff1
value: 24.416863084123577
- type: nauc_map_at_1000_max
value: 38.56500518410879
- type: nauc_map_at_1000_std
value: 57.28416632982124
- type: nauc_map_at_100_diff1
value: 7.320029678013508
- type: nauc_map_at_100_max
value: 31.67441200824679
- type: nauc_map_at_100_std
value: 46.99676723594155
- type: nauc_map_at_10_diff1
value: 2.1592330331050635
- type: nauc_map_at_10_max
value: 26.48308930412215
- type: nauc_map_at_10_std
value: 32.1215432254444
- type: nauc_map_at_1_diff1
value: 19.602070971946954
- type: nauc_map_at_1_max
value: 8.20575258643758
- type: nauc_map_at_1_std
value: 17.150126202821102
- type: nauc_map_at_20_diff1
value: 1.4525678948841099
- type: nauc_map_at_20_max
value: 25.398372034894923
- type: nauc_map_at_20_std
value: 37.98656048425611
- type: nauc_map_at_3_diff1
value: 14.189476148666769
- type: nauc_map_at_3_max
value: 13.645814074115348
- type: nauc_map_at_3_std
value: 24.193562926020505
- type: nauc_map_at_5_diff1
value: 6.385516140164152
- type: nauc_map_at_5_max
value: 19.028014747196977
- type: nauc_map_at_5_std
value: 27.2670171970273
- type: nauc_mrr_at_1000_diff1
value: 29.927939844415192
- type: nauc_mrr_at_1000_max
value: 19.139062731303653
- type: nauc_mrr_at_1000_std
value: 30.750244889158466
- type: nauc_mrr_at_100_diff1
value: 29.955577537768708
- type: nauc_mrr_at_100_max
value: 19.15999969363906
- type: nauc_mrr_at_100_std
value: 30.777558250465532
- type: nauc_mrr_at_10_diff1
value: 29.75190425697829
- type: nauc_mrr_at_10_max
value: 19.247901214296146
- type: nauc_mrr_at_10_std
value: 30.12495769940457
- type: nauc_mrr_at_1_diff1
value: 25.319658305674935
- type: nauc_mrr_at_1_max
value: 19.408020022852174
- type: nauc_mrr_at_1_std
value: 30.518526579248036
- type: nauc_mrr_at_20_diff1
value: 29.381724804135523
- type: nauc_mrr_at_20_max
value: 18.78203200071421
- type: nauc_mrr_at_20_std
value: 30.201392736164536
- type: nauc_mrr_at_3_diff1
value: 33.49197973287976
- type: nauc_mrr_at_3_max
value: 16.821299944157854
- type: nauc_mrr_at_3_std
value: 32.95866142740776
- type: nauc_mrr_at_5_diff1
value: 30.519933718405962
- type: nauc_mrr_at_5_max
value: 20.873028786250366
- type: nauc_mrr_at_5_std
value: 31.53952703715278
- type: nauc_ndcg_at_1000_diff1
value: 19.56599546833078
- type: nauc_ndcg_at_1000_max
value: 31.55417192496882
- type: nauc_ndcg_at_1000_std
value: 46.03469380933216
- type: nauc_ndcg_at_100_diff1
value: 17.03409656600608
- type: nauc_ndcg_at_100_max
value: 30.018921010755896
- type: nauc_ndcg_at_100_std
value: 42.083969481235535
- type: nauc_ndcg_at_10_diff1
value: 9.622601053598032
- type: nauc_ndcg_at_10_max
value: 24.036876646465473
- type: nauc_ndcg_at_10_std
value: 29.264022469658542
- type: nauc_ndcg_at_1_diff1
value: 10.162034267788544
- type: nauc_ndcg_at_1_max
value: 14.902101527295905
- type: nauc_ndcg_at_1_std
value: 22.89481729606148
- type: nauc_ndcg_at_20_diff1
value: 11.827596896516578
- type: nauc_ndcg_at_20_max
value: 21.89722632493682
- type: nauc_ndcg_at_20_std
value: 34.10813108354046
- type: nauc_ndcg_at_3_diff1
value: 9.885830514681343
- type: nauc_ndcg_at_3_max
value: 18.645371242229174
- type: nauc_ndcg_at_3_std
value: 27.61014855490183
- type: nauc_ndcg_at_5_diff1
value: 7.016021785588281
- type: nauc_ndcg_at_5_max
value: 21.223071359768444
- type: nauc_ndcg_at_5_std
value: 26.398061449644693
- type: nauc_precision_at_1000_diff1
value: 21.951465290665013
- type: nauc_precision_at_1000_max
value: 29.28795349580752
- type: nauc_precision_at_1000_std
value: 43.851885410437404
- type: nauc_precision_at_100_diff1
value: 20.103205413776266
- type: nauc_precision_at_100_max
value: 29.53467404908886
- type: nauc_precision_at_100_std
value: 43.41214281168461
- type: nauc_precision_at_10_diff1
value: 9.327632341614823
- type: nauc_precision_at_10_max
value: 27.739929968318993
- type: nauc_precision_at_10_std
value: 30.029060765584443
- type: nauc_precision_at_1_diff1
value: 25.319658305674935
- type: nauc_precision_at_1_max
value: 19.408020022852174
- type: nauc_precision_at_1_std
value: 30.518526579248036
- type: nauc_precision_at_20_diff1
value: 12.507551705078598
- type: nauc_precision_at_20_max
value: 25.437784661790673
- type: nauc_precision_at_20_std
value: 37.6038493343788
- type: nauc_precision_at_3_diff1
value: 17.302840903240426
- type: nauc_precision_at_3_max
value: 18.240884706076184
- type: nauc_precision_at_3_std
value: 32.34758075311221
- type: nauc_precision_at_5_diff1
value: 10.643711764387417
- type: nauc_precision_at_5_max
value: 24.411239239889554
- type: nauc_precision_at_5_std
value: 28.767392128200953
- type: nauc_recall_at_1000_diff1
value: 18.932208342315853
- type: nauc_recall_at_1000_max
value: 28.482052015706234
- type: nauc_recall_at_1000_std
value: 44.983993721189705
- type: nauc_recall_at_100_diff1
value: 12.30127094174658
- type: nauc_recall_at_100_max
value: 25.614395729836016
- type: nauc_recall_at_100_std
value: 40.04868566707452
- type: nauc_recall_at_10_diff1
value: -4.63806503951543
- type: nauc_recall_at_10_max
value: 25.05145496553497
- type: nauc_recall_at_10_std
value: 24.09893875274637
- type: nauc_recall_at_1_diff1
value: 19.602070971946954
- type: nauc_recall_at_1_max
value: 8.20575258643758
- type: nauc_recall_at_1_std
value: 17.150126202821102
- type: nauc_recall_at_20_diff1
value: 3.229932027028801
- type: nauc_recall_at_20_max
value: 18.794275827349168
- type: nauc_recall_at_20_std
value: 30.248974156728046
- type: nauc_recall_at_3_diff1
value: 15.00878750843053
- type: nauc_recall_at_3_max
value: 9.046387583277276
- type: nauc_recall_at_3_std
value: 22.79927256744018
- type: nauc_recall_at_5_diff1
value: 1.9090462818828973
- type: nauc_recall_at_5_max
value: 17.416622454402713
- type: nauc_recall_at_5_std
value: 21.915265437836833
- type: ndcg_at_1
value: 45.0
- type: ndcg_at_10
value: 39.983999999999995
- type: ndcg_at_100
value: 27.095999999999997
- type: ndcg_at_1000
value: 24.454
- type: ndcg_at_20
value: 37.319
- type: ndcg_at_3
value: 43.704
- type: ndcg_at_5
value: 41.568
- type: precision_at_1
value: 52.0
- type: precision_at_10
value: 42.6
- type: precision_at_100
value: 27.72
- type: precision_at_1000
value: 11.844000000000001
- type: precision_at_20
value: 39.6
- type: precision_at_3
value: 48.667
- type: precision_at_5
value: 45.6
- type: recall_at_1
value: 0.106
- type: recall_at_10
value: 0.9159999999999999
- type: recall_at_100
value: 5.715
- type: recall_at_1000
value: 23.662
- type: recall_at_20
value: 1.7160000000000002
- type: recall_at_3
value: 0.302
- type: recall_at_5
value: 0.482
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 13.753000000000002
- type: map_at_1
value: 1.5970000000000002
- type: map_at_10
value: 4.601
- type: map_at_100
value: 7.7700000000000005
- type: map_at_1000
value: 9.096
- type: map_at_20
value: 5.817
- type: map_at_3
value: 2.377
- type: map_at_5
value: 2.98
- type: mrr_at_1
value: 22.448979591836736
- type: mrr_at_10
value: 33.38030450275348
- type: mrr_at_100
value: 35.01828931874863
- type: mrr_at_1000
value: 35.037725664715595
- type: mrr_at_20
value: 34.6865889212828
- type: mrr_at_3
value: 28.231292517006807
- type: mrr_at_5
value: 31.394557823129254
- type: nauc_map_at_1000_diff1
value: -11.252417383140266
- type: nauc_map_at_1000_max
value: -37.24375623641661
- type: nauc_map_at_1000_std
value: -38.122086330314595
- type: nauc_map_at_100_diff1
value: -13.970621196322664
- type: nauc_map_at_100_max
value: -39.871220844684366
- type: nauc_map_at_100_std
value: -41.05324590181932
- type: nauc_map_at_10_diff1
value: -12.163263778180402
- type: nauc_map_at_10_max
value: -36.76984556993433
- type: nauc_map_at_10_std
value: -37.53503392844242
- type: nauc_map_at_1_diff1
value: -21.481769300580112
- type: nauc_map_at_1_max
value: -34.78475326600437
- type: nauc_map_at_1_std
value: -31.34442054238037
- type: nauc_map_at_20_diff1
value: -14.607331295503842
- type: nauc_map_at_20_max
value: -40.507883730110066
- type: nauc_map_at_20_std
value: -42.25172210956502
- type: nauc_map_at_3_diff1
value: -16.11765086583003
- type: nauc_map_at_3_max
value: -39.875149479128375
- type: nauc_map_at_3_std
value: -36.495342441290575
- type: nauc_map_at_5_diff1
value: -12.762015642768567
- type: nauc_map_at_5_max
value: -35.84513643191068
- type: nauc_map_at_5_std
value: -34.507874404019105
- type: nauc_mrr_at_1000_diff1
value: -14.380678398651431
- type: nauc_mrr_at_1000_max
value: -34.916144132151764
- type: nauc_mrr_at_1000_std
value: -37.97719898398948
- type: nauc_mrr_at_100_diff1
value: -14.315571331226579
- type: nauc_mrr_at_100_max
value: -34.82941353583672
- type: nauc_mrr_at_100_std
value: -37.88850059416566
- type: nauc_mrr_at_10_diff1
value: -15.357854232460392
- type: nauc_mrr_at_10_max
value: -35.50556512154432
- type: nauc_mrr_at_10_std
value: -39.177327110088726
- type: nauc_mrr_at_1_diff1
value: -20.81375579297355
- type: nauc_mrr_at_1_max
value: -29.68218990777337
- type: nauc_mrr_at_1_std
value: -32.340167902766225
- type: nauc_mrr_at_20_diff1
value: -14.007415589033556
- type: nauc_mrr_at_20_max
value: -35.07243301300378
- type: nauc_mrr_at_20_std
value: -38.4083789449898
- type: nauc_mrr_at_3_diff1
value: -18.09416617081835
- type: nauc_mrr_at_3_max
value: -36.95185320631812
- type: nauc_mrr_at_3_std
value: -35.64342684468998
- type: nauc_mrr_at_5_diff1
value: -15.183051674277138
- type: nauc_mrr_at_5_max
value: -34.67724348034976
- type: nauc_mrr_at_5_std
value: -35.5955991849333
- type: nauc_ndcg_at_1000_diff1
value: 0.8638249190254136
- type: nauc_ndcg_at_1000_max
value: -27.240531292789573
- type: nauc_ndcg_at_1000_std
value: -26.34406627094641
- type: nauc_ndcg_at_100_diff1
value: -10.272509858747428
- type: nauc_ndcg_at_100_max
value: -40.27645670071093
- type: nauc_ndcg_at_100_std
value: -40.20324905617718
- type: nauc_ndcg_at_10_diff1
value: -10.251898880214641
- type: nauc_ndcg_at_10_max
value: -31.66063506955603
- type: nauc_ndcg_at_10_std
value: -35.18245248110904
- type: nauc_ndcg_at_1_diff1
value: -22.15796091381088
- type: nauc_ndcg_at_1_max
value: -28.012386493294734
- type: nauc_ndcg_at_1_std
value: -28.75534254770048
- type: nauc_ndcg_at_20_diff1
value: -13.257359699197114
- type: nauc_ndcg_at_20_max
value: -39.25007814100781
- type: nauc_ndcg_at_20_std
value: -41.74617039563512
- type: nauc_ndcg_at_3_diff1
value: -14.633327352889419
- type: nauc_ndcg_at_3_max
value: -35.76970667496168
- type: nauc_ndcg_at_3_std
value: -34.78512355124301
- type: nauc_ndcg_at_5_diff1
value: -9.008702427186012
- type: nauc_ndcg_at_5_max
value: -27.057510395795788
- type: nauc_ndcg_at_5_std
value: -31.06336991460067
- type: nauc_precision_at_1000_diff1
value: 24.915422567175415
- type: nauc_precision_at_1000_max
value: 47.53560015584683
- type: nauc_precision_at_1000_std
value: 38.21701614763806
- type: nauc_precision_at_100_diff1
value: 6.645491992850349
- type: nauc_precision_at_100_max
value: -14.578256280924878
- type: nauc_precision_at_100_std
value: -23.049085659678926
- type: nauc_precision_at_10_diff1
value: -0.9667619260601806
- type: nauc_precision_at_10_max
value: -25.529150834147217
- type: nauc_precision_at_10_std
value: -35.81209624358855
- type: nauc_precision_at_1_diff1
value: -20.81375579297355
- type: nauc_precision_at_1_max
value: -29.68218990777337
- type: nauc_precision_at_1_std
value: -32.340167902766225
- type: nauc_precision_at_20_diff1
value: -5.664913271170427
- type: nauc_precision_at_20_max
value: -31.789766954167682
- type: nauc_precision_at_20_std
value: -43.24957806575219
- type: nauc_precision_at_3_diff1
value: -8.78321692449596
- type: nauc_precision_at_3_max
value: -40.94190027571407
- type: nauc_precision_at_3_std
value: -40.42051526602616
- type: nauc_precision_at_5_diff1
value: -0.6700857649701735
- type: nauc_precision_at_5_max
value: -25.396527239026117
- type: nauc_precision_at_5_std
value: -31.60992759387055
- type: nauc_recall_at_1000_diff1
value: 6.608885618295343
- type: nauc_recall_at_1000_max
value: -17.90157348658524
- type: nauc_recall_at_1000_std
value: 1.4128128959708763
- type: nauc_recall_at_100_diff1
value: -10.790017345080633
- type: nauc_recall_at_100_max
value: -42.67969932770011
- type: nauc_recall_at_100_std
value: -36.57531070739207
- type: nauc_recall_at_10_diff1
value: -9.632249853815987
- type: nauc_recall_at_10_max
value: -35.775869145222444
- type: nauc_recall_at_10_std
value: -38.6290217611413
- type: nauc_recall_at_1_diff1
value: -21.481769300580112
- type: nauc_recall_at_1_max
value: -34.78475326600437
- type: nauc_recall_at_1_std
value: -31.34442054238037
- type: nauc_recall_at_20_diff1
value: -16.584366120363462
- type: nauc_recall_at_20_max
value: -45.0011419751979
- type: nauc_recall_at_20_std
value: -46.22137916249736
- type: nauc_recall_at_3_diff1
value: -16.227776403050605
- type: nauc_recall_at_3_max
value: -46.19831636902846
- type: nauc_recall_at_3_std
value: -39.31769096438802
- type: nauc_recall_at_5_diff1
value: -8.463083898122722
- type: nauc_recall_at_5_max
value: -34.1285878720165
- type: nauc_recall_at_5_std
value: -33.56523176213727
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 13.753000000000002
- type: ndcg_at_100
value: 23.552
- type: ndcg_at_1000
value: 36.061
- type: ndcg_at_20
value: 15.113999999999999
- type: ndcg_at_3
value: 14.994
- type: ndcg_at_5
value: 13.927
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 13.469000000000001
- type: precision_at_100
value: 5.531
- type: precision_at_1000
value: 1.333
- type: precision_at_20
value: 11.224
- type: precision_at_3
value: 15.645999999999999
- type: precision_at_5
value: 14.693999999999999
- type: recall_at_1
value: 1.5970000000000002
- type: recall_at_10
value: 9.428
- type: recall_at_100
value: 34.227000000000004
- type: recall_at_1000
value: 72.233
- type: recall_at_20
value: 15.456
- type: recall_at_3
value: 3.024
- type: recall_at_5
value: 4.776
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 65.6884765625
- type: ap
value: 11.395400787741414
- type: ap_weighted
value: 11.395400787741414
- type: f1
value: 49.997667284332806
- type: f1_weighted
value: 73.34420433686675
- type: main_score
value: 65.6884765625
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 49.83305036785513
- type: f1
value: 49.97910620163813
- type: f1_weighted
value: 49.32130156716104
- type: main_score
value: 49.83305036785513
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 25.27920179659098
- type: v_measure
value: 25.27920179659098
- type: v_measure_std
value: 2.092324622279832
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 82.19586338439531
- type: cosine_accuracy_threshold
value: 75.0169038772583
- type: cosine_ap
value: 60.22081236487149
- type: cosine_f1
value: 57.192894671003245
- type: cosine_f1_threshold
value: 69.5034384727478
- type: cosine_precision
value: 54.3767840152236
- type: cosine_recall
value: 60.31662269129288
- type: dot_accuracy
value: 77.92215533170412
- type: dot_accuracy_threshold
value: 106759.60693359375
- type: dot_ap
value: 40.49772647740827
- type: dot_f1
value: 46.14293314417449
- type: dot_f1_threshold
value: 67732.36083984375
- type: dot_precision
value: 34.748931623931625
- type: dot_recall
value: 68.65435356200528
- type: euclidean_accuracy
value: 80.45538534898968
- type: euclidean_accuracy_threshold
value: 2147.9385375976562
- type: euclidean_ap
value: 52.814058086493475
- type: euclidean_f1
value: 50.80232161147149
- type: euclidean_f1_threshold
value: 2624.5105743408203
- type: euclidean_precision
value: 44.66680008004803
- type: euclidean_recall
value: 58.89182058047493
- type: main_score
value: 60.22081236487149
- type: manhattan_accuracy
value: 80.53883292602968
- type: manhattan_accuracy_threshold
value: 27107.672119140625
- type: manhattan_ap
value: 53.53662771884282
- type: manhattan_f1
value: 51.65052816901407
- type: manhattan_f1_threshold
value: 33232.24792480469
- type: manhattan_precision
value: 44.299735749339376
- type: manhattan_recall
value: 61.92612137203166
- type: max_accuracy
value: 82.19586338439531
- type: max_ap
value: 60.22081236487149
- type: max_f1
value: 57.192894671003245
- type: max_precision
value: 54.3767840152236
- type: max_recall
value: 68.65435356200528
- type: similarity_accuracy
value: 82.19586338439531
- type: similarity_accuracy_threshold
value: 75.0169038772583
- type: similarity_ap
value: 60.22081236487149
- type: similarity_f1
value: 57.192894671003245
- type: similarity_f1_threshold
value: 69.5034384727478
- type: similarity_precision
value: 54.3767840152236
- type: similarity_recall
value: 60.31662269129288
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 85.86758256684907
- type: cosine_accuracy_threshold
value: 73.03299903869629
- type: cosine_ap
value: 78.79896751132692
- type: cosine_f1
value: 70.93762938984453
- type: cosine_f1_threshold
value: 69.51396465301514
- type: cosine_precision
value: 69.39391707784078
- type: cosine_recall
value: 72.55158607945796
- type: dot_accuracy
value: 81.69169868436373
- type: dot_accuracy_threshold
value: 51796.2890625
- type: dot_ap
value: 66.49022700054283
- type: dot_f1
value: 62.167484157387854
- type: dot_f1_threshold
value: 42622.021484375
- type: dot_precision
value: 58.10078297530617
- type: dot_recall
value: 66.84631967970435
- type: euclidean_accuracy
value: 83.17809601428183
- type: euclidean_accuracy_threshold
value: 1687.9749298095703
- type: euclidean_ap
value: 70.39367677734302
- type: euclidean_f1
value: 62.79221027661935
- type: euclidean_f1_threshold
value: 1905.8393478393555
- type: euclidean_precision
value: 62.40778766446118
- type: euclidean_recall
value: 63.181398213735754
- type: main_score
value: 78.79896751132692
- type: manhattan_accuracy
value: 83.23631000892615
- type: manhattan_accuracy_threshold
value: 21191.021728515625
- type: manhattan_ap
value: 70.60408795606112
- type: manhattan_f1
value: 62.99311208515969
- type: manhattan_f1_threshold
value: 23671.893310546875
- type: manhattan_precision
value: 64.05603311047437
- type: manhattan_recall
value: 61.964890668309216
- type: max_accuracy
value: 85.86758256684907
- type: max_ap
value: 78.79896751132692
- type: max_f1
value: 70.93762938984453
- type: max_precision
value: 69.39391707784078
- type: max_recall
value: 72.55158607945796
- type: similarity_accuracy
value: 85.86758256684907
- type: similarity_accuracy_threshold
value: 73.03299903869629
- type: similarity_ap
value: 78.79896751132692
- type: similarity_f1
value: 70.93762938984453
- type: similarity_f1_threshold
value: 69.51396465301514
- type: similarity_precision
value: 69.39391707784078
- type: similarity_recall
value: 72.55158607945796
---
# M2V_base_glove_subword Model Card
This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical.
## Installation
Install model2vec using pip:
```bash
pip install model2vec
```
## Usage
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("minishlab/M2V_base_glove_subword")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
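The `encode` call returns one vector per input sentence, so comparing sentences needs nothing beyond NumPy. A minimal sketch, assuming `encode` returns an array-like with one row per sentence (the example sentences are illustrative):
```python
import numpy as np
from model2vec import StaticModel

model = StaticModel.from_pretrained("minishlab/M2V_base_glove_subword")

# Encode two sentences and compare them with cosine similarity
emb = np.asarray(model.encode(["The weather is lovely today.", "It is sunny outside."]))
a, b = emb[0], emb[1]
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cosine:.3f}")
```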
Alternatively, you can distill your own model using the `distill` method:
```python
from model2vec.distill import distill
# Choose a Sentence Transformer model
model_name = "BAAI/bge-base-en-v1.5"
# Distill the model
m2v_model = distill(model_name=model_name, pca_dims=256)
# Save the model
m2v_model.save_pretrained("m2v_model")
```
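The saved directory can then be reloaded with the same `from_pretrained` call shown above; treating local-path loading as supported by your installed model2vec version is an assumption here:
```python
from model2vec import StaticModel

# Reload the locally saved distilled model and use it like any pretrained one
m2v_model = StaticModel.from_pretrained("m2v_model")
embeddings = m2v_model.encode(["Example sentence"])
```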
## How it works
Model2Vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.
It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using Zipf weighting, which down-weights very frequent tokens. During inference, we simply take the mean of all token embeddings occurring in a sentence.
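The snippet below is a schematic NumPy sketch of that pipeline, not the actual Model2Vec implementation: the vocabulary, the embedding matrix, and the exact form of the Zipf weighting are stand-ins chosen for illustration.
```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for vocabulary embeddings produced by a sentence transformer,
# ordered from most to least frequent token (random here for illustration).
vocab = ["the", "on", "cat", "sat", "mat"]
E = rng.normal(size=(len(vocab), 768))          # (vocab_size, hidden_dim)

# 1) Dimensionality reduction with PCA (via SVD of the centered matrix)
pca_dims = 4
centered = E - E.mean(axis=0)
_, _, components = np.linalg.svd(centered, full_matrices=False)
E_pca = centered @ components[:pca_dims].T      # (vocab_size, pca_dims)

# 2) Zipf weighting: down-weight frequent (low-rank) tokens; log(1 + rank)
#    is one simple choice, not necessarily the exact weighting used upstream.
ranks = np.arange(1, len(vocab) + 1)
E_weighted = E_pca * np.log(1 + ranks)[:, None]

# 3) Inference: a sentence embedding is the mean of its token embeddings
tokens = "the cat sat on the mat".split()
sentence_embedding = E_weighted[[vocab.index(t) for t in tokens]].mean(axis=0)
print(sentence_embedding.shape)                 # (4,)
```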
## Additional Resources
- [All Model2Vec models on the hub](https://huggingface.co/models?library=model2vec)
- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Model2Vec Results](https://github.com/MinishLab/model2vec?tab=readme-ov-file#results)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
## Library Authors
Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).
## Citation
Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@software{minishlab2024model2vec,
author = {Stephan Tulkens and Thomas van Dongen},
title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model},
year = {2024},
url = {https://github.com/MinishLab/model2vec},
}
```
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
KeyurRamoliya/multilingual-e5-large-GGUF | KeyurRamoliya | feature-extraction | [
"sentence-transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"feature-extraction",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"base_model:intfloat/multilingual-e5-large",
"base_model:quantized:intfloat/multilingual-e5-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,724,391,037,000 | 2024-08-23T05:30:43 | 14 | 1 | ---
base_model: intfloat/multilingual-e5-large
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: multilingual-e5-large
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.05970149253731
- type: ap
value: 43.486574390835635
- type: f1
value: 73.32700092140148
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.22055674518201
- type: ap
value: 81.55756710830498
- type: f1
value: 69.28271787752661
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 80.41979010494754
- type: ap
value: 29.34879922376344
- type: f1
value: 67.62475449011278
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.8372591006424
- type: ap
value: 26.557560591210738
- type: f1
value: 64.96619417368707
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.489875
- type: ap
value: 90.98758636917603
- type: f1
value: 93.48554819717332
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.564
- type: f1
value: 46.75122173518047
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.400000000000006
- type: f1
value: 44.17195682400632
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 43.068
- type: f1
value: 42.38155696855596
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.89
- type: f1
value: 40.84407321682663
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.120000000000005
- type: f1
value: 39.522976223819114
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.832
- type: f1
value: 38.0392533394713
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.725
- type: map_at_10
value: 46.055
- type: map_at_100
value: 46.900999999999996
- type: map_at_1000
value: 46.911
- type: map_at_3
value: 41.548
- type: map_at_5
value: 44.297
- type: mrr_at_1
value: 31.152
- type: mrr_at_10
value: 46.231
- type: mrr_at_100
value: 47.07
- type: mrr_at_1000
value: 47.08
- type: mrr_at_3
value: 41.738
- type: mrr_at_5
value: 44.468999999999994
- type: ndcg_at_1
value: 30.725
- type: ndcg_at_10
value: 54.379999999999995
- type: ndcg_at_100
value: 58.138
- type: ndcg_at_1000
value: 58.389
- type: ndcg_at_3
value: 45.156
- type: ndcg_at_5
value: 50.123
- type: precision_at_1
value: 30.725
- type: precision_at_10
value: 8.087
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.54
- type: precision_at_5
value: 13.542000000000002
- type: recall_at_1
value: 30.725
- type: recall_at_10
value: 80.868
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 55.619
- type: recall_at_5
value: 67.71000000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.30960650674069
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.427074197498996
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.28270056031872
- type: mrr
value: 74.38332673789738
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.05942144105269
- type: cos_sim_spearman
value: 82.51212105850809
- type: euclidean_pearson
value: 81.95639829909122
- type: euclidean_spearman
value: 82.3717564144213
- type: manhattan_pearson
value: 81.79273425468256
- type: manhattan_spearman
value: 82.20066817871039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.46764091858039
- type: f1
value: 99.37717466945023
- type: precision
value: 99.33194154488518
- type: recall
value: 99.46764091858039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.29407880255337
- type: f1
value: 98.11248073959938
- type: precision
value: 98.02443319392472
- type: recall
value: 98.29407880255337
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.79009352268791
- type: f1
value: 97.5176076665512
- type: precision
value: 97.38136473848286
- type: recall
value: 97.79009352268791
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26276987888363
- type: f1
value: 99.20133403545726
- type: precision
value: 99.17500438827453
- type: recall
value: 99.26276987888363
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.72727272727273
- type: f1
value: 84.67672206031433
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.34220182511161
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 33.4987096128766
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.558249999999997
- type: map_at_10
value: 34.44425000000001
- type: map_at_100
value: 35.59833333333333
- type: map_at_1000
value: 35.706916666666665
- type: map_at_3
value: 31.691749999999995
- type: map_at_5
value: 33.252916666666664
- type: mrr_at_1
value: 30.252666666666666
- type: mrr_at_10
value: 38.60675
- type: mrr_at_100
value: 39.42666666666666
- type: mrr_at_1000
value: 39.48408333333334
- type: mrr_at_3
value: 36.17441666666665
- type: mrr_at_5
value: 37.56275
- type: ndcg_at_1
value: 30.252666666666666
- type: ndcg_at_10
value: 39.683
- type: ndcg_at_100
value: 44.68541666666667
- type: ndcg_at_1000
value: 46.94316666666668
- type: ndcg_at_3
value: 34.961749999999995
- type: ndcg_at_5
value: 37.215666666666664
- type: precision_at_1
value: 30.252666666666666
- type: precision_at_10
value: 6.904166666666667
- type: precision_at_100
value: 1.0989999999999995
- type: precision_at_1000
value: 0.14733333333333334
- type: precision_at_3
value: 16.037666666666667
- type: precision_at_5
value: 11.413583333333333
- type: recall_at_1
value: 25.558249999999997
- type: recall_at_10
value: 51.13341666666666
- type: recall_at_100
value: 73.08366666666667
- type: recall_at_1000
value: 88.79483333333334
- type: recall_at_3
value: 37.989083333333326
- type: recall_at_5
value: 43.787833333333325
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.338
- type: map_at_10
value: 18.360000000000003
- type: map_at_100
value: 19.942
- type: map_at_1000
value: 20.134
- type: map_at_3
value: 15.174000000000001
- type: map_at_5
value: 16.830000000000002
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 33.768
- type: mrr_at_100
value: 34.707
- type: mrr_at_1000
value: 34.766000000000005
- type: mrr_at_3
value: 30.977
- type: mrr_at_5
value: 32.528
- type: ndcg_at_1
value: 23.257
- type: ndcg_at_10
value: 25.733
- type: ndcg_at_100
value: 32.288
- type: ndcg_at_1000
value: 35.992000000000004
- type: ndcg_at_3
value: 20.866
- type: ndcg_at_5
value: 22.612
- type: precision_at_1
value: 23.257
- type: precision_at_10
value: 8.124
- type: precision_at_100
value: 1.518
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 15.679000000000002
- type: precision_at_5
value: 12.117
- type: recall_at_1
value: 10.338
- type: recall_at_10
value: 31.154
- type: recall_at_100
value: 54.161
- type: recall_at_1000
value: 75.21900000000001
- type: recall_at_3
value: 19.427
- type: recall_at_5
value: 24.214
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.498
- type: map_at_10
value: 19.103
- type: map_at_100
value: 27.375
- type: map_at_1000
value: 28.981
- type: map_at_3
value: 13.764999999999999
- type: map_at_5
value: 15.950000000000001
- type: mrr_at_1
value: 65.5
- type: mrr_at_10
value: 74.53800000000001
- type: mrr_at_100
value: 74.71799999999999
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.792
- type: mrr_at_5
value: 73.554
- type: ndcg_at_1
value: 53.37499999999999
- type: ndcg_at_10
value: 41.286
- type: ndcg_at_100
value: 45.972
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 46.172999999999995
- type: ndcg_at_5
value: 43.033
- type: precision_at_1
value: 65.5
- type: precision_at_10
value: 32.725
- type: precision_at_100
value: 10.683
- type: precision_at_1000
value: 1.978
- type: precision_at_3
value: 50
- type: precision_at_5
value: 41.349999999999994
- type: recall_at_1
value: 8.498
- type: recall_at_10
value: 25.070999999999998
- type: recall_at_100
value: 52.383
- type: recall_at_1000
value: 74.91499999999999
- type: recall_at_3
value: 15.207999999999998
- type: recall_at_5
value: 18.563
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.5
- type: f1
value: 41.93833713984145
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.914
- type: map_at_10
value: 78.10000000000001
- type: map_at_100
value: 78.333
- type: map_at_1000
value: 78.346
- type: map_at_3
value: 76.626
- type: map_at_5
value: 77.627
- type: mrr_at_1
value: 72.74199999999999
- type: mrr_at_10
value: 82.414
- type: mrr_at_100
value: 82.511
- type: mrr_at_1000
value: 82.513
- type: mrr_at_3
value: 81.231
- type: mrr_at_5
value: 82.065
- type: ndcg_at_1
value: 72.74199999999999
- type: ndcg_at_10
value: 82.806
- type: ndcg_at_100
value: 83.677
- type: ndcg_at_1000
value: 83.917
- type: ndcg_at_3
value: 80.305
- type: ndcg_at_5
value: 81.843
- type: precision_at_1
value: 72.74199999999999
- type: precision_at_10
value: 10.24
- type: precision_at_100
value: 1.089
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 31.268
- type: precision_at_5
value: 19.706000000000003
- type: recall_at_1
value: 67.914
- type: recall_at_10
value: 92.889
- type: recall_at_100
value: 96.42699999999999
- type: recall_at_1000
value: 97.92
- type: recall_at_3
value: 86.21
- type: recall_at_5
value: 90.036
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.166
- type: map_at_10
value: 35.57
- type: map_at_100
value: 37.405
- type: map_at_1000
value: 37.564
- type: map_at_3
value: 30.379
- type: map_at_5
value: 33.324
- type: mrr_at_1
value: 43.519000000000005
- type: mrr_at_10
value: 51.556000000000004
- type: mrr_at_100
value: 52.344
- type: mrr_at_1000
value: 52.373999999999995
- type: mrr_at_3
value: 48.868
- type: mrr_at_5
value: 50.319
- type: ndcg_at_1
value: 43.519000000000005
- type: ndcg_at_10
value: 43.803
- type: ndcg_at_100
value: 50.468999999999994
- type: ndcg_at_1000
value: 53.111
- type: ndcg_at_3
value: 38.893
- type: ndcg_at_5
value: 40.653
- type: precision_at_1
value: 43.519000000000005
- type: precision_at_10
value: 12.253
- type: precision_at_100
value: 1.931
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 25.617
- type: precision_at_5
value: 19.383
- type: recall_at_1
value: 22.166
- type: recall_at_10
value: 51.6
- type: recall_at_100
value: 76.574
- type: recall_at_1000
value: 92.192
- type: recall_at_3
value: 34.477999999999994
- type: recall_at_5
value: 41.835
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.041
- type: map_at_10
value: 62.961999999999996
- type: map_at_100
value: 63.79899999999999
- type: map_at_1000
value: 63.854
- type: map_at_3
value: 59.399
- type: map_at_5
value: 61.669
- type: mrr_at_1
value: 78.082
- type: mrr_at_10
value: 84.321
- type: mrr_at_100
value: 84.49600000000001
- type: mrr_at_1000
value: 84.502
- type: mrr_at_3
value: 83.421
- type: mrr_at_5
value: 83.977
- type: ndcg_at_1
value: 78.082
- type: ndcg_at_10
value: 71.229
- type: ndcg_at_100
value: 74.10900000000001
- type: ndcg_at_1000
value: 75.169
- type: ndcg_at_3
value: 66.28699999999999
- type: ndcg_at_5
value: 69.084
- type: precision_at_1
value: 78.082
- type: precision_at_10
value: 14.993
- type: precision_at_100
value: 1.7239999999999998
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 42.737
- type: precision_at_5
value: 27.843
- type: recall_at_1
value: 39.041
- type: recall_at_10
value: 74.96300000000001
- type: recall_at_100
value: 86.199
- type: recall_at_1000
value: 93.228
- type: recall_at_3
value: 64.105
- type: recall_at_5
value: 69.608
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.23160000000001
- type: ap
value: 85.5674856808308
- type: f1
value: 90.18033354786317
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.091
- type: map_at_10
value: 36.753
- type: map_at_100
value: 37.913000000000004
- type: map_at_1000
value: 37.958999999999996
- type: map_at_3
value: 32.818999999999996
- type: map_at_5
value: 35.171
- type: mrr_at_1
value: 24.742
- type: mrr_at_10
value: 37.285000000000004
- type: mrr_at_100
value: 38.391999999999996
- type: mrr_at_1000
value: 38.431
- type: mrr_at_3
value: 33.440999999999995
- type: mrr_at_5
value: 35.75
- type: ndcg_at_1
value: 24.742
- type: ndcg_at_10
value: 43.698
- type: ndcg_at_100
value: 49.145
- type: ndcg_at_1000
value: 50.23800000000001
- type: ndcg_at_3
value: 35.769
- type: ndcg_at_5
value: 39.961999999999996
- type: precision_at_1
value: 24.742
- type: precision_at_10
value: 6.7989999999999995
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.183
- type: recall_at_1
value: 24.091
- type: recall_at_10
value: 65.068
- type: recall_at_100
value: 89.899
- type: recall_at_1000
value: 98.16
- type: recall_at_3
value: 43.68
- type: recall_at_5
value: 53.754999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.66621067031465
- type: f1
value: 93.49622853272142
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.94702733164272
- type: f1
value: 91.17043441745282
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.20146764509674
- type: f1
value: 91.98359080555608
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.99780770435328
- type: f1
value: 89.19746342724068
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.78486912871998
- type: f1
value: 89.24578823628642
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.74502712477394
- type: f1
value: 89.00297573881542
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.9046967624259
- type: f1
value: 59.36787125785957
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.5280360664976
- type: f1
value: 57.17723440888718
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.44029352901934
- type: f1
value: 54.052855531072964
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 70.5606013153774
- type: f1
value: 52.62215934386531
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.11581211903908
- type: f1
value: 52.341291845645465
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.28933092224233
- type: f1
value: 57.07918745504911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.38063214525892
- type: f1
value: 59.46463723443009
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.06926698049766
- type: f1
value: 52.49084283283562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.74983187626093
- type: f1
value: 56.960640620165904
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.86550100874243
- type: f1
value: 62.47370548140688
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.971082716879636
- type: f1
value: 61.03812421957381
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.98318762609282
- type: f1
value: 51.51207916008392
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.45527908540686
- type: f1
value: 66.16631905400318
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.32750504371216
- type: f1
value: 66.16755288646591
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.09213180901143
- type: f1
value: 66.95654394661507
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.75588433086752
- type: f1
value: 71.79973779656923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.49428379287154
- type: f1
value: 68.37494379215734
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.90921318090115
- type: f1
value: 66.79517376481645
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.12104909213181
- type: f1
value: 67.29448842879584
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.34095494283793
- type: f1
value: 67.01134288992947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.61264290517822
- type: f1
value: 64.68730512660757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.79757901815738
- type: f1
value: 65.24938539425598
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.68728984532616
- type: f1
value: 67.0487169762553
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.07464694014795
- type: f1
value: 59.183532276789286
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.04707464694015
- type: f1
value: 67.66829629003848
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.42434431741762
- type: f1
value: 59.01617226544757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.53127101546738
- type: f1
value: 68.10033760906255
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.50504371217215
- type: f1
value: 69.74931103158923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.91190316072628
- type: f1
value: 54.05551136648796
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.78211163416275
- type: f1
value: 49.874888544058535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.017484868863484
- type: f1
value: 44.53364263352014
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.16207128446537
- type: f1
value: 59.01185692320829
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.42501681237391
- type: f1
value: 67.13169450166086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0780094149294
- type: f1
value: 64.41720167850707
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.57162071284466
- type: f1
value: 62.414138683804424
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.71149966375252
- type: f1
value: 58.594805125087234
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.03900470746471
- type: f1
value: 63.87937257883887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.8776059179556
- type: f1
value: 57.48587618059131
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87895090786819
- type: f1
value: 66.8141299430347
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.45057162071285
- type: f1
value: 67.46444039673516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.546738399462
- type: f1
value: 68.63640876702655
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.72965702757229
- type: f1
value: 68.54119560379115
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.35574983187625
- type: f1
value: 65.88844917691927
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.70477471418964
- type: f1
value: 69.19665697061978
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0880968392737
- type: f1
value: 64.76962317666086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.18493611297916
- type: f1
value: 62.49984559035371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.75857431069265
- type: f1
value: 69.20053687623418
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.500336247478145
- type: f1
value: 55.2972398687929
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.68997982515132
- type: f1
value: 59.36848202755348
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.01950235373235
- type: f1
value: 60.09351954625423
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.29186281102892
- type: f1
value: 67.57860496703447
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.77471418964357
- type: f1
value: 61.913983147713836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87222595830532
- type: f1
value: 66.03679033708141
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.04505716207127
- type: f1
value: 61.28569169817908
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.38466711499663
- type: f1
value: 67.20532357036844
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.12306657700067
- type: f1
value: 68.91251226588182
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.20040349697378
- type: f1
value: 66.02657347714175
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.73907195696032
- type: f1
value: 66.98484521791418
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.58843308675185
- type: f1
value: 58.95591723092005
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.22730329522528
- type: f1
value: 66.0894499712115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48285137861465
- type: f1
value: 65.21963176785157
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.74714189643578
- type: f1
value: 66.8212192745412
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.09213180901143
- type: f1
value: 56.70735546356339
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.05716207128448
- type: f1
value: 74.8413712365364
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.69737726967047
- type: f1
value: 74.7664341963
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.90383322125084
- type: f1
value: 73.59201554448323
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.51176866173503
- type: f1
value: 77.46104434577758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.31069266980496
- type: f1
value: 74.61048660675635
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.95225285810356
- type: f1
value: 72.33160006574627
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.12373907195696
- type: f1
value: 73.20921012557481
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.86684599865501
- type: f1
value: 73.82348774610831
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.40215198386012
- type: f1
value: 71.11945183971858
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.12844653665098
- type: f1
value: 71.34450495911766
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.52252858103566
- type: f1
value: 73.98878711342999
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.93611297915265
- type: f1
value: 63.723200467653385
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11903160726295
- type: f1
value: 73.82138439467096
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.15198386012105
- type: f1
value: 66.02172193802167
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.32414256893072
- type: f1
value: 74.30943421170574
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.46805648957633
- type: f1
value: 77.62808409298209
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.318762609280434
- type: f1
value: 62.094284066075076
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.34902488231338
- type: f1
value: 57.12893860987984
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.88433086751849
- type: f1
value: 48.2272350802058
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.4425016812374
- type: f1
value: 64.61463095996173
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.04707464694015
- type: f1
value: 75.05099199098998
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.50437121721586
- type: f1
value: 69.83397721096314
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.94283792871553
- type: f1
value: 68.8704663703913
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.79488903833222
- type: f1
value: 63.615424063345436
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.88231338264963
- type: f1
value: 68.57892302593237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.248150638870214
- type: f1
value: 61.06680605338809
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.84196368527236
- type: f1
value: 74.52566464968763
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.8285137861466
- type: f1
value: 74.8853197608802
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.13248150638869
- type: f1
value: 74.3982040999179
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.49024882313383
- type: f1
value: 73.82153848368573
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.72158708809684
- type: f1
value: 71.85049433180541
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.137861466039
- type: f1
value: 75.37628348188467
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.86953597848016
- type: f1
value: 71.87537624521661
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.27572293207801
- type: f1
value: 68.80017302344231
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.09952925353059
- type: f1
value: 76.07992707688408
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.140551445864155
- type: f1
value: 61.73855010331415
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.27774041694687
- type: f1
value: 64.83664868894539
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.69468728984533
- type: f1
value: 64.76239666920868
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.44653665097512
- type: f1
value: 73.14646052013873
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.71351714862139
- type: f1
value: 66.67212180163382
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.9946200403497
- type: f1
value: 73.87348793725525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.15400134498992
- type: f1
value: 67.09433241421094
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.11365164761264
- type: f1
value: 73.59502539433753
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.82582380632145
- type: f1
value: 76.89992945316313
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.81237390719569
- type: f1
value: 72.36499770986265
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.480506569594695
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.71252128004552
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.421396787056548
- type: mrr
value: 32.48155274872267
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.595
- type: map_at_10
value: 12.642000000000001
- type: map_at_100
value: 15.726
- type: map_at_1000
value: 17.061999999999998
- type: map_at_3
value: 9.125
- type: map_at_5
value: 10.866000000000001
- type: mrr_at_1
value: 43.344
- type: mrr_at_10
value: 52.227999999999994
- type: mrr_at_100
value: 52.898999999999994
- type: mrr_at_1000
value: 52.944
- type: mrr_at_3
value: 49.845
- type: mrr_at_5
value: 51.115
- type: ndcg_at_1
value: 41.949999999999996
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 30.869999999999997
- type: ndcg_at_1000
value: 39.487
- type: ndcg_at_3
value: 38.903999999999996
- type: ndcg_at_5
value: 37.236999999999995
- type: precision_at_1
value: 43.344
- type: precision_at_10
value: 25.480000000000004
- type: precision_at_100
value: 7.672
- type: precision_at_1000
value: 2.028
- type: precision_at_3
value: 36.636
- type: precision_at_5
value: 32.632
- type: recall_at_1
value: 5.595
- type: recall_at_10
value: 16.466
- type: recall_at_100
value: 31.226
- type: recall_at_1000
value: 62.778999999999996
- type: recall_at_3
value: 9.931
- type: recall_at_5
value: 12.884
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.414
- type: map_at_10
value: 56.754000000000005
- type: map_at_100
value: 57.457
- type: map_at_1000
value: 57.477999999999994
- type: map_at_3
value: 52.873999999999995
- type: map_at_5
value: 55.175
- type: mrr_at_1
value: 45.278
- type: mrr_at_10
value: 59.192
- type: mrr_at_100
value: 59.650000000000006
- type: mrr_at_1000
value: 59.665
- type: mrr_at_3
value: 56.141
- type: mrr_at_5
value: 57.998000000000005
- type: ndcg_at_1
value: 45.278
- type: ndcg_at_10
value: 64.056
- type: ndcg_at_100
value: 66.89
- type: ndcg_at_1000
value: 67.364
- type: ndcg_at_3
value: 56.97
- type: ndcg_at_5
value: 60.719
- type: precision_at_1
value: 45.278
- type: precision_at_10
value: 9.994
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.512
- type: precision_at_5
value: 17.509
- type: recall_at_1
value: 40.414
- type: recall_at_10
value: 83.596
- type: recall_at_100
value: 95.72
- type: recall_at_1000
value: 99.24
- type: recall_at_3
value: 65.472
- type: recall_at_5
value: 74.039
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.352
- type: map_at_10
value: 84.369
- type: map_at_100
value: 85.02499999999999
- type: map_at_1000
value: 85.04
- type: map_at_3
value: 81.42399999999999
- type: map_at_5
value: 83.279
- type: mrr_at_1
value: 81.05
- type: mrr_at_10
value: 87.401
- type: mrr_at_100
value: 87.504
- type: mrr_at_1000
value: 87.505
- type: mrr_at_3
value: 86.443
- type: mrr_at_5
value: 87.10799999999999
- type: ndcg_at_1
value: 81.04
- type: ndcg_at_10
value: 88.181
- type: ndcg_at_100
value: 89.411
- type: ndcg_at_1000
value: 89.507
- type: ndcg_at_3
value: 85.28099999999999
- type: ndcg_at_5
value: 86.888
- type: precision_at_1
value: 81.04
- type: precision_at_10
value: 13.406
- type: precision_at_100
value: 1.5350000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.31
- type: precision_at_5
value: 24.54
- type: recall_at_1
value: 70.352
- type: recall_at_10
value: 95.358
- type: recall_at_100
value: 99.541
- type: recall_at_1000
value: 99.984
- type: recall_at_3
value: 87.111
- type: recall_at_5
value: 91.643
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.54068723291946
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.216287629895994
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.023000000000001
- type: map_at_10
value: 10.071
- type: map_at_100
value: 11.892
- type: map_at_1000
value: 12.196
- type: map_at_3
value: 7.234
- type: map_at_5
value: 8.613999999999999
- type: mrr_at_1
value: 19.900000000000002
- type: mrr_at_10
value: 30.516
- type: mrr_at_100
value: 31.656000000000002
- type: mrr_at_1000
value: 31.723000000000003
- type: mrr_at_3
value: 27.400000000000002
- type: mrr_at_5
value: 29.270000000000003
- type: ndcg_at_1
value: 19.900000000000002
- type: ndcg_at_10
value: 17.474
- type: ndcg_at_100
value: 25.020999999999997
- type: ndcg_at_1000
value: 30.728
- type: ndcg_at_3
value: 16.588
- type: ndcg_at_5
value: 14.498
- type: precision_at_1
value: 19.900000000000002
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 2.011
- type: precision_at_1000
value: 0.33899999999999997
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 12.839999999999998
- type: recall_at_1
value: 4.023000000000001
- type: recall_at_10
value: 18.497
- type: recall_at_100
value: 40.8
- type: recall_at_1000
value: 68.812
- type: recall_at_3
value: 9.508
- type: recall_at_5
value: 12.983
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.967008785134
- type: cos_sim_spearman
value: 80.23142141101837
- type: euclidean_pearson
value: 81.20166064704539
- type: euclidean_spearman
value: 80.18961335654585
- type: manhattan_pearson
value: 81.13925443187625
- type: manhattan_spearman
value: 80.07948723044424
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.94262461316023
- type: cos_sim_spearman
value: 80.01596278563865
- type: euclidean_pearson
value: 83.80799622922581
- type: euclidean_spearman
value: 79.94984954947103
- type: manhattan_pearson
value: 83.68473841756281
- type: manhattan_spearman
value: 79.84990707951822
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 80.57346443146068
- type: cos_sim_spearman
value: 81.54689837570866
- type: euclidean_pearson
value: 81.10909881516007
- type: euclidean_spearman
value: 81.56746243261762
- type: manhattan_pearson
value: 80.87076036186582
- type: manhattan_spearman
value: 81.33074987964402
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.54733787179849
- type: cos_sim_spearman
value: 77.72202105610411
- type: euclidean_pearson
value: 78.9043595478849
- type: euclidean_spearman
value: 77.93422804309435
- type: manhattan_pearson
value: 78.58115121621368
- type: manhattan_spearman
value: 77.62508135122033
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.59880017237558
- type: cos_sim_spearman
value: 89.31088630824758
- type: euclidean_pearson
value: 88.47069261564656
- type: euclidean_spearman
value: 89.33581971465233
- type: manhattan_pearson
value: 88.40774264100956
- type: manhattan_spearman
value: 89.28657485627835
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.08055117917084
- type: cos_sim_spearman
value: 85.78491813080304
- type: euclidean_pearson
value: 84.99329155500392
- type: euclidean_spearman
value: 85.76728064677287
- type: manhattan_pearson
value: 84.87947428989587
- type: manhattan_spearman
value: 85.62429454917464
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.14190939287384
- type: cos_sim_spearman
value: 82.27331573306041
- type: euclidean_pearson
value: 81.891896953716
- type: euclidean_spearman
value: 82.37695542955998
- type: manhattan_pearson
value: 81.73123869460504
- type: manhattan_spearman
value: 82.19989168441421
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.84695301843362
- type: cos_sim_spearman
value: 77.87790986014461
- type: euclidean_pearson
value: 76.91981583106315
- type: euclidean_spearman
value: 77.88154772749589
- type: manhattan_pearson
value: 76.94953277451093
- type: manhattan_spearman
value: 77.80499230728604
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.44657840482016
- type: cos_sim_spearman
value: 75.05531095119674
- type: euclidean_pearson
value: 75.88161755829299
- type: euclidean_spearman
value: 74.73176238219332
- type: manhattan_pearson
value: 75.63984765635362
- type: manhattan_spearman
value: 74.86476440770737
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.64700140524133
- type: cos_sim_spearman
value: 86.16014210425672
- type: euclidean_pearson
value: 86.49086860843221
- type: euclidean_spearman
value: 86.09729326815614
- type: manhattan_pearson
value: 86.43406265125513
- type: manhattan_spearman
value: 86.17740150939994
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.91170098764921
- type: cos_sim_spearman
value: 88.12437004058931
- type: euclidean_pearson
value: 88.81828254494437
- type: euclidean_spearman
value: 88.14831794572122
- type: manhattan_pearson
value: 88.93442183448961
- type: manhattan_spearman
value: 88.15254630778304
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.91390577997292
- type: cos_sim_spearman
value: 71.22979457536074
- type: euclidean_pearson
value: 74.40314008106749
- type: euclidean_spearman
value: 72.54972136083246
- type: manhattan_pearson
value: 73.85687539530218
- type: manhattan_spearman
value: 72.09500771742637
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.9301067983089
- type: cos_sim_spearman
value: 80.74989828346473
- type: euclidean_pearson
value: 81.36781301814257
- type: euclidean_spearman
value: 80.9448819964426
- type: manhattan_pearson
value: 81.0351322685609
- type: manhattan_spearman
value: 80.70192121844177
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.13820465980005
- type: cos_sim_spearman
value: 86.73532498758757
- type: euclidean_pearson
value: 87.21329451846637
- type: euclidean_spearman
value: 86.57863198601002
- type: manhattan_pearson
value: 87.06973713818554
- type: manhattan_spearman
value: 86.47534918791499
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.48720108904415
- type: cos_sim_spearman
value: 85.62221757068387
- type: euclidean_pearson
value: 86.1010129512749
- type: euclidean_spearman
value: 85.86580966509942
- type: manhattan_pearson
value: 86.26800938808971
- type: manhattan_spearman
value: 85.88902721678429
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 83.98021347333516
- type: cos_sim_spearman
value: 84.53806553803501
- type: euclidean_pearson
value: 84.61483347248364
- type: euclidean_spearman
value: 85.14191408011702
- type: manhattan_pearson
value: 84.75297588825967
- type: manhattan_spearman
value: 85.33176753669242
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.51856644893233
- type: cos_sim_spearman
value: 85.27510748506413
- type: euclidean_pearson
value: 85.09886861540977
- type: euclidean_spearman
value: 85.62579245860887
- type: manhattan_pearson
value: 84.93017860464607
- type: manhattan_spearman
value: 85.5063988898453
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.581573200584195
- type: cos_sim_spearman
value: 63.05503590247928
- type: euclidean_pearson
value: 63.652564812602094
- type: euclidean_spearman
value: 62.64811520876156
- type: manhattan_pearson
value: 63.506842893061076
- type: manhattan_spearman
value: 62.51289573046917
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.2248801729127
- type: cos_sim_spearman
value: 56.5936604678561
- type: euclidean_pearson
value: 43.98149464089
- type: euclidean_spearman
value: 56.108561882423615
- type: manhattan_pearson
value: 43.86880305903564
- type: manhattan_spearman
value: 56.04671150510166
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.17564527009831
- type: cos_sim_spearman
value: 64.57978560979488
- type: euclidean_pearson
value: 58.8818330154583
- type: euclidean_spearman
value: 64.99214839071281
- type: manhattan_pearson
value: 58.72671436121381
- type: manhattan_spearman
value: 65.10713416616109
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 26.772131864023297
- type: cos_sim_spearman
value: 34.68200792408681
- type: euclidean_pearson
value: 16.68082419005441
- type: euclidean_spearman
value: 34.83099932652166
- type: manhattan_pearson
value: 16.52605949659529
- type: manhattan_spearman
value: 34.82075801399475
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.42415189043831
- type: cos_sim_spearman
value: 63.54594264576758
- type: euclidean_pearson
value: 57.36577498297745
- type: euclidean_spearman
value: 63.111466379158074
- type: manhattan_pearson
value: 57.584543715873885
- type: manhattan_spearman
value: 63.22361054139183
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.55216762405518
- type: cos_sim_spearman
value: 56.98670142896412
- type: euclidean_pearson
value: 50.15318757562699
- type: euclidean_spearman
value: 56.524941926541906
- type: manhattan_pearson
value: 49.955618528674904
- type: manhattan_spearman
value: 56.37102209240117
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 49.20540980338571
- type: cos_sim_spearman
value: 59.9009453504406
- type: euclidean_pearson
value: 49.557749853620535
- type: euclidean_spearman
value: 59.76631621172456
- type: manhattan_pearson
value: 49.62340591181147
- type: manhattan_spearman
value: 59.94224880322436
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 51.508169956576985
- type: cos_sim_spearman
value: 66.82461565306046
- type: euclidean_pearson
value: 56.2274426480083
- type: euclidean_spearman
value: 66.6775323848333
- type: manhattan_pearson
value: 55.98277796300661
- type: manhattan_spearman
value: 66.63669848497175
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.86478788045507
- type: cos_sim_spearman
value: 76.7946552053193
- type: euclidean_pearson
value: 75.01598530490269
- type: euclidean_spearman
value: 76.83618917858281
- type: manhattan_pearson
value: 74.68337628304332
- type: manhattan_spearman
value: 76.57480204017773
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.922619099401984
- type: cos_sim_spearman
value: 56.599362477240774
- type: euclidean_pearson
value: 56.68307052369783
- type: euclidean_spearman
value: 54.28760436777401
- type: manhattan_pearson
value: 56.67763566500681
- type: manhattan_spearman
value: 53.94619541711359
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.74357206710913
- type: cos_sim_spearman
value: 72.5208244925311
- type: euclidean_pearson
value: 67.49254562186032
- type: euclidean_spearman
value: 72.02469076238683
- type: manhattan_pearson
value: 67.45251772238085
- type: manhattan_spearman
value: 72.05538819984538
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.25734330033191
- type: cos_sim_spearman
value: 76.98349083946823
- type: euclidean_pearson
value: 73.71642838667736
- type: euclidean_spearman
value: 77.01715504651384
- type: manhattan_pearson
value: 73.61712711868105
- type: manhattan_spearman
value: 77.01392571153896
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.18215462781212
- type: cos_sim_spearman
value: 65.54373266117607
- type: euclidean_pearson
value: 64.54126095439005
- type: euclidean_spearman
value: 65.30410369102711
- type: manhattan_pearson
value: 63.50332221148234
- type: manhattan_spearman
value: 64.3455878104313
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30509221440029
- type: cos_sim_spearman
value: 65.99582704642478
- type: euclidean_pearson
value: 63.43818859884195
- type: euclidean_spearman
value: 66.83172582815764
- type: manhattan_pearson
value: 63.055779168508764
- type: manhattan_spearman
value: 65.49585020501449
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.587830825340404
- type: cos_sim_spearman
value: 68.93467614588089
- type: euclidean_pearson
value: 62.3073527367404
- type: euclidean_spearman
value: 69.69758171553175
- type: manhattan_pearson
value: 61.9074580815789
- type: manhattan_spearman
value: 69.57696375597865
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.143220125577066
- type: cos_sim_spearman
value: 67.78857859159226
- type: euclidean_pearson
value: 55.58225107923733
- type: euclidean_spearman
value: 67.80662907184563
- type: manhattan_pearson
value: 56.24953502726514
- type: manhattan_spearman
value: 67.98262125431616
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 21.826928900322066
- type: cos_sim_spearman
value: 49.578506634400405
- type: euclidean_pearson
value: 27.939890138843214
- type: euclidean_spearman
value: 52.71950519136242
- type: manhattan_pearson
value: 26.39878683847546
- type: manhattan_spearman
value: 47.54609580342499
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.27603854632001
- type: cos_sim_spearman
value: 50.709255283710995
- type: euclidean_pearson
value: 59.5419024445929
- type: euclidean_spearman
value: 50.709255283710995
- type: manhattan_pearson
value: 59.03256832438492
- type: manhattan_spearman
value: 61.97797868009122
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.00757054859712
- type: cos_sim_spearman
value: 87.29283629622222
- type: euclidean_pearson
value: 86.54824171775536
- type: euclidean_spearman
value: 87.24364730491402
- type: manhattan_pearson
value: 86.5062156915074
- type: manhattan_spearman
value: 87.15052170378574
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.03549357197389
- type: mrr
value: 95.05437645143527
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.260999999999996
- type: map_at_10
value: 66.259
- type: map_at_100
value: 66.884
- type: map_at_1000
value: 66.912
- type: map_at_3
value: 63.685
- type: map_at_5
value: 65.35499999999999
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 67.5
- type: mrr_at_100
value: 68.013
- type: mrr_at_1000
value: 68.038
- type: mrr_at_3
value: 65.61099999999999
- type: mrr_at_5
value: 66.861
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 70.41
- type: ndcg_at_100
value: 73.10600000000001
- type: ndcg_at_1000
value: 73.846
- type: ndcg_at_3
value: 66.133
- type: ndcg_at_5
value: 68.499
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.232999999999999
- type: precision_at_100
value: 1.0630000000000002
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.667
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 57.260999999999996
- type: recall_at_10
value: 81.94399999999999
- type: recall_at_100
value: 93.867
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.339
- type: recall_at_5
value: 76.25
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74356435643564
- type: cos_sim_ap
value: 93.13411948212683
- type: cos_sim_f1
value: 86.80521991300147
- type: cos_sim_precision
value: 84.00374181478017
- type: cos_sim_recall
value: 89.8
- type: dot_accuracy
value: 99.67920792079208
- type: dot_ap
value: 89.27277565444479
- type: dot_f1
value: 83.9276990718124
- type: dot_precision
value: 82.04393505253104
- type: dot_recall
value: 85.9
- type: euclidean_accuracy
value: 99.74257425742574
- type: euclidean_ap
value: 93.17993008259062
- type: euclidean_f1
value: 86.69396110542476
- type: euclidean_precision
value: 88.78406708595388
- type: euclidean_recall
value: 84.7
- type: manhattan_accuracy
value: 99.74257425742574
- type: manhattan_ap
value: 93.14413755550099
- type: manhattan_f1
value: 86.82483594144371
- type: manhattan_precision
value: 87.66564729867483
- type: manhattan_recall
value: 86
- type: max_accuracy
value: 99.74356435643564
- type: max_ap
value: 93.17993008259062
- type: max_f1
value: 86.82483594144371
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.525863806168566
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.68850574423839
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.71580650644033
- type: mrr
value: 50.50971903913081
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.152190498799484
- type: cos_sim_spearman
value: 29.686180371952727
- type: dot_pearson
value: 27.248664793816342
- type: dot_spearman
value: 28.37748983721745
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.20400000000000001
- type: map_at_10
value: 1.6209999999999998
- type: map_at_100
value: 9.690999999999999
- type: map_at_1000
value: 23.733
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.885
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.56700000000001
- type: mrr_at_100
value: 86.56700000000001
- type: mrr_at_1000
value: 86.56700000000001
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 86.56700000000001
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 71.326
- type: ndcg_at_100
value: 54.208999999999996
- type: ndcg_at_1000
value: 49.252
- type: ndcg_at_3
value: 74.235
- type: ndcg_at_5
value: 73.833
- type: precision_at_1
value: 78
- type: precision_at_10
value: 74.8
- type: precision_at_100
value: 55.50000000000001
- type: precision_at_1000
value: 21.836
- type: precision_at_3
value: 78
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.20400000000000001
- type: recall_at_10
value: 1.894
- type: recall_at_100
value: 13.245999999999999
- type: recall_at_1000
value: 46.373
- type: recall_at_3
value: 0.613
- type: recall_at_5
value: 0.991
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.69999999999999
- type: precision
value: 94.11666666666667
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.20809248554913
- type: f1
value: 63.431048720066066
- type: precision
value: 61.69143958161298
- type: recall
value: 68.20809248554913
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.21951219512195
- type: f1
value: 66.82926829268293
- type: precision
value: 65.1260162601626
- type: recall
value: 71.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.26666666666667
- type: precision
value: 95.8
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.3
- type: f1
value: 99.06666666666666
- type: precision
value: 98.95
- type: recall
value: 99.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.63333333333333
- type: precision
value: 96.26666666666668
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.86666666666666
- type: precision
value: 94.31666666666668
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.01492537313433
- type: f1
value: 40.178867566927266
- type: precision
value: 38.179295828549556
- type: recall
value: 47.01492537313433
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.62537480063796
- type: precision
value: 82.44555555555554
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.48780487804879
- type: f1
value: 75.45644599303138
- type: precision
value: 73.37398373983739
- type: recall
value: 80.48780487804879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.95666666666666
- type: precision
value: 91.125
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.73754556500607
- type: f1
value: 89.65168084244632
- type: precision
value: 88.73025516403402
- type: recall
value: 91.73754556500607
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.04347826086956
- type: f1
value: 76.2128364389234
- type: precision
value: 74.2
- type: recall
value: 81.04347826086956
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.65217391304348
- type: f1
value: 79.4376811594203
- type: precision
value: 77.65797101449274
- type: recall
value: 83.65217391304348
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.5
- type: f1
value: 85.02690476190476
- type: precision
value: 83.96261904761904
- type: recall
value: 87.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.3
- type: f1
value: 86.52333333333333
- type: precision
value: 85.22833333333332
- type: recall
value: 89.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.01809408926418
- type: f1
value: 59.00594446432805
- type: precision
value: 56.827215807915444
- type: recall
value: 65.01809408926418
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.2
- type: f1
value: 88.58
- type: precision
value: 87.33333333333334
- type: recall
value: 91.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.199999999999996
- type: f1
value: 53.299166276284915
- type: precision
value: 51.3383908045977
- type: recall
value: 59.199999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.2
- type: precision
value: 90.25
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.76190476190476
- type: f1
value: 59.867110667110666
- type: precision
value: 58.07390192653351
- type: recall
value: 64.76190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.2
- type: f1
value: 71.48147546897547
- type: precision
value: 69.65409090909091
- type: recall
value: 76.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.8
- type: f1
value: 92.14
- type: precision
value: 91.35833333333333
- type: recall
value: 93.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.2
- type: precision
value: 96.85000000000001
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 92.93333333333334
- type: precision
value: 92.13333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.1
- type: f1
value: 69.14817460317461
- type: precision
value: 67.2515873015873
- type: recall
value: 74.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 94.01333333333335
- type: precision
value: 93.46666666666667
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.9
- type: f1
value: 72.07523809523809
- type: precision
value: 70.19777777777779
- type: recall
value: 76.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.31666666666666
- type: precision
value: 91.43333333333332
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.76666666666668
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.85714285714286
- type: f1
value: 90.92093441150045
- type: precision
value: 90.00449236298293
- type: recall
value: 92.85714285714286
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.16239316239316
- type: f1
value: 91.33903133903132
- type: precision
value: 90.56267806267806
- type: recall
value: 93.16239316239316
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.25666666666666
- type: precision
value: 89.25833333333334
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.22727272727272
- type: f1
value: 87.53030303030303
- type: precision
value: 86.37121212121211
- type: recall
value: 90.22727272727272
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.03563941299791
- type: f1
value: 74.7349505840072
- type: precision
value: 72.9035639412998
- type: recall
value: 79.03563941299791
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97
- type: f1
value: 96.15
- type: precision
value: 95.76666666666668
- type: recall
value: 97
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.26459143968872
- type: f1
value: 71.55642023346303
- type: precision
value: 69.7544932369835
- type: recall
value: 76.26459143968872
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.119658119658126
- type: f1
value: 51.65242165242165
- type: precision
value: 49.41768108434775
- type: recall
value: 58.119658119658126
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.3
- type: f1
value: 69.52055555555555
- type: precision
value: 67.7574938949939
- type: recall
value: 74.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.31666666666666
- type: precision
value: 92.60000000000001
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.63551401869158
- type: f1
value: 72.35202492211837
- type: precision
value: 70.60358255451713
- type: recall
value: 76.63551401869158
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.4811111111111
- type: precision
value: 87.7452380952381
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95
- type: f1
value: 93.60666666666667
- type: precision
value: 92.975
- type: recall
value: 95
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 63.01595782872099
- type: precision
value: 61.596587301587306
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.52999999999999
- type: precision
value: 94
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.28999999999999
- type: precision
value: 92.675
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.75
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.9
- type: f1
value: 89.83
- type: precision
value: 88.92
- type: recall
value: 91.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34222222222223
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.333333333333336
- type: f1
value: 55.31203703703703
- type: precision
value: 53.39971108326371
- type: recall
value: 60.333333333333336
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 12.9
- type: f1
value: 11.099861903031458
- type: precision
value: 10.589187932631877
- type: recall
value: 12.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7
- type: f1
value: 83.0152380952381
- type: precision
value: 81.37833333333333
- type: recall
value: 86.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.39285714285714
- type: f1
value: 56.832482993197274
- type: precision
value: 54.56845238095237
- type: recall
value: 63.39285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.73765093304062
- type: f1
value: 41.555736920720456
- type: precision
value: 39.06874531737319
- type: recall
value: 48.73765093304062
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 41.099999999999994
- type: f1
value: 36.540165945165946
- type: precision
value: 35.05175685425686
- type: recall
value: 41.099999999999994
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.42333333333333
- type: precision
value: 92.75833333333333
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.63333333333334
- type: precision
value: 93.01666666666665
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.64833333333334
- type: precision
value: 71.90282106782105
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.4
- type: f1
value: 54.90521367521367
- type: precision
value: 53.432840025471606
- type: recall
value: 59.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.6
- type: precision
value: 96.2
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 62.25926129426129
- type: precision
value: 60.408376623376626
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.60666666666667
- type: precision
value: 86.45277777777778
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97
- type: precision
value: 96.65
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39746031746031
- type: precision
value: 90.6125
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 32.11678832116788
- type: f1
value: 27.210415386260234
- type: precision
value: 26.20408990846947
- type: recall
value: 32.11678832116788
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.787319277832475
- type: precision
value: 6.3452094433344435
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.08
- type: precision
value: 94.61666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.3
- type: f1
value: 93.88333333333333
- type: precision
value: 93.18333333333332
- type: recall
value: 95.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.11904761904762
- type: f1
value: 80.69444444444444
- type: precision
value: 78.72023809523809
- type: recall
value: 85.11904761904762
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 11.1
- type: f1
value: 9.276381801735853
- type: precision
value: 8.798174603174601
- type: recall
value: 11.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.56107660455487
- type: f1
value: 58.70433569191332
- type: precision
value: 56.896926581464015
- type: recall
value: 63.56107660455487
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.10000000000001
- type: precision
value: 92.35
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 96.01222222222222
- type: precision
value: 95.67083333333332
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.2
- type: f1
value: 7.911555250305249
- type: precision
value: 7.631246556216846
- type: recall
value: 9.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.48917748917748
- type: f1
value: 72.27375798804371
- type: precision
value: 70.14430014430013
- type: recall
value: 77.48917748917748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.09923664122137
- type: f1
value: 72.61541257724463
- type: precision
value: 70.8998380754106
- type: recall
value: 77.09923664122137
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2532751091703
- type: f1
value: 97.69529354682193
- type: precision
value: 97.42843279961184
- type: recall
value: 98.2532751091703
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.8
- type: f1
value: 79.14672619047619
- type: precision
value: 77.59489247311828
- type: recall
value: 82.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.35028248587571
- type: f1
value: 92.86252354048965
- type: precision
value: 92.2080979284369
- type: recall
value: 94.35028248587571
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.282429263935621
- type: precision
value: 5.783274240739785
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 91.025
- type: precision
value: 90.30428571428571
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81
- type: f1
value: 77.8232380952381
- type: precision
value: 76.60194444444444
- type: recall
value: 81
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91
- type: f1
value: 88.70857142857142
- type: precision
value: 87.7
- type: recall
value: 91
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.3
- type: precision
value: 94.76666666666667
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.1
- type: f1
value: 7.001008218834307
- type: precision
value: 6.708329562594269
- type: recall
value: 8.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.1313672922252
- type: f1
value: 84.09070598748882
- type: precision
value: 82.79171454104429
- type: recall
value: 87.1313672922252
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.73333333333332
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.29249011857708
- type: f1
value: 36.981018542283365
- type: precision
value: 35.415877813576024
- type: recall
value: 42.29249011857708
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.80281690140845
- type: f1
value: 80.86854460093896
- type: precision
value: 79.60093896713614
- type: recall
value: 83.80281690140845
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.26946107784431
- type: f1
value: 39.80235464678088
- type: precision
value: 38.14342660001342
- type: recall
value: 45.26946107784431
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.9
- type: precision
value: 92.26666666666668
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.93103448275862
- type: f1
value: 33.15192743764172
- type: precision
value: 31.57456528146183
- type: recall
value: 37.93103448275862
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.01408450704226
- type: f1
value: 63.41549295774648
- type: precision
value: 61.342778895595806
- type: recall
value: 69.01408450704226
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.66666666666667
- type: f1
value: 71.60705960705961
- type: precision
value: 69.60683760683762
- type: recall
value: 76.66666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.48333333333333
- type: precision
value: 93.83333333333333
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 52.81837160751566
- type: f1
value: 48.435977731384824
- type: precision
value: 47.11291973845539
- type: recall
value: 52.81837160751566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.9
- type: f1
value: 38.88962621607783
- type: precision
value: 36.95936507936508
- type: recall
value: 44.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.55374592833876
- type: f1
value: 88.22553125484721
- type: precision
value: 87.26927252985884
- type: recall
value: 90.55374592833876
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.13333333333333
- type: precision
value: 92.45333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.99666666666667
- type: precision
value: 91.26666666666668
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.03937007874016
- type: f1
value: 81.75853018372703
- type: precision
value: 80.34120734908137
- type: recall
value: 85.03937007874016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.3
- type: f1
value: 85.5
- type: precision
value: 84.25833333333334
- type: recall
value: 88.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.51246537396122
- type: f1
value: 60.02297410192148
- type: precision
value: 58.133467727289236
- type: recall
value: 65.51246537396122
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.89
- type: precision
value: 94.39166666666667
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.692307692307686
- type: f1
value: 53.162393162393165
- type: precision
value: 51.70673076923077
- type: recall
value: 57.692307692307686
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.60000000000001
- type: f1
value: 89.21190476190475
- type: precision
value: 88.08666666666667
- type: recall
value: 91.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88
- type: f1
value: 85.47
- type: precision
value: 84.43266233766234
- type: recall
value: 88
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 90.64999999999999
- type: precision
value: 89.68333333333332
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.30660377358491
- type: f1
value: 76.33044137466307
- type: precision
value: 74.78970125786164
- type: recall
value: 80.30660377358491
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.44
- type: precision
value: 94.99166666666666
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.53284671532847
- type: f1
value: 95.37712895377129
- type: precision
value: 94.7992700729927
- type: recall
value: 96.53284671532847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89
- type: f1
value: 86.23190476190476
- type: precision
value: 85.035
- type: recall
value: 89
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.585
- type: map_at_10
value: 9.012
- type: map_at_100
value: 14.027000000000001
- type: map_at_1000
value: 15.565000000000001
- type: map_at_3
value: 5.032
- type: map_at_5
value: 6.657
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 45.377
- type: mrr_at_100
value: 46.119
- type: mrr_at_1000
value: 46.127
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 42.585
- type: ndcg_at_1
value: 27.551
- type: ndcg_at_10
value: 23.395
- type: ndcg_at_100
value: 33.342
- type: ndcg_at_1000
value: 45.523
- type: ndcg_at_3
value: 25.158
- type: ndcg_at_5
value: 23.427
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 21.429000000000002
- type: precision_at_100
value: 6.714
- type: precision_at_1000
value: 1.473
- type: precision_at_3
value: 27.211000000000002
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.585
- type: recall_at_10
value: 15.418999999999999
- type: recall_at_100
value: 42.485
- type: recall_at_1000
value: 79.536
- type: recall_at_3
value: 6.239999999999999
- type: recall_at_5
value: 8.996
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.3234
- type: ap
value: 14.361688653847423
- type: f1
value: 54.819068624319044
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.97792869269949
- type: f1
value: 62.28965628513728
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 38.90540145385218
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.53513739047506
- type: cos_sim_ap
value: 75.27741586677557
- type: cos_sim_f1
value: 69.18792902473774
- type: cos_sim_precision
value: 67.94708725515136
- type: cos_sim_recall
value: 70.47493403693932
- type: dot_accuracy
value: 84.7052512368123
- type: dot_ap
value: 69.36075482849378
- type: dot_f1
value: 64.44688376631296
- type: dot_precision
value: 59.92288500793831
- type: dot_recall
value: 69.70976253298153
- type: euclidean_accuracy
value: 86.60666388508076
- type: euclidean_ap
value: 75.47512772621097
- type: euclidean_f1
value: 69.413872536473
- type: euclidean_precision
value: 67.39562624254472
- type: euclidean_recall
value: 71.55672823218997
- type: manhattan_accuracy
value: 86.52917684925792
- type: manhattan_ap
value: 75.34000110496703
- type: manhattan_f1
value: 69.28489190226429
- type: manhattan_precision
value: 67.24608889992551
- type: manhattan_recall
value: 71.45118733509234
- type: max_accuracy
value: 86.60666388508076
- type: max_ap
value: 75.47512772621097
- type: max_f1
value: 69.413872536473
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.01695967710637
- type: cos_sim_ap
value: 85.8298270742901
- type: cos_sim_f1
value: 78.46988128389272
- type: cos_sim_precision
value: 74.86017897091722
- type: cos_sim_recall
value: 82.44533415460425
- type: dot_accuracy
value: 88.19420188613343
- type: dot_ap
value: 83.82679165901324
- type: dot_f1
value: 76.55833777304208
- type: dot_precision
value: 75.6884875846501
- type: dot_recall
value: 77.44841392054204
- type: euclidean_accuracy
value: 89.03054294252338
- type: euclidean_ap
value: 85.89089555185325
- type: euclidean_f1
value: 78.62997658079624
- type: euclidean_precision
value: 74.92329149232914
- type: euclidean_recall
value: 82.72251308900523
- type: manhattan_accuracy
value: 89.0266620095471
- type: manhattan_ap
value: 85.86458997929147
- type: manhattan_f1
value: 78.50685331000291
- type: manhattan_precision
value: 74.5499861534201
- type: manhattan_recall
value: 82.90729904527257
- type: max_accuracy
value: 89.03054294252338
- type: max_ap
value: 85.89089555185325
- type: max_f1
value: 78.62997658079624
---
# KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF
This model was converted to GGUF format from [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
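Note that the underlying `intfloat/multilingual-e5-large` is an embedding model rather than a chat model, so you will usually want embeddings rather than generated text. A minimal sketch using llama.cpp's embedding tool (assuming a recent build that supports `--hf-repo`; the `query:` prefix follows the E5 convention):
```bash
llama-embedding --hf-repo KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "query: how much protein should a female eat"
```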
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo KeyurRamoliya/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct | aisingapore | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"zh",
"vi",
"id",
"th",
"fil",
"ta",
"ms",
"km",
"lo",
"my",
"jv",
"su",
"arxiv:2309.06085",
"arxiv:2311.07911",
"arxiv:2306.05685",
"base_model:aisingapore/llama3.1-8b-cpt-sea-lionv3-base",
"base_model:finetune:aisingapore/llama3.1-8b-cpt-sea-lionv3-base",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,733,912,441,000 | 2024-12-19T12:49:17 | 3,931 | 4 | ---
base_model:
- aisingapore/llama3.1-8b-cpt-sea-lionv3-base
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
- jv
- su
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
---
<div>
<img src="llama_3.1_8b_sea-lion_v3_instruct_banner.png"/>
</div>
# Llama3.1 8B CPT SEA-LIONv3 Instruct
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama3.1 8B CPT SEA-LIONv3 Instruct is a multilingual model that has been fine-tuned in two stages on approximately **12.3M English instruction-completion pairs** alongside a pool of **4.5M Southeast Asian instruction-completion pairs** from SEA languages such as Indonesian, Javanese, Sundanese, Tamil, Thai and Vietnamese.
SEA-LION stands for _Southeast Asian Languages In One Network_.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages supported:** Burmese, Chinese, English, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tamil, Thai, Vietnamese
- **License:** [Llama 3.1 Community License](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)
## Model Details
### Model Description
We performed instruction tuning in English and also in SEA languages such as Indonesian, Javanese, Sundanese, Tamil, Thai and Vietnamese on our [continued pre-trained Llama3.1 8B CPT SEA-LIONv3 Base](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-base), a decoder model using the Llama 3.1 architecture, to create Llama3.1 8B CPT SEA-LIONv3 Instruct.
For tokenisation, the model employs the default tokenizer used in Llama 3.1 8B Instruct. The model has a context length of 128k.
### Benchmark Performance
We evaluated Llama3.1 8B CPT SEA-LIONv3 Instruct on both general language capabilities and instruction-following capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed the [SEA-HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarisation (Abssum), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: SEA-HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
The evaluation was done **zero-shot** with native prompts on a sample of 100-1000 instances for each dataset.
#### Instruction-following Capabilities
Since Llama3.1 8B CPT SEA-LIONv3 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, SEA-IFEval (based on [IFEval](https://arxiv.org/abs/2311.07911)) and SEA-MTBench (based on [MT-Bench](https://arxiv.org/abs/2306.05685)).
As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localise and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
**SEA-IFEval**
SEA-IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalised by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
**SEA-MTBench**
SEA-MTBench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use `gpt-4-1106-preview` as the judge model and compare against `gpt-3.5-turbo-0125` as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category: Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction). A tie is given a score of 0.5.
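As a rough illustration (not the official evaluation code), the weighted win rate described above can be computed as follows; the per-category counts are purely hypothetical:
```python
# Hypothetical per-category results against the baseline: (wins, ties, total comparisons)
results = {
    "Math": (6, 1, 10), "Reasoning": (5, 2, 10), "STEM": (7, 0, 10),
    "Humanities": (8, 1, 10), "Roleplay": (6, 2, 10), "Writing": (7, 1, 10),
    "Extraction": (5, 3, 10),
}

# Win rate per category, with a tie counted as half a win
win_rates = {cat: (wins + 0.5 * ties) / total for cat, (wins, ties, total) in results.items()}

# Weighted win rate = average win rate across the categories
weighted_win_rate = sum(win_rates.values()) / len(win_rates)
print(f"Weighted win rate: {weighted_win_rate:.3f}")
```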
For more details on Llama3.1 8B CPT SEA-LIONv3 Instruct benchmark performance, please refer to the SEA-HELM leaderboard, https://leaderboard.sea-lion.ai/.
### Usage
Llama3.1 8B CPT SEA-LIONv3 Instruct can be run using the 🤗 Transformers library
```python
import transformers
import torch
model_id = "aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
## Limitations
### Safety
Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Llama3.1 8B CPT SEA-LIONv3 Instruct was tuned using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. Fine-tuning took approximately 1024 GPU hours on a single node of 8x H100-80GB GPUs.
## Data
Llama3.1 8B CPT SEA-LIONv3 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes. | [
"CHIA"
] | Non_BioNLP |
ntc-ai/SDXL-LoRA-slider.celestial | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,704,441,954,000 | 2024-01-05T08:05:57 | 33 | 1 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/evaluate/celestial.../celestial_17_3.0.png
widget:
- text: celestial
output:
url: images/celestial_17_3.0.png
- text: celestial
output:
url: images/celestial_19_3.0.png
- text: celestial
output:
url: images/celestial_20_3.0.png
- text: celestial
output:
url: images/celestial_21_3.0.png
- text: celestial
output:
url: images/celestial_22_3.0.png
inference: false
instance_prompt: celestial
---
# ntcai.xyz slider - celestial (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/celestial_17_-3.0.png" width=256 height=256 /> | <img src="images/celestial_17_0.0.png" width=256 height=256 /> | <img src="images/celestial_17_3.0.png" width=256 height=256 /> |
| <img src="images/celestial_19_-3.0.png" width=256 height=256 /> | <img src="images/celestial_19_0.0.png" width=256 height=256 /> | <img src="images/celestial_19_3.0.png" width=256 height=256 /> |
| <img src="images/celestial_20_-3.0.png" width=256 height=256 /> | <img src="images/celestial_20_0.0.png" width=256 height=256 /> | <img src="images/celestial_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
celestial
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.celestial', weight_name='celestial.safetensors', adapter_name="celestial")
# Activate the LoRA
pipe.set_adapters(["celestial"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, celestial"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
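The comparison table above covers strengths from -3 to 3; in diffusers the slider strength corresponds to the adapter weight, so you can reuse the pipeline from the snippet above with a different (or negative) value. A small illustrative follow-up:
```python
# Reusing `pipe`, `prompt`, etc. from the snippet above; -2.0 is an illustrative strength
pipe.set_adapters(["celestial"], adapter_weights=[-2.0])
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height,
             guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result_negative.png')
```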
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 880 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
ntc-ai/SDXL-LoRA-slider.magicalenchanted | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,704,020,261,000 | 2023-12-31T10:57:44 | 7 | 2 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/evaluate/magical,enchanted.../magical,enchanted_17_3.0.png
widget:
- text: magical,enchanted
output:
url: images/magical,enchanted_17_3.0.png
- text: magical,enchanted
output:
url: images/magical,enchanted_19_3.0.png
- text: magical,enchanted
output:
url: images/magical,enchanted_20_3.0.png
- text: magical,enchanted
output:
url: images/magical,enchanted_21_3.0.png
- text: magical,enchanted
output:
url: images/magical,enchanted_22_3.0.png
inference: false
instance_prompt: magical,enchanted
---
# ntcai.xyz slider - magical,enchanted (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/magical,enchanted_17_-3.0.png" width=256 height=256 /> | <img src="images/magical,enchanted_17_0.0.png" width=256 height=256 /> | <img src="images/magical,enchanted_17_3.0.png" width=256 height=256 /> |
| <img src="images/magical,enchanted_19_-3.0.png" width=256 height=256 /> | <img src="images/magical,enchanted_19_0.0.png" width=256 height=256 /> | <img src="images/magical,enchanted_19_3.0.png" width=256 height=256 /> |
| <img src="images/magical,enchanted_20_-3.0.png" width=256 height=256 /> | <img src="images/magical,enchanted_20_0.0.png" width=256 height=256 /> | <img src="images/magical,enchanted_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
magical,enchanted
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.magicalenchanted', weight_name='magical,enchanted.safetensors', adapter_name="magical,enchanted")
# Activate the LoRA
pipe.set_adapters(["magical,enchanted"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, magical,enchanted"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 760 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
helpmefindaname/flair-eml-sapbert-bc5cdr-chemical | helpmefindaname | null | [
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | 1,703,380,990,000 | 2023-12-24T02:23:43 | 7 | 0 | ---
tags:
- flair
- entity-mention-linker
---
## sapbert-bc5cdr-chemical
Biomedical Entity Mention Linking for chemical
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
sentence = Sentence("Behavioral abnormalities in the Fmr1 KO2 Mouse Model of Fragile X Syndrome")
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("helpmefindaname/flair-eml-sapbert-bc5cdr-chemical")
dictionary = linker.dictionary
# find the candidates for the mentions
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(linker.entity_label_type):
print(f"Span: {span.text}")
for candidate_label in span.get_labels(linker.label_type):
candidate = dictionary[candidate_label.value]
print(f"Candidate: {candidate.concept_name}")
```
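Each predicted candidate label also carries a similarity score, so the loop above can be extended to print it alongside the concept name (a small sketch reusing the objects defined above):
```python
for span in sentence.get_spans(linker.entity_label_type):
    print(f"Span: {span.text}")
    for candidate_label in span.get_labels(linker.label_type):
        candidate = dictionary[candidate_label.value]
        print(f"Candidate: {candidate.concept_name} (score: {candidate_label.score:.4f})")
```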
As an alternative to downloading the already precomputed model (which requires a lot of storage), you can also build the model
and compute the embeddings for the dataset yourself using:
```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-sapbert-bc5cdr-chemical", "chemical", dictionary_name_or_path="ctd-chemicals", hybrid_search=False, entity_type="chemical-eml")
```
This will reduce the download requirements, at the cost of computation.
This EntityMentionLinker uses [dmis-lab/biosyn-sapbert-bc5cdr-chemical](https://huggingface.co/dmis-lab/biosyn-sapbert-bc5cdr-chemical) as embeddings for linking mentions to candidates.
| [
"BC5CDR"
] | BioNLP |
mradermacher/Einstein-v7-Qwen2-7B-GGUF | mradermacher | null | [
"transformers",
"gguf",
"axolotl",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"qwen",
"qwen2",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:HuggingFaceH4/no_robots",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:abacusai/SystemChat-1.1",
"dataset:H-D-T/Buzz-V1.2",
"base_model:Weyaxi/Einstein-v7-Qwen2-7B",
"base_model:quantized:Weyaxi/Einstein-v7-Qwen2-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,719,376,166,000 | 2024-06-26T13:43:00 | 552 | 0 | ---
base_model: Weyaxi/Einstein-v7-Qwen2-7B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
- abacusai/SystemChat-1.1
- H-D-T/Buzz-V1.2
language:
- en
library_name: transformers
license: other
tags:
- axolotl
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- qwen
- qwen2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
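For example, a minimal workflow might look like this (a sketch, assuming the `huggingface_hub` CLI and a recent llama.cpp build are installed; pick any quant file from the table below):
```bash
# Download one of the quantized files from this repo
huggingface-cli download mradermacher/Einstein-v7-Qwen2-7B-GGUF Einstein-v7-Qwen2-7B.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp
llama-cli -m Einstein-v7-Qwen2-7B.Q4_K_M.gguf -p "Explain the difference between nuclear fission and fusion." -n 256
```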
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-v7-Qwen2-7B-GGUF/resolve/main/Einstein-v7-Qwen2-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| [
"SCIQ"
] | Non_BioNLP |
mlx-community/1.5-Pints-16K-v0.1 | mlx-community | text-generation | [
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:pints-ai/Expository-Prose-V1",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:meta-math/MetaMathQA",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:togethercomputer/llama-instruct",
"dataset:LDJnr/Capybara",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:pints-ai/1.5-Pints-16K-v0.1",
"base_model:finetune:pints-ai/1.5-Pints-16K-v0.1",
"license:mit",
"model-index",
"region:us"
] | 1,730,136,199,000 | 2024-10-28T17:25:32 | 7 | 1 | ---
base_model: pints-ai/1.5-Pints-16K-v0.1
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
license: mit
pipeline_tag: text-generation
tags:
- mlx
extra_gated_prompt: Though best efforts has been made to ensure, as much as possible,
that all texts in the training corpora are royalty free, this does not constitute
a legal guarantee that such is the case. **By using any of the models, corpora or
part thereof, the user agrees to bear full responsibility to do the necessary due
diligence to ensure that he / she is in compliance with their local copyright laws.
Additionally, the user agrees to bear any damages arising as a direct cause (or
otherwise) of using any artifacts released by the pints research team, as well as
full responsibility for the consequences of his / her usage (or implementation)
of any such released artifacts. The user also indemnifies Pints Research Team (and
any of its members or agents) of any damage, related or unrelated, to the release
or subsequent usage of any findings, artifacts or code by the team. For the avoidance
of doubt, any artifacts released by the Pints Research team are done so in accordance
with the 'fair use' clause of Copyright Law, in hopes that this will aid the research
community in bringing LLMs to the next frontier.
extra_gated_fields:
Company: text
Country: country
Specific date: date_picker
I want to use this model for:
type: select
options:
- Research
- Education
- label: Other
value: other
I agree to use this model for in accordance to the afore-mentioned Terms of Use: checkbox
model-index:
- name: 1.5-Pints
results:
- task:
type: text-generation
dataset:
name: MTBench
type: ai2_arc
metrics:
- type: LLM-as-a-Judge
value: 3.4
name: MTBench
source:
url: https://huggingface.co/spaces/lmsys/mt-bench
name: MTBench
---
# mlx-community/1.5-Pints-16K-v0.1
The Model [mlx-community/1.5-Pints-16K-v0.1](https://huggingface.co/mlx-community/1.5-Pints-16K-v0.1) was converted to MLX format from [pints-ai/1.5-Pints-16K-v0.1](https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1) using mlx-lm version **0.19.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/1.5-Pints-16K-v0.1")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
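Alternatively, generation can be run from the command line (assuming the `mlx-lm` CLI entry points are available in your environment):
```bash
python -m mlx_lm.generate --model mlx-community/1.5-Pints-16K-v0.1 --prompt "hello"
```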
| [
"BEAR"
] | Non_BioNLP |
LoneStriker/SeaLLM-7B-v2-6.0bpw-h6-exl2 | LoneStriker | text-generation | [
"transformers",
"mistral",
"text-generation",
"multilingual",
"sea",
"conversational",
"en",
"zh",
"vi",
"id",
"th",
"ms",
"km",
"lo",
"my",
"tl",
"arxiv:2312.00738",
"arxiv:2205.11916",
"arxiv:2306.05179",
"arxiv:2306.05685",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,706,976,692,000 | 2024-02-03T16:14:09 | 8 | 0 | ---
language:
- en
- zh
- vi
- id
- th
- ms
- km
- lo
- my
- tl
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
tags:
- multilingual
- sea
---
<p align="center">
<img src="seal_logo.png" width="200" />
</p>
# *SeaLLM-7B-v2* - Large Language Models for Southeast Asia
<p align="center">
<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a>
</p>
We introduce [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it delivers stronger performance across diverse multilingual tasks, from world knowledge and math reasoning to instruction following.
### Highlights
* [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves the **7B-SOTA** on the **GSM8K** task with a **78.2** score and outperforms GPT-3.5 in many GSM8K-translated tasks in SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭) as well as MGSM (🇨🇳 🇹🇭). It also surpasses GPT-3.5 in MATH for Thai 🇹🇭.
* It scores competitively against GPT-3.5 on many zero-shot commonsense benchmarks, with **82.5, 68.3, 80.9** scores on Arc-C, Winogrande, and Hellaswag.
* It achieves a **7.54** score on the 🇬🇧 **MT-bench**, ranking 3rd on the leaderboard for the 7B category and making it the best-performing multilingual model.
* It scores **45.46** on the VMLU benchmark for Vietnamese 🇻🇳 and is the only open-source multilingual model competitive with monolingual models ([Vistral-7B](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)) of similar size.
### Release and DEMO
- DEMO: [SeaLLMs/SeaLLM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B).
- Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
- Model weights: [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2).
<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>
> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
> The logo was generated by DALL-E 3.
### What's new since SeaLLM-13B-v1 and SeaLLM-7B-v1?
* SeaLLM-7B-v2 is continue-pretrained from [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) and underwent carefully designed tuning with a focus on reasoning.
## Evaluation
### Zero-shot Multilingual Math Reasoning
[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves a **78.2** score on GSM8K, making it the **state of the art** among 7B models. It also outperforms GPT-3.5 on the same GSM8K benchmark translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **22.4** vs 18.1 scores.

<details>
<summary>See details on English and translated GSM8K and MATH</summary>
<br>
| Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1
| Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6
| Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | |
| SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4
</details>
#### Zero-shot MGSM
[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Zh and Th.
| Model | MGSM-Zh | MGSM-Th
|-----| ----- | ---
| ChatGPT (reported) | 61.2* | 47.2*
| Qwen-14B-chat | 59.6 | 28
| SeaLLM-7B-v2 | **64.8** | **62.4**
### Zero-shot Commonsense Reasoning
We compare [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) with ChatGPT and Mistral-7B-instruct on various zero-shot commonsense benchmarks (Arc-Challenge, Winogrande and Hellaswag). We use the 2-stage technique in [(Kojima et al., 2023)](https://arxiv.org/pdf/2205.11916.pdf) to extract the answer. Note that we **DID NOT** use "Let's think step-by-step" to invoke explicit CoT.
| Model | Arc-Challenge | Winogrande | Hellaswag
|-----| ----- | --- | -- |
| ChatGPT (reported) | 84.6* | 66.8* | 72.0*
| ChatGPT (reproduced) | 84.1 | 63.1 | 79.5
| Mistral-7B-Instruct | 68.1 | 56.4 | 45.6
| SeaLLM-7B-v2 | 82.5 | 68.3 | 80.9
### Multilingual World Knowledge
We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi.
| Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e
|-----| ----- | --- | -- | ----- | ---- | --- | --- | --- |
| ChatGPT | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41
|-----| ----- | --- | -- | ----- | ---- | --- | --- | --- |
| SeaLLM-13B | Multi | 52.78 | 62.69 | 44.50 | 46.45 | | 39.28 | 36.39
| Vistral-7B | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27
| SeaLLM-7B-v2 | Multi | 60.72 | 70.91 | 55.43 | 51.15 | 45.46 | 42.25 | 35.52
### MT-Bench
On the English [MT-bench](https://arxiv.org/abs/2306.05685) metric, SeaLLM-7B-v2 achieves **7.54** score on the MT-bench (3rd place on the leaderboard for 7B category), outperforms many 70B models and is arguably the only one that handles 10 SEA languages.
Refer to [mt_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/mt_bench/seallm_7b_v2.jsonl) for the MT-bench predictions of SeaLLM-7B-v2.
| Model | Access | Langs | MT-Bench
| --- | --- | --- | --- |
| GPT-4-turbo | closed | multi | 9.32
| GPT-4-0613 | closed | multi | 9.18
| Mixtral-8x7b (46B) | open | multi | 8.3
| Starling-LM-7B-alpha | open | mono (en) | 8.0
| OpenChat-3.5-7B | open | mono (en) | 7.81
| **SeaLLM-7B-v2** | **open** | **multi (10+)** | **7.54**
| [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B-Chat) | open | multi | 6.96
| [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | open | mono (en) | 6.86
| Mistral-7B-instuct | open | mono (en) | 6.84
### Sea-Bench
Similar to MT-Bench, [Sea-bench](https://huggingface.co/datasets/SeaLLMs/Sea-bench) is a set of categorized instruction test sets that measures a model's ability as an assistant, with a specific focus on 9 SEA languages, including non-Latin low-resource languages.
As shown, the huge improvements come from math-reasoning, reaching GPT-3.5 level of performance.

Refer to [sea_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/sea_bench/seallm_7b_v2.jsonl) for the Sea-bench predictions of SeaLLM-7B-v2.
### Usage
#### Instruction format
```python
prompt = """<|im_start|>system
You are a helpful assistant.</s>
<|im_start|>user
Hello world</s>
<|im_start|>assistant
Hi there, how can I help?</s>
# ! ENSURE 1 and only 1 bos `<s>` at the beginning of sequence
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))
['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁', '<0x0A>', '<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁', '<0x0A>', '<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>', '▁', '<0x0A>']
"""
```
#### Using transformers's chat_template
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2")
messages = [
{"role": "user", "content": "Hello world"},
{"role": "assistant", "content": "Hi there, how can I help you today?"},
{"role": "user", "content": "Explain general relativity in details."}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁', '<0x0A>', '<', '|', 'im ....
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
#### Using vLLM
```python
from vllm import LLM, SamplingParams
TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>"
TURN_PREFIX = "<|im_start|>{role}\n"
def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
# conversations: list of dict with key `role` and `content` (openai format)
if conversations[0]['role'] != 'system' and system_prompt is not None:
conversations = [{"role": "system", "content": system_prompt}] + conversations
text = ''
for turn_id, turn in enumerate(conversations):
prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
text += prompt
if add_assistant_prefix:
prompt = TURN_PREFIX.format(role='assistant')
text += prompt
return text
sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['</s>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2", dtype="bfloat16")
message = "Explain general relativity in details."
prompt = seallm_chat_convo_format(message, True)
gen = llm.generate(prompt, sampling_params)
print(gen[0].outputs[0].text)
```
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset, as well as evaluate our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows: Corresponding Author: [[email protected]](mailto:[email protected])
**Author list and order will change!**
* `*` and `^` are equal contributions.
```
@article{damonlpsg2023seallm,
author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*,
Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang,
Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
Chaoqun Liu, Hang Zhang, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = 2023,
Eprint = {arXiv:2312.00738},
}
```
| [
"CHIA"
] | Non_BioNLP |
medicalai/ClinicalGPT-base-zh | medicalai | text-generation | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"medical",
"arxiv:2306.09968",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,687,350,710,000 | 2025-01-07T16:06:21 | 3,473 | 44 | ---
license: afl-3.0
tags:
- medical
---
# ClinicalGPT
This model card introduces the ClinicalGPT model, a large language model designed and optimized for clinical scenarios. ClinicalGPT is fine-tuned on extensive and diverse medical datasets, including medical records, domain-specific knowledge, and multi-round dialogue consultations. The model is continuously updated.
## Model Fine-tuning
We set the learning rate to 5e-5, with a batch size of 128 and a maximum length of 1,024, training across 3 epochs.
## How to use the model
Load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("medicalai/ClinicalGPT-base-zh")
model = AutoModelForCausalLM.from_pretrained("medicalai/ClinicalGPT-base-zh")
```
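A minimal generation sketch building on the snippet above (the clinical question is purely illustrative, and decoding parameters may need tuning):
```python
# Encode an illustrative clinical question (in Chinese, matching the model's training language) and generate
inputs = tokenizer("患者出现持续性头痛和发热,可能的诊断是什么?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```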
## Limitations
The project is intended for research purposes only and is restricted from commercial or clinical use. Content generated by the model is subject to factors such as model computations, randomness, misinterpretation, and biases, and this project cannot guarantee its accuracy. This project assumes no legal liability for any content produced by the model. Users are advised to exercise caution and independently verify the generated results.
## Citation
Please cite these articles:
1.Wang, G., Liu, X., Liu, H., Yang, G. et al. A Generalist Medical Language Model for Disease Diagnosis Assistance. Nat Med (2025). https://doi.org/10.1038/s41591-024-03416-6
2.Wang, G., Yang, G., Du, Z., Fan, L., & Li, X. (2023). ClinicalGPT: large language models finetuned with diverse medical data and comprehensive evaluation. arXiv preprint arXiv:2306.09968. | [
"MEDICAL DATA"
] | BioNLP |
pritamdeka/PubMedBERT-MNLI-MedNLI | pritamdeka | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,685,399,049,000 | 2024-03-01T02:58:46 | 270 | 3 | ---
base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: PubMedBERT-MNLI-MedNLI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedBERT-MNLI-MedNLI
This model is a fine-tuned version of [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [MNLI](https://huggingface.co/datasets/multi_nli) dataset first and then on the [MedNLI](https://physionet.org/content/mednli/1.0.0/) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9501
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
The model can be used for NLI tasks related to biomedical data and can even be adapted to fact-checking tasks. It can be used via the Hugging Face pipeline as follows:
```python
from transformers import TextClassificationPipeline, AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("pritamdeka/PubMedBERT-MNLI-MedNLI", num_labels=3, id2label={1: 'entailment', 0: 'contradiction', 2: 'neutral'})
tokenizer = AutoTokenizer.from_pretrained("pritamdeka/PubMedBERT-MNLI-MedNLI")
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True, device=0, batch_size=128)
pipe(['ALDH1 expression is associated with better breast cancer outcomes',
'In a series of 577 breast carcinomas, expression of ALDH1 detected by immunostaining correlated with poor prognosis.'])
```
The output for the above will be:
```python
[[{'label': 'contradiction', 'score': 0.10193759202957153},
{'label': 'entailment', 'score': 0.2933262586593628},
{'label': 'neutral', 'score': 0.6047361493110657}],
[{'label': 'contradiction', 'score': 0.21726925671100616},
{'label': 'entailment', 'score': 0.24485822021961212},
{'label': 'neutral', 'score': 0.5378724932670593}]]
```
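If you only need the single predicted label for each input, you can take the highest-scoring entry from each inner list. A small helper along these lines (an illustrative sketch, not part of the original card) would be:
```python
def top_label(scores):
    # `scores` is one inner list of {'label': ..., 'score': ...} dicts from the pipeline.
    best = max(scores, key=lambda s: s["score"])
    return best["label"], best["score"]

for result in pipe(['ALDH1 expression is associated with better breast cancer outcomes',
                    'In a series of 577 breast carcinomas, expression of ALDH1 detected by immunostaining correlated with poor prognosis.']):
    print(top_label(result))  # e.g. ('neutral', 0.60...)
```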
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for an approximate `TrainingArguments` mapping):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
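For reference, the values above map roughly onto the `transformers` `TrainingArguments` below. This is an illustrative sketch only (dataset loading and the `Trainer` call are omitted), not the exact script used to produce this card.
```python
from transformers import TrainingArguments

# Approximate mapping of the listed hyperparameters; the Adam betas/epsilon
# above are the transformers defaults and need not be set explicitly.
training_args = TrainingArguments(
    output_dir="PubMedBERT-MNLI-MedNLI",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
)
```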
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5673 | 1.42 | 500 | 0.4358 | 0.8437 |
| 0.2898 | 2.85 | 1000 | 0.4845 | 0.8523 |
| 0.1669 | 4.27 | 1500 | 0.6233 | 0.8573 |
| 0.1087 | 5.7 | 2000 | 0.7263 | 0.8573 |
| 0.0728 | 7.12 | 2500 | 0.8841 | 0.8638 |
| 0.0512 | 8.55 | 3000 | 0.9501 | 0.8667 |
| 0.0372 | 9.97 | 3500 | 1.0440 | 0.8566 |
| 0.0262 | 11.4 | 4000 | 1.0770 | 0.8609 |
| 0.0243 | 12.82 | 4500 | 1.0931 | 0.8616 |
| 0.023 | 14.25 | 5000 | 1.1088 | 0.8631 |
| 0.0163 | 15.67 | 5500 | 1.1264 | 0.8581 |
| 0.0111 | 17.09 | 6000 | 1.1541 | 0.8616 |
| 0.0098 | 18.52 | 6500 | 1.1542 | 0.8631 |
| 0.0074 | 19.94 | 7000 | 1.1653 | 0.8638 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model, kindly cite the following work:
```
@inproceedings{deka-etal-2023-multiple,
title = "Multiple Evidence Combination for Fact-Checking of Health-Related Information",
author = "Deka, Pritam and
Jurek-Loughrey, Anna and
P, Deepak",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.20",
pages = "237--247",
abstract = "Fact-checking of health-related claims has become necessary in this digital age, where any information posted online is easily available to everyone. The most effective way to verify such claims is by using evidences obtained from reliable sources of medical knowledge, such as PubMed. Recent advances in the field of NLP have helped automate such fact-checking tasks. In this work, we propose a domain-specific BERT-based model using a transfer learning approach for the task of predicting the veracity of claim-evidence pairs for the verification of health-related facts. We also improvise on a method to combine multiple evidences retrieved for a single claim, taking into consideration conflicting evidences as well. We also show how our model can be exploited when labelled data is available and how back-translation can be used to augment data when there is data scarcity.",
}
``` | [
"MEDNLI"
] | BioNLP |
Locutusque/TinyMistral-248M-Instruct | Locutusque | text-generation | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Locutusque/InstructMixCleaned",
"dataset:berkeley-nest/Nectar",
"base_model:Locutusque/TinyMistral-248M",
"base_model:finetune:Locutusque/TinyMistral-248M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,700,844,286,000 | 2023-12-17T21:02:42 | 156 | 11 | ---
base_model: Locutusque/TinyMistral-248M
datasets:
- Locutusque/InstructMixCleaned
- berkeley-nest/Nectar
language:
- en
license: apache-2.0
pipeline_tag: text-generation
widget:
- text: '<|USER|> Design a Neo4j database and Cypher function snippet to Display Extreme
Dental hygiene: Using Mouthwash for Analysis for Beginners. Implement if/else
or switch/case statements to handle different conditions related to the Consent.
Provide detailed comments explaining your control flow and the reasoning behind
each decision. <|ASSISTANT|> '
- text: '<|USER|> Write me a story about a magical place. <|ASSISTANT|> '
- text: '<|USER|> Write me an essay about the life of George Washington <|ASSISTANT|> '
- text: '<|USER|> Solve the following equation 2x + 10 = 20 <|ASSISTANT|> '
- text: '<|USER|> Craft me a list of some nice places to visit around the world. <|ASSISTANT|> '
- text: '<|USER|> How to manage a lazy employee: Address the employee verbally. Don''t
allow an employee''s laziness or lack of enthusiasm to become a recurring issue.
Tell the employee you''re hoping to speak with them about workplace expectations
and performance, and schedule a time to sit down together. Question: To manage
a lazy employee, it is suggested to talk to the employee. True, False, or Neither?
<|ASSISTANT|> '
inference:
parameters:
temperature: 0.5
do_sample: true
top_p: 0.5
top_k: 30
max_new_tokens: 250
repetition_penalty: 1.15
---
This is the base model Locutusque/TinyMistral-248M, fully fine-tuned on Locutusque/InstructMix. During validation, this model achieved an average perplexity of 3.23 on the Locutusque/InstructMix dataset.
It has so far been trained on approximately 608,000 examples. More epochs are planned for this model.
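The widget examples in the metadata use a `<|USER|> ... <|ASSISTANT|>` prompt format. A minimal sketch of how that might look with the `transformers` text-generation pipeline is given below; the prompt and sampling values mirror the widget settings, and everything else is an assumption rather than an official example.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Locutusque/TinyMistral-248M-Instruct")
prompt = "<|USER|> Write me a story about a magical place. <|ASSISTANT|> "
out = generator(
    prompt,
    max_new_tokens=250,     # values below mirror the widget inference settings
    do_sample=True,
    temperature=0.5,
    top_p=0.5,
    top_k=30,
    repetition_penalty=1.15,
)
print(out[0]["generated_text"])
``` | [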
"CRAFT"
] | Non_BioNLP |
IIC/xlm-roberta-large-cantemist | IIC | text-classification | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"biomedical",
"clinical",
"eHR",
"spanish",
"xlm-roberta-large",
"es",
"dataset:PlanTL-GOB-ES/cantemist-ner",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,687,188,421,000 | 2024-11-25T10:40:59 | 61 | 0 | ---
datasets:
- PlanTL-GOB-ES/cantemist-ner
language: es
license: mit
metrics:
- f1
tags:
- biomedical
- clinical
- eHR
- spanish
- xlm-roberta-large
widget:
- text: El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de
pulmón cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral)
PD-L1 90%, EGFR negativo, ALK negativo y ROS-1 negativo.
- text: Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el
LII y una masa renal derecha indeterminada. Se realiza punción biopsia del nódulo
pulmonar, con hallazgos altamente sospechosos de carcinoma.
- text: Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre
hígado cirrótico, en paciente con índice Child-Pugh B.
model-index:
- name: IIC/xlm-roberta-large-cantemist
results:
- task:
type: token-classification
dataset:
name: cantemist-ner
type: PlanTL-GOB-ES/cantemist-ner
metrics:
- type: f1
value: 0.904
name: f1
---
# xlm-roberta-large-cantemist
This model is a fine-tuned version of xlm-roberta-large for the cantemist dataset, used in a benchmark in the paper `A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks`. The model achieves an F1 of 0.904.
Please refer to the [original publication](https://doi.org/10.1093/jamia/ocae054) for more information.
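The card itself does not include a usage snippet, so the following is a minimal, assumed example of loading the model with the `transformers` token-classification pipeline (the task follows the model-index entry above; the aggregation strategy is an illustrative choice):
```python
from transformers import pipeline

# Assumed usage sketch: NER over Spanish clinical text, as in the widget examples.
ner = pipeline(
    "token-classification",
    model="IIC/xlm-roberta-large-cantemist",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
text = ("El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma "
        "de pulmón cT2a cN3 cM1a Estadio IV.")
print(ner(text))
```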
## Parameters used
| Parameter | Value |
|-------------------------|:-----:|
| batch size | 16 |
| learning rate | 2e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
## BibTeX entry and citation info
```bibtex
@article{10.1093/jamia/ocae054,
author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
journal = {Journal of the American Medical Informatics Association},
volume = {31},
number = {9},
pages = {2137-2146},
year = {2024},
month = {03},
issn = {1527-974X},
doi = {10.1093/jamia/ocae054},
url = {https://doi.org/10.1093/jamia/ocae054},
}
```
| [
"CANTEMIST"
] | BioNLP |
espnet/slurp_slu_2pass_gt | espnet | automatic-speech-recognition | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slurp",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | 1,663,098,551,000 | 2022-09-13T19:49:59 | 1 | 0 | ---
datasets:
- slurp
language: en
license: cc-by-4.0
tags:
- espnet
- audio
- automatic-speech-recognition
---
## ESPnet2 ASR model
### `espnet/slurp_slu_2pass_gt`
This model was trained by Siddhant using the slurp recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 3b54bfe52a294cdfce668c20d777bfa65f413745
pip install -e .
cd egs2/slurp/slu1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/slurp_slu_2pass_gt
```
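Besides the recipe-level run above, ESPnet2 models on the Hub can usually also be loaded directly from Python. The sketch below uses the generic `Speech2Text` ASR interface and is an assumption — the two-pass SLU decoding in this recipe may require the SLU-specific inference entry points from `egs2/slurp/slu1` instead.
```python
# Illustrative sketch using the generic ESPnet2 ASR interface (an assumption;
# the 2-pass SLU decoding may need the recipe's own inference scripts).
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained("espnet/slurp_slu_2pass_gt")
speech, rate = soundfile.read("utterance.wav")  # 16 kHz mono recording (hypothetical file)
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)  # SLURP-style target: intent label followed by the transcript
```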
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Aug 20 15:34:30 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 202207`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `45e2b13071f3cc4abbc3a7b2484bd6cffedd4d1c`
- Commit date: `Mon Aug 15 09:13:31 2022 -0400`
## slu_train_asr_bert_conformer_deliberation_raw_en_word
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_slu_model_valid.acc.ave_10best/devel|8690|108484|90.9|6.2|2.9|2.7|11.8|39.9|
|inference_slu_model_valid.acc.ave_10best/test|13078|159666|90.7|6.2|3.1|2.6|11.9|38.7|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_slu_model_valid.acc.ave_10best/devel|8690|512732|95.5|2.3|2.2|2.5|7.0|39.9|
|inference_slu_model_valid.acc.ave_10best/test|13078|757056|95.3|2.3|2.3|2.5|7.1|38.7|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_bert_conformer_deliberation.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/slu_train_asr_bert_conformer_deliberation_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- encoder
- postdecoder.model
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/slu_stats_raw_en_word/train/speech_shape
- exp/slu_stats_raw_en_word/train/text_shape.word
- exp/slu_stats_raw_en_word/train/transcript_shape.word
valid_shape_file:
- exp/slu_stats_raw_en_word/valid/speech_shape
- exp/slu_stats_raw_en_word/valid/text_shape.word
- exp/slu_stats_raw_en_word/valid/transcript_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
- - dump/raw/train/transcript
- transcript
- text
valid_data_path_and_name_and_type:
- - dump/raw/devel/wav.scp
- speech
- sound
- - dump/raw/devel/text
- text
- text
- - dump/raw/devel/transcript
- transcript
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁the
- s
- ▁to
- ▁i
- ▁me
- ▁you
- ▁what
- ▁a
- ▁is
- ▁my
- ▁please
- a
- ''''
- y
- ▁in
- ing
- ▁s
- e
- ▁for
- i
- ▁on
- d
- t
- o
- u
- er
- p
- ▁of
- es
- re
- l
- ▁it
- ▁p
- le
- ▁f
- ▁m
- ▁email
- ▁d
- m
- ▁c
- st
- r
- n
- ar
- ▁h
- b
- ▁that
- c
- ▁this
- h
- an
- email_query
- ▁play
- ▁re
- ▁b
- ▁do
- ▁can
- at
- ▁have
- g
- ▁from
- ▁and
- en
- email_sendemail
- ▁olly
- 'on'
- ▁new
- it
- qa_factoid
- calendar_set
- ▁any
- or
- ▁g
- ▁how
- ▁t
- ▁tell
- ch
- ▁not
- ▁about
- ▁at
- ate
- general_negate
- f
- ▁today
- ▁e
- ed
- ▁list
- ▁r
- in
- k
- ic
- social_post
- ▁are
- play_music
- general_quirky
- ▁l
- al
- v
- ent
- ▁n
- ▁be
- ▁an
- ▁st
- et
- ▁am
- general_praise
- ▁time
- weather_query
- ▁up
- ▁check
- calendar_query
- ▁w
- om
- ur
- ▁send
- ▁with
- ly
- w
- general_explain
- ad
- ▁th
- news_query
- ▁one
- ▁emails
- day
- ▁sh
- ce
- ▁last
- ve
- ▁he
- z
- ▁ch
- ▁will
- ▁set
- ▁would
- ▁was
- x
- general_repeat
- ▁add
- ou
- ▁again
- ▁ex
- is
- ct
- general_affirm
- general_confirm
- ▁song
- ▁next
- ▁j
- ▁meeting
- um
- ation
- ▁turn
- ▁did
- if
- ▁alarm
- am
- ▁like
- datetime_query
- ter
- ▁remind
- ▁o
- qa_definition
- ▁said
- ▁calendar
- ll
- se
- ers
- th
- ▁get
- our
- ▁need
- ▁all
- ot
- ▁want
- ▁off
- and
- ▁right
- ▁de
- ▁tr
- ut
- general_dontcare
- ▁
- ▁week
- as
- ▁tweet
- ight
- ir
- ▁your
- ▁event
- ▁news
- ▁se
- ay
- ion
- ▁com
- ▁there
- ▁ye
- ▁weather
- un
- ▁confirm
- ld
- calendar_remove
- ▁y
- ▁lights
- ▁more
- ▁v
- play_radio
- ▁does
- ▁po
- ▁now
- id
- email_querycontact
- ▁show
- ▁could
- ery
- op
- ▁day
- ▁pm
- ▁music
- ▁tomorrow
- ▁train
- ▁u
- ine
- ▁or
- ange
- qa_currency
- ice
- ▁contact
- ▁just
- ▁jo
- ▁think
- qa_stock
- end
- ss
- ber
- ▁tw
- ▁command
- ▁make
- ▁no
- ▁mo
- pe
- ▁find
- general_commandstop
- ▁when
- social_query
- ▁so
- ong
- ▁co
- ant
- ow
- ▁much
- ▁where
- ul
- ue
- ri
- ap
- ▁start
- ▁mar
- ▁by
- one
- ▁know
- ▁wor
- oo
- ▁give
- ▁let
- ▁events
- der
- ▁ro
- ▁pr
- ▁pl
- play_podcasts
- art
- us
- ▁work
- ▁current
- ol
- cooking_recipe
- nt
- ▁correct
- transport_query
- ia
- ▁stock
- ▁br
- ive
- ▁app
- ▁two
- ▁latest
- lists_query
- ▁some
- recommendation_events
- ab
- ▁go
- ▁but
- ook
- ke
- alarm_set
- play_audiobook
- ▁k
- ▁response
- ▁wr
- cast
- ▁open
- ▁cle
- ▁done
- ▁got
- ▁ca
- ite
- ase
- ▁thank
- iv
- ah
- ag
- ▁answer
- ie
- ▁five
- ▁book
- ist
- ▁rec
- ore
- ▁john
- ment
- ▁appreci
- ▁fri
- ack
- ▁remove
- ated
- ock
- ree
- j
- ▁good
- ▁many
- orn
- fe
- ▁radio
- ▁we
- int
- ▁facebook
- ▁cl
- ▁sev
- ▁schedule
- ard
- ▁per
- ▁li
- ▁going
- nd
- ain
- recommendation_locations
- ▁post
- lists_createoradd
- ff
- ▁su
- red
- iot_hue_lightoff
- lists_remove
- ▁ar
- een
- ▁say
- ro
- ▁volume
- ▁le
- ▁reply
- ▁complaint
- ▁out
- ▁delete
- ▁ne
- ame
- ▁detail
- ▁if
- im
- ▁happ
- orr
- ich
- em
- ▁ev
- ction
- ▁dollar
- ▁as
- alarm_query
- audio_volume_mute
- ac
- music_query
- ▁mon
- ther
- ▁thanks
- cel
- ▁who
- ave
- ▁service
- ▁mail
- ty
- ▁hear
- de
- ▁si
- ▁wh
- ood
- ell
- ▁con
- ▁once
- ound
- ▁don
- ▁loc
- ▁light
- ▁birthday
- ▁inf
- ort
- ffe
- ▁playlist
- el
- ening
- ▁us
- ▁un
- ▁has
- own
- ▁inc
- ai
- ▁speak
- age
- ▁mess
- ast
- ci
- ver
- ▁ten
- ▁underst
- ▁pro
- ▁q
- enty
- ▁ticket
- gh
- audio_volume_up
- ▁take
- ▁bo
- ally
- ome
- transport_ticket
- ind
- iot_hue_lightchange
- pp
- iot_coffee
- ▁res
- plain
- io
- lar
- takeaway_query
- ge
- takeaway_order
- email_addcontact
- play_game
- ak
- ▁fa
- transport_traffic
- music_likeness
- ▁rep
- act
- ust
- transport_taxi
- iot_hue_lightdim
- ▁mu
- ▁ti
- ick
- ▁ha
- ould
- general_joke
- '1'
- qa_maths
- ▁lo
- iot_cleaning
- q
- ake
- ill
- her
- iot_hue_lightup
- pl
- '2'
- alarm_remove
- orrect
- ▁cont
- mail
- out
- audio_volume_down
- book
- ail
- recommendation_movies
- ck
- ▁man
- ▁mus
- ▁che
- me
- ume
- ▁answ
- datetime_convert
- ▁late
- iot_wemo_on
- ▁twe
- music_settings
- iot_wemo_off
- orre
- ith
- ▁tom
- ▁fr
- ere
- ▁ad
- xt
- ▁ab
- ank
- general_greet
- now
- ▁meet
- ▁curre
- ▁respon
- ▁ag
- ght
- audio_volume_other
- ink
- ▁spe
- iot_hue_lighton
- ▁rem
- lly
- '?'
- urn
- ▁op
- ▁complain
- ▁comm
- let
- music_dislikeness
- ove
- ▁sch
- ather
- ▁rad
- edule
- ▁under
- icket
- lease
- ▁bir
- erv
- ▁birth
- ▁face
- ▁cur
- sw
- ▁serv
- ek
- aid
- '9'
- ▁vol
- edu
- '5'
- cooking_query
- lete
- ▁joh
- ▁det
- firm
- nder
- '0'
- irm
- '8'
- '&'
- _
- list
- pon
- qa_query
- '7'
- '3'
- '-'
- reci
- ▁doll
- <sos/eos>
transcript_token_list:
- <blank>
- <unk>
- the
- to
- i
- me
- you
- is
- what
- please
- my
- a
- for
- 'on'
- in
- of
- email
- this
- it
- have
- from
- and
- play
- olly
- that
- new
- can
- do
- how
- tell
- about
- at
- any
- today
- not
- time
- are
- check
- list
- send
- with
- an
- one
- emails
- last
- will
- am
- again
- set
- next
- would
- was
- up
- like
- turn
- said
- calendar
- meeting
- get
- what's
- right
- all
- did
- be
- need
- want
- song
- tweet
- add
- event
- your
- news
- 'off'
- weather
- there
- lights
- more
- now
- alarm
- pm
- music
- show
- confirm
- train
- could
- think
- does
- make
- command
- just
- find
- when
- tomorrow
- much
- where
- week
- by
- give
- events
- know
- day
- start
- two
- latest
- response
- that's
- remind
- done
- but
- thank
- stock
- some
- you've
- answer
- five
- open
- current
- many
- remove
- radio
- good
- book
- 'no'
- facebook
- going
- it's
- volume
- reply
- work
- delete
- go
- complaint
- contact
- if
- service
- let
- thanks
- so
- hear
- once
- correct
- john
- playlist
- birthday
- got
- post
- ten
- order
- sorry
- has
- date
- hey
- coffee
- who
- rate
- three
- exchange
- further
- light
- twenty
- price
- mail
- reminder
- explain
- podcast
- ticket
- down
- really
- clear
- seven
- schedule
- alarms
- say
- morning
- change
- twitter
- cancel
- number
- dollar
- stop
- out
- appreciated
- hundred
- wrong
- don't
- information
- address
- contacts
- read
- york
- us
- which
- should
- 'yes'
- details
- songs
- between
- nine
- anything
- s1
- received
- playing
- shut
- dot
- mind
- com
- google
- most
- put
- job
- traffic
- four
- best
- six
- create
- recent
- yeah
- happening
- friday
- name
- very
- area
- mom
- or
- take
- appointment
- yeap
- room
- world
- home
- hour
- message
- eight
- clarify
- s2
- party
- episode
- here
- elaborate
- alexa
- appreciate
- customer
- i'd
- sent
- thing
- march
- look
- tonight
- place
- try
- after
- definition
- call
- well
- times
- rock
- phone
- speak
- today's
- whats
- food
- thirty
- see
- joke
- every
- pizza
- write
- lists
- game
- shopping
- weekend
- rephrase
- month
- matter
- s
- update
- station
- vacuum
- great
- detail
- long
- gmail
- old
- repeat
- city
- audiobook
- perfectly
- status
- inbox
- mute
- local
- near
- restaurant
- thousand
- tuesday
- year
- we
- media
- before
- around
- resume
- musch
- her
- house
- taxi
- hours
- didn't
- describe
- answers
- understand
- incorrect
- word
- listen
- first
- item
- d
- trump
- save
- days
- socket
- recipe
- nice
- u
- reminders
- social
- search
- as
- monday
- subject
- location
- movie
- saturday
- euro
- dinner
- them
- ask
- let's
- scheduled
- plug
- i'm
- gotten
- question
- minutes
- friend
- favorite
- meetings
- define
- instructions
- exactly
- cook
- understood
- sentence
- thursday
- grocery
- correcly
- their
- words
- temperature
- person
- amazon
- catch
- company
- mean
- something
- correctly
- living
- fantastic
- help
- following
- dollars
- rain
- speakers
- instruction
- helpful
- increase
- consumer
- evening
- family
- upcoming
- jazz
- saying
- way
- switch
- forecast
- task
- cleaner
- love
- late
- boss
- wednesday
- yesterday
- updates
- lower
- people
- cool
- wonderful
- twelve
- afternoon
- color
- wake
- oh
- lunch
- perfect
- back
- understanding
- useful
- amazing
- his
- dim
- movies
- chicago
- things
- takeaway
- fifty
- unread
- happy
- available
- noon
- wouldn't
- night
- had
- appointments
- idea
- michael
- doing
- over
- doesn't
- select
- hi
- shit
- may
- they
- delivery
- nearest
- buy
- apple
- car
- left
- confirmed
- report
- worth
- robot
- uber
- wemo
- sunday
- excellent
- outside
- blue
- looking
- messages
- top
- wear
- point
- too
- i've
- country
- prices
- bring
- store
- awesome
- unclear
- ok
- mark
- speaker
- app
- sound
- hot
- live
- jackson
- bad
- recently
- currently
- smith
- pull
- whatever
- india
- messed
- kitchen
- ninety
- percent
- him
- use
- office
- brightness
- care
- gave
- description
- tom
- regarding
- meaning
- meet
- siri
- bob
- joe
- hmm
- leave
- sarah
- smart
- come
- chicken
- seventeen
- walmart
- bill
- enough
- choose
- louder
- our
- trending
- born
- london
- zone
- account
- cnn
- audio
- president
- isn't
- compose
- coming
- second
- manner
- pick
- album
- uhh
- plus
- provide
- erase
- notification
- played
- channel
- donald
- pound
- instagram
- made
- bbc
- recommend
- happened
- united
- replay
- shop
- free
- dammit
- nope
- b
- nearby
- pop
- shops
- california
- highest
- notifications
- shuffle
- fm
- chinese
- currency
- uh
- restaurants
- jack
- april
- robert
- only
- been
- why
- states
- friends
- skip
- important
- he
- samsung
- later
- notify
- bedroom
- john's
- mails
- eleven
- red
- exact
- cold
- cup
- rates
- incorrectly
- fifth
- money
- boston
- spoke
- tomorrow's
- forward
- respond
- funny
- wait
- business
- market
- star
- headlines
- third
- favorites
- bother
- retry
- stocks
- high
- g
- favourite
- george
- umbrella
- directions
- wedding
- content
- m
- close
- spoken
- concert
- run
- alert
- searching
- mary
- into
- artist
- located
- mike
- anyone
- snow
- tickets
- then
- reset
- garden
- route
- hello
- tall
- likes
- talk
- forty
- share
- feed
- were
- indian
- washington
- difference
- remember
- convert
- receive
- tune
- level
- asking
- capital
- life
- dad
- yen
- street
- raining
- mistake
- correctly?
- quite
- pandora
- jane
- town
- yet
- player
- park
- san
- american
- far
- sports
- raise
- popular
- display
- these
- couldn't
- mountain
- dentist
- importance
- unimportant
- complain
- clean
- continue
- euros
- los
- ready
- yahoo
- can't
- classical
- politics
- newest
- lighting
- miami
- trip
- horrible
- info
- added
- prepare
- iphone
- machine
- mother
- miles
- via
- chris
- tv
- since
- bathroom
- state
- cheese
- request
- items
- oops
- ah
- closest
- warm
- microsoft
- settings
- value
- keep
- brighter
- note
- everything
- wife
- decrease
- okay
- using
- rap
- election
- sunny
- eat
- usa
- eighty
- fifteen
- until
- wanted
- wrongly
- dog
- obama
- years
- coat
- week's
- japan
- quiet
- paris
- angeles
- comcast
- target
- emailed
- airport
- interesting
- mcdonalds
- mr
- married
- green
- product
- past
- little
- other
- t
- listening
- cooking
- activate
- earth
- dance
- title
- florida
- rupee
- travel
- kids
- takeout
- pending
- america
- making
- its
- than
- doctor
- population
- bar
- plans
- power
- fourth
- silent
- ride
- milk
- how's
- seventy
- sure
- fine
- jennifer
- july
- sister
- brighten
- picture
- deliver
- singer
- clock
- inform
- brad
- burger
- never
- pesos
- object
- hero
- arrive
- classic
- olive
- games
- group
- watch
- line
- justin
- cost
- project
- called
- lets
- track
- still
- starbucks
- form
- repeating
- christmas
- breaking
- due
- cheapest
- forget
- posted
- james
- posts
- central
- lot
- stories
- whole
- small
- ever
- steak
- review
- requested
- wish
- david
- workout
- alex
- seems
- given
- gym
- largest
- la
- average
- compare
- china
- fifteenth
- having
- rupees
- band
- background
- meal
- online
- reserve
- file
- lamp
- laugh
- sun
- anniversary
- eastern
- busy
- mobile
- bit
- jokes
- places
- geographic
- else
- chess
- meant
- working
- p
- planned
- program
- seconds
- rated
- large
- issues
- road
- pay
- big
- holiday
- daily
- 'true'
- celebrity
- better
- hut
- being
- sixty
- away
- helped
- peter
- god
- cab
- someone
- internet
- page
- anna
- feel
- video
- steve
- opening
- lately
- sandy
- bank
- weeks
- id
- sam
- pitt
- river
- february
- i'll
- saved
- soup
- phrase
- distance
- economy
- hits
- sony
- eggs
- low
- water
- text
- topic
- co
- begin
- attend
- groceries
- adele
- reach
- within
- pause
- half
- yourself
- kind
- dark
- replied
- enter
- must
- asked
- beatles
- fun
- ingredients
- against
- invite
- soon
- colour
- different
- jacket
- updated
- seattle
- denver
- canada
- vegas
- mode
- pasta
- january
- doe
- listed
- refresh
- listened
- team
- longest
- spotify
- remainder
- telling
- mumbai
- you're
- orlando
- card
- rice
- during
- reduce
- locate
- future
- starting
- boil
- genre
- class
- slow
- famous
- named
- allen
- youtube
- works
- olly's
- dc
- brew
- through
- pounds
- football
- pacific
- white
- sings
- egg
- oil
- festival
- clothes
- moment
- die
- orange
- school
- kim
- las
- divided
- whether
- photo
- everyday
- ryan
- bills
- headline
- fix
- square
- npr
- jake
- brother
- todays
- terrible
- weekly
- type
- topics
- months
- chat
- yoga
- reading
- products
- extra
- cut
- adjust
- king
- personal
- client
- jan
- data
- doctor's
- computer
- rohit
- johns
- o'clock
- canadian
- mistakes
- rid
- names
- control
- sunscreen
- per
- lady
- head
- taylor
- always
- budget
- pink
- bought
- x
- side
- ahead
- articles
- english
- ny
- able
- reschedule
- fast
- hashtag
- tweets
- countries
- numbers
- running
- alabama
- blank
- madonna
- bright
- yellow
- west
- went
- options
- story
- october
- russia
- together
- n
- basketball
- joe's
- dominos
- tomorrows
- less
- situation
- colors
- mom's
- end
- payment
- drop
- downtown
- provider
- joes
- means
- helping
- mexican
- friday's
- cricket
- return
- needed
- death
- tech
- charlotte
- heavy
- draft
- sea
- paul
- r
- condition
- seventh
- dallas
- hip
- related
- article
- heard
- war
- elvis
- everest
- problem
- stating
- bieber
- system
- sales
- shoes
- hard
- become
- based
- kevin
- age
- she
- quality
- mile
- hair
- gas
- biggest
- inr
- climate
- hate
- twentieth
- sucks
- dean
- angelina
- turkey
- harry
- cake
- national
- record
- longer
- dave
- subjects
- brown
- supposed
- ocean
- church
- drive
- gandhi
- needs
- above
- theatre
- cookies
- abraham
- gone
- map
- television
- such
- face
- sale
- jim
- francisco
- sean
- june
- romantic
- compared
- curry
- ball
- jeff
- subway
- lincoln
- bed
- lagos
- turned
- south
- won
- trains
- girlfriend
- mahatma
- nsa
- hop
- amy
- commute
- solve
- came
- created
- dont
- history
- math
- telephone
- says
- laptop
- pawel
- offer
- fox
- single
- sixth
- midnight
- missed
- potter
- loud
- richard
- chuck
- looks
- practice
- body
- dan
- husband
- waiting
- birth
- stuff
- adam
- sender
- gaga
- truck
- france
- texas
- restart
- intel
- colours
- statue
- liberty
- intensity
- previous
- problems
- outlook
- visit
- wine
- peso
- continent
- utterance
- helps
- asssistance
- each
- north
- grand
- patrick
- match
- opinion
- plan
- trump's
- papa
- instead
- martin
- root
- purchase
- perry
- richards
- closing
- cloudy
- eddie
- senders
- move
- susan
- tesco
- size
- shows
- folder
- spaghetti
- doctors
- stores
- presidential
- dates
- theater
- menu
- agenda
- ann
- code
- animal
- frequency
- kansas
- roomba
- technology
- tasks
- without
- flight
- who's
- beach
- empty
- tired
- driving
- entire
- carry
- british
- dr
- asia
- rccg
- uncle
- vacation
- pepperoni
- programme
- standard
- reminding
- maximum
- starts
- tallest
- gonna
- fourteenth
- playback
- medium
- nike
- cruise
- changed
- diego
- arrange
- bowie
- learn
- mount
- particular
- costumer
- sundays
- fire
- calls
- silence
- podcasts
- spain
- dominoes
- website
- italy
- strongly
- agree
- agreed
- suggest
- mood
- fourteen
- result
- metallica
- thinking
- session
- profile
- england
- active
- ohio
- grid
- fall
- pot
- marriage
- queue
- told
- narendra
- jerry
- mt
- frank
- tenth
- wishes
- recording
- finished
- international
- calculate
- hit
- towers
- ninth
- site
- feeling
- macy's
- tag
- actually
- black
- birthdays
- hottest
- mary's
- expect
- snapchat
- jay
- smith's
- mountains
- building
- setting
- cleaning
- height
- initiate
- hall
- breakfast
- martha
- conference
- aol
- win
- steps
- fancy
- smartphone
- led
- zeppelin
- houses
- holy
- currencies
- club
- children
- atlanta
- einstein
- happen
- cell
- landline
- coworker
- objects
- negative
- modi
- soft
- haven't
- mention
- radius
- books
- daughter
- results
- earlier
- bruce
- butter
- stars
- remaining
- delivers
- device
- domino's
- unmute
- joy
- twelfth
- voice
- taking
- snowing
- sick
- boots
- cleveland
- journey
- destination
- worker
- poker
- lee
- katy
- australia
- incoming
- least
- lisa
- experience
- million
- recurring
- scenario
- sacramento
- geography
- library
- brief
- jolie
- monthly
- elton
- sirius
- alaska
- lyrics
- oven
- log
- random
- moscow
- barack
- disney
- alive
- measurements
- maker
- poor
- error
- stone
- versus
- hotmail
- interpret
- sarah's
- memorial
- goes
- stay
- delhi
- health
- special
- speed
- thirteen
- test
- edinburgh
- credit
- facts
- cat
- neighborhood
- sometime
- empire
- entry
- financial
- comment
- link
- hockey
- circuit
- holidays
- singh
- jodhpur
- rockville
- ones
- features
- bread
- eye
- mall
- directv
- contain
- seacrest
- chance
- under
- table
- few
- hotel
- rude
- services
- yesterday's
- certain
- fb
- abc
- netflix
- linda
- notes
- length
- reminded
- shoe
- wild
- employees
- beef
- sushi
- fastest
- thirteenth
- recommendations
- fish
- tennis
- main
- jersey
- jones
- break
- concerts
- gomez
- angry
- uk
- replies
- emily
- kickball
- released
- upload
- effects
- quickest
- italian
- caroline
- emma
- real
- human
- minute
- took
- activity
- jeff's
- staff
- handler
- touch
- hold
- joanne
- range
- moon
- submit
- ends
- tomato
- lost
- prime
- twelveth
- phones
- amd
- hectic
- bobburgers
- screwed
- porch
- reviews
- vegan
- rihanna
- houston
- ham
- mondays
- general
- engaged
- walk
- melody
- electronic
- held
- selected
- equal
- getting
- tata
- wall
- clothing
- round
- leaving
- nasdaq
- total
- pressure
- expensive
- border
- exhibition
- trash
- november
- handle
- halloween
- attachment
- kardashian
- shoot
- rewind
- rating
- toronto
- department
- procedure
- member
- ray
- chelsea
- rohan
- arrow
- checked
- modify
- wasn't
- chances
- protest
- lottery
- prince
- include
- jo
- net
- pie
- sleep
- enjoy
- nineties
- taco
- banana
- source
- quieter
- bored
- desert
- guys
- gary
- activities
- already
- contract
- st
- minister
- disable
- woman
- europe
- arijit
- audible
- presentation
- cad
- records
- trips
- booking
- tacos
- sally
- non
- centre
- direct
- advance
- selena
- policy
- orders
- stefan
- arrival
- divide
- chocolate
- dish
- teeth
- hdfc
- silvia
- stove
- coast
- defined
- digest
- snafu
- manager
- pinterest
- tim
- conversation
- bulldog
- titanic
- brunch
- heat
- canyon
- dial
- earliest
- region
- stopped
- foreign
- folk
- watching
- brexit
- albert
- joejoe
- early
- cities
- manchester
- december
- biloxi
- often
- questions
- garage
- tunes
- possible
- ms
- ar
- kiss
- shares
- bangalore
- heading
- derek's
- desk
- cheers
- tomasz
- terms
- companyname
- sara
- asap
- super
- meryl
- streep
- rent
- dress
- cinema
- usually
- trend
- conversion
- friendly
- ties
- ordered
- electricity
- marked
- migration
- choice
- journal
- norris
- aniston
- mailbox
- minus
- fried
- miley
- cyrus
- newly
- theory
- rest
- swift
- windy
- dan's
- mass
- comes
- selfie
- wings
- julie
- masti
- celine
- plays
- pack
- including
- responded
- jason's
- ale
- apples
- dolly
- oranges
- lg
- washer
- substitute
- global
- feedback
- grandma
- ben
- drainage
- invoice
- sunset
- takeaways
- man
- art
- universe
- suitable
- antonio
- full
- delivered
- laundry
- wrote
- min
- register
- snap
- nixon
- bird
- spend
- rome
- jesse
- calories
- cappuccino
- quickly
- buying
- britney
- spears
- spacey
- jobs
- arriving
- jean
- potholes
- janet
- pictures
- ashwin
- morgan
- freeman
- baby
- microwave
- yellowstone
- francis
- dubai
- invitation
- hope
- melbourne
- rocky
- kroger
- rivers
- charles
- jim's
- rectify
- statement
- carpet
- baked
- jessica
- meatballs
- mushrooms
- amount
- switzerland
- relating
- zero
- front
- phonebook
- hows
- cheesecake
- carryout
- magic
- ola
- replace
- recorded
- access
- land
- where's
- elephant
- removed
- liz
- load
- metal
- package
- diner
- goog
- bob's
- k
- year's
- mars
- guy
- assistant
- rahman
- eagle
- part
- burn
- aran
- stevens
- daughter's
- eighteen
- chemistry
- action
- selling
- thats
- koc
- lines
- sugar
- major
- chair
- easter
- departing
- africa
- nigeria
- requests
- conditions
- you'll
- manhattan
- roll
- cracow
- candy
- crush
- bell
- massive
- gold
- happens
- usual
- andrew
- equals
- dead
- plane
- graduation
- warned
- shaun
- triangle
- wyatt's
- pass
- function
- max
- space
- programmes
- awful
- parton
- exciting
- battery
- hwu
- recipes
- dirham
- rushmore
- johndoe
- button
- express
- pontificate
- easiest
- magda
- selection
- reservations
- guess
- copy
- classes
- supplies
- schedules
- winning
- berkeley
- notice
- headed
- outgoing
- mi
- rainy
- wikipedia
- entertainment
- dow
- everyone
- aunt
- furniture
- oceans
- softer
- heart
- newmail
- while
- baseball
- easy
- stations
- philadelphia
- alice
- swat
- yearly
- poem
- soccer
- president's
- milan
- paper
- kardashian's
- loop
- shown
- sandals
- yo
- scan
- nevada
- apahelp
- coldplay
- french
- bay
- higher
- rumplestiltskin
- airlines
- fresh
- standing
- cream
- hamburger
- broadway
- oscars
- tokyo
- cable
- shipment
- formula
- teacher
- sweet
- golden
- newsfeed
- confirmation
- shirt
- austin
- own
- canon
- wanna
- gods
- spanish
- count
- seat
- ideas
- study
- tara
- mutual
- jennifer's
- because
- edit
- denmark
- direction
- timer
- growth
- luther
- marketing
- cd
- mine
- public
- peter's
- bolshoi
- flat
- crazy
- others
- dry
- pub
- theatres
- bro
- fashion
- teams
- cycle
- pickup
- dion
- teach
- series
- checkout
- male
- noise
- solitaire
- pf
- cassie
- travelling
- davis
- naty
- income
- disco
- dropping
- donna
- follow
- shelly
- accidents
- plot
- irene
- download
- circle
- law
- tea
- organize
- principal
- weekends
- camera
- solution
- bombay
- wuthering
- heights
- charged
- colorado
- kong
- keys
- race
- mona
- entries
- j
- nyc
- potatoes
- gospel
- raju
- trivia
- bike
- dating
- oregon
- event's
- prefers
- rush
- percentages
- peking
- cooker
- husbands
- won't
- tower
- heaven
- hugh
- june's
- fake
- figure
- purple
- takes
- l
- howard
- stern
- nineteen
- percentage
- motorola
- doe's
- outstanding
- tesla
- laura
- dale
- warning
- eighteenth
- golf
- island
- career
- bieber's
- vacuuming
- pizzas
- refund
- weekday
- s's
- derek
- thanksgiving
- delayed
- query
- buffet
- rachel
- pants
- wash
- survey
- photos
- except
- topography
- door
- jen
- queen
- depart
- cheap
- theaters
- web
- jesse's
- multiply
- workhouse
- press
- click
- loss
- recipient
- verizon
- volcano
- rolls
- royce
- pixel
- affirmative
- completing
- thai
- walking
- bananas
- hollywood
- equation
- dirty
- scores
- katrina
- exam
- creating
- letter
- sing
- construction
- broadcast
- tom's
- rupies
- management
- permanently
- converting
- ist
- iron
- religion
- kings
- tucson
- standup
- tic
- tac
- toe
- headset
- sex
- diapers
- purpose
- seventeenth
- eighth
- dylan
- temple
- refer
- gift
- fact
- drink
- inches
- air
- carpets
- newcastle
- clients
- private
- tasting
- sams
- nj
- chili
- cultural
- swimming
- they're
- iowa
- jordan
- period
- accept
- cincinnati
- college
- rainbow
- myself
- deep
- deepest
- warming
- sky
- vp
- seeing
- indianapolis
- kmart
- nikesupport
- image
- suck
- broiler
- timeline
- dell
- parisa
- brandon
- example
- y
- filter
- sad
- shine
- sixteen
- christian
- pic
- pdr
- fry
- another
- network
- omelette
- kilometers
- municipality
- giving
- leo
- cups
- earthquake
- susan's
- application
- cross
- across
- carl
- pawel's
- sauce
- relativity
- rail
- sisters
- letting
- shorts
- vs
- rajesh
- swift's
- starving
- discussing
- block
- written
- n9ne
- women
- celebrities
- bake
- cookie
- continents
- workers
- leonardo
- mel
- gibson
- shall
- beauty
- sum
- fair
- deli
- middle
- same
- nile
- sell
- role
- boat
- sandwich
- parts
- hearing
- knows
- sand
- manoj
- delivering
- rahul
- neil
- australian
- kindly
- properly
- assist
- esurance
- emilia
- breach
- loudly
- harvard
- marc
- nintendo
- scrabble
- farm
- lie
- patio
- greg
- screen
- degrees
- yesterdays
- carrots
- receipt
- lasagna
- clooney
- there's
- degree
- preferences
- hallway
- latin
- nicest
- lauren
- worst
- also
- checkers
- input
- boyfriend
- masala
- tournament
- monet's
- burmuda
- section
- eric
- japanese
- supervisor
- junk
- performance
- effective
- urgent
- oldest
- tone
- sweater
- goa
- bag
- lowest
- aus
- peace
- julia
- summer
- fan
- hurricane
- colder
- steven
- sachin
- tendulkar
- watson
- exorbitant
- bags
- macs
- yulia
- matthew
- pole
- toby
- pennsylvania
- carmen
- tiffany
- complete
- electric
- wallet
- albums
- maths
- distribution
- eminem
- familiar
- regard
- upwards
- ron
- couple
- acme
- angel
- zoo
- nineteenth
- shazam
- inflation
- offers
- devotional
- jackie
- tony
- artificial
- intelligence
- grill
- father
- predictions
- repeats
- manila
- cooked
- reason
- learning
- nowadays
- cheer
- jingle
- bells
- anxiety
- hoizer
- girl
- pondichery
- position
- teachers
- dictionary
- nap
- cafe
- m's
- meting
- crime
- eve
- horn
- bristol
- pubs
- companies
- johnson
- resolve
- waterfall
- female
- biriyani
- drama
- nothappy
- haircut
- remote
- colleagues
- bones
- saturdays
- cambridge
- jam
- maine
- category
- invented
- chang's
- boy
- planning
- chen
- assignment
- publish
- hunt
- alerts
- dad's
- deal
- leading
- trail
- follows
- young
- jay's
- summary
- ko
- beyonce
- vergara
- mexico
- whishes
- arrived
- placid
- specific
- depot
- tikka
- expire
- markets
- problematic
- highly
- blues
- thirtieth
- brooklyn
- tatum
- argentinian
- redso
- des
- moines
- women's
- richard's
- cellphone
- division
- hong
- political
- charley's
- steakhouse
- accident
- normal
- wakeup
- satellite
- freezing
- forex
- jimmy
- chores
- snooze
- design
- museum
- guide
- speech
- ran
- shift
- inferior
- mashed
- jcpenney
- environment
- raw
- disturbed
- sia
- chips
- anybody
- present
- reynolds
- limbaugh
- weekdays
- islands
- viral
- asian
- streets
- inception
- meatloaf
- alternative
- compliant
- sensex
- phil
- est
- hand
- switched
- recap
- ferrari
- nandy
- promotion
- kate
- brothers
- ma
- followers
- closer
- deleted
- gloves
- bands
- platter
- boland
- corner
- strong
- chipotle
- eu
- amtrak
- son
- charges
- version
- rajdhani
- chart
- manage
- musical
- hat
- den
- tonight's
- syria
- stronger
- homelessness
- nails
- support
- ally
- sentences
- penn
- ago
- turning
- center
- hungry
- actress
- keywords
- usain
- bolt
- ongoing
- cancelled
- idol
- julia's
- wells
- fargo
- ri
- sarahs
- computers
- devices
- toms
- regards
- quote
- production
- brother's
- inch
- shell
- marathon
- directory
- dictate
- huey
- lewis
- elections
- alone
- marry
- apart
- danielle
- jane's
- mankind
- singularity
- nye
- feynman
- whom
- inventory
- makes
- dept
- apple's
- education
- bugs
- settle
- when's
- geographical
- jason
- exchanges
- mcdonald's
- tgi
- ship
- hershey
- facing
- faulty
- zita
- jeremy
- irons
- wallmart
- sphere
- hp
- gottten
- pardon
- engagement
- showing
- format
- absolute
- interest
- messenger
- gate
- enable
- columbus
- hips
- tour
- sterling
- thumbs
- priced
- tablet
- amc
- bible
- safeway
- organism
- undertake
- freedom
- charger
- documents
- jars
- clay
- members
- o
- vegetables
- delicious
- beaumont
- tx
- finance
- exhibitions
- trumps
- month's
- v
- applebee
- dakota
- bus
- brighton
- pa
- darken
- promoted
- liverpool
- utah
- suggestions
- micheal
- complaints
- pencil
- keith
- fridays
- temperatures
- hardware
- exercise
- jpearsonjessica
- release
- hoover
- goshen
- chester
- wood
- woodchuck
- healthcare
- borges
- calculator
- dune
- reality
- jobe
- gossip
- piece
- convenient
- titled
- pork
- belongs
- hongbin
- wreck
- tool
- started
- gather
- bruno
- costa
- patel
- daniel
- corporate
- controversy
- wendy's
- texans
- biography
- flowers
- investing
- arrives
- finish
- spot
- crop
- culture
- enjoying
- fetch
- kill
- auto
- washing
- buffalo
- he's
- titles
- ross
- whose
- types
- pleasant
- erin
- madison
- tuesday's
- lif
- khan
- affordable
- season
- policies
- c
- expected
- hypothesis
- seth
- kicked
- unhappy
- gallery
- xorg
- used
- monali
- thakur
- noodles
- cher
- sally's
- tracks
- mid
- launch
- glasgow
- bridge
- releases
- pitt's
- server
- clarity
- yens
- motivational
- scratch
- blanket
- aib
- reads
- singing
- monas
- tuesdays
- winter
- rocket
- lands
- chan
- economic
- sister's
- aa
- film
- pb
- indiana
- departure
- pipeline
- stitch
- sleeved
- hail
- logan
- style
- quantum
- physics
- labeled
- delia
- began
- rrcg
- shape
- awards
- improve
- pertaining
- trance
- lives
- weight
- met
- brian
- sinatra
- sunglasses
- attending
- falls
- requesting
- sunday's
- overhead
- greg's
- rom
- historic
- georgia
- guest
- jaipur
- iroomba
- alfredo
- pride
- prejudice
- fill
- interview
- daddy
- wangs
- manchow
- university
- locally
- lowes
- tiring
- east
- medical
- metro
- bach
- schubert
- rooster
- czk
- channing
- pad's
- identify
- yelp
- scandal
- affect
- suffering
- enabled
- arby's
- saw
- mango
- itunes
- highlights
- brings
- sixteenth
- tourist
- wendys
- presley
- sold
- intern
- affairs
- fries
- buttermilk
- panda
- wants
- floor
- clint
- eastwood
- moe's
- planets
- equivalent
- morrocco
- gravity
- uploaded
- someplace
- availability
- issue
- fly
- jpy
- natural
- delta
- disappointed
- files
- q
- cindy
- shortest
- simple
- ring
- lotion
- maroon
- fort
- died
- bonus
- repetitive
- icecream
- statistics
- rebel
- lawn
- leith
- measure
- daytime
- september
- pilots
- pda's
- shade
- sil
- cap
- punjab
- gwalior
- ashley
- juice
- nagar
- ellen
- programs
- fairs
- invest
- suits
- ingredient
- launches
- leaves
- bjork
- crater
- elevation
- stewart
- hotels
- spices
- bubbles
- grass
- broccoli
- capricious
- philosophy
- anthony's
- apply
- pings
- gps
- thomas
- koontz
- acdc
- beijing
- ratings
- union
- prayer
- todo
- angles
- scissors
- stashable
- cinch
- bacon
- passive
- que
- occurred
- lakeland
- tulsa
- advise
- singapore
- risotto
- invested
- model
- helmsworth
- bench
- julian
- buddy
- rogers
- brains
- chap
- badminton
- dick
- lopez
- apartment
- points
- germany
- unknown
- thugs
- healthy
- rash
- casey
- oriam
- ps
- plants
- mailed
- ikoyi
- grassmarket
- marleen's
- locations
- bush
- mac
- reaching
- allan
- till
- cheering
- guitar
- oxford
- densely
- populated
- son's
- hubby
- comparison
- putin
- barcelona
- gss
- energy
- pan
- nyack
- worked
- unavailable
- bryan
- adams
- miss
- checkbook
- jared's
- enrique
- iglesias
- forms
- jeans
- voices
- alan
- tudek
- animals
- olx
- mts
- freed
- jenn's
- coordinates
- humid
- demographic
- otherwise
- tiffany's
- outdoor
- sheila
- lincon
- dust
- serve
- conduct
- estimated
- gaana
- funds
- downloaded
- indignation
- meijer
- necessary
- grubhub
- pancakes
- mario
- bars
- birmingham
- sites
- donuts
- chopra
- textual
- rapids
- cant
- prefix
- sounds
- provides
- amy's
- benton
- leeds
- dsw
- returning
- defective
- digital
- bhaji
- carlos
- linux
- upgrade
- shark
- attacks
- screening
- exposure
- souffle
- tracking
- od
- progress
- paused
- gilmore
- hour's
- imdb
- orleans
- european
- gdp
- surfers
- theme
- ash
- ikea
- klm
- marilia
- cars
- robin
- williams
- surfin
- ottawa
- trade
- contains
- field
- someone's
- prague
- brno
- rene
- interests
- radiolab
- harris
- strive
- accommodating
- fell
- relationship
- pharmacy
- memo
- nancy
- paid
- expressing
- disapproval
- yard
- royale
- hide
- amber
- cheeseburger
- coca
- cola
- al
- matrimony
- scott
- potato
- funniest
- polling
- mother's
- chase
- xmtune
- matt
- murphy
- detroit
- taiwan
- organic
- secrets
- domino
- ac
- assistants
- z
- fred
- owner
- required
- saga
- hanks
- trading
- erosser
- rosser
- vikki
- dhaka
- notepad
- oldies
- alison
- recur
- w
- mentioning
- languages
- lavender
- toned
- videos
- stein
- chennai
- resuming
- moms
- foke
- beep
- discussion
- woodland
- lowry
- meetups
- powerball
- toyota
- focus
- concentrate
- nbc
- roosendaal
- deactivate
- shrimp
- parmigiana
- bumper
- spouses
- lucknow
- paying
- hurry
- served
- rhythm
- enquiry
- hartford
- plaza
- hyundai
- wishing
- websites
- briefing
- complex
- calculations
- jarvis
- highway
- fired
- dissatisfied
- sandra
- bullock
- ratio
- haskell
- sharon
- horse
- mum's
- dillinger
- sunblock
- sub
- tab
- crude
- software
- stadium
- step
- short
- reddit
- appoints
- agra
- sheet
- keyboard
- kfi
- district
- connery
- carnival
- wok
- shutting
- phoenix
- cloth
- rehan
- lego
- alphabetical
- mexco
- charles's
- foodpoisoning
- ultra
- madonna's
- harley
- davidson
- daylight
- afi
- infy
- launched
- inboxes
- secretary
- increased
- resolving
- fuel
- injector
- multiple
- interval
- mike's
- espresso
- sasha
- susie
- salesperson
- country's
- cylinder
- specifications
- ivory
- pst
- zoella's
- jackman
- reacting
- potential
- frying
- boise
- wendy
- divisible
- automated
- katherine
- pre
- gaming
- containing
- decade
- industry
- foot
- chemical
- cause
- taste
- bra
- julianne
- hough
- addresses
- vonstaragrabber
- lion
- restroom
- kohl's
- mentioned
- hz
- royal
- bloodline
- relationships
- billings
- levin
- quarter
- lori's
- lori
- exclamation
- definitions
- birds
- raj
- priya
- allows
- worlds
- kelly
- clarkson
- garam
- scarlet
- found
- cub
- dmv
- excessively
- lake
- dried
- reporting
- smile
- changes
- charmin
- eternal
- smoked
- meat
- beanos
- processing
- chip
- logic
- insightbb
- highland
- terrace
- child
- peck
- midwest
- cardinal
- anthony
- barrack
- jancy
- thompson
- cassy
- gulls
- alternate
- sin
- dragons
- msnbc
- residential
- leader
- siblings
- pedro
- serendipitous
- bestbuy
- targets
- wawa
- mentions
- engagements
- hawaii
- jr
- applied
- halifax
- ahmedabad
- monty
- python
- stronomy
- blahblah
- blah
- arrivals
- subtract
- payoneer
- formal
- connors
- indranagar
- transform
- marcia
- perpetual
- arranging
- cvs
- callum
- steffi
- attention
- kanye
- mommy
- chucky
- forest
- polarized
- proposal
- conrad
- coldest
- hue
- dictator
- clancy
- geranium
- delays
- build
- lense
- rai
- transistor
- dildo
- warren
- exercises
- forman
- kinley
- bottle
- retail
- yan
- regal
- unprofessional
- annual
- payday
- tricep
- arts
- ripped
- vietnam
- trends
- chaise
- preparation
- nestle
- paula
- deen's
- bmw
- microsoft's
- bookstore
- below
- moving
- pretty
- lock
- administrator
- edition
- airways
- marvel
- garner's
- rubix
- cube
- kfc
- milwaukee
- pager
- alexander
- gilchrist
- goods
- performing
- unopened
- security
- chain
- probiotic
- colleague
- knowing
- novel
- fiesta
- comcasts
- acer
- farmers
- fraud
- weighing
- india's
- gotse
- grapefruit
- similar
- tmobile
- nifty
- sessions
- recital
- greatest
- openings
- zip
- demento
- fatigued
- disease
- prevention
- overcharged
- unquote
- cotton
- tweeter
- railways
- flipkart
- fist
- renee
- nutritional
- starred
- calculated
- mattress
- hillstead
- paul's
- jill's
- disregard
- pesto
- stinks
- nobody
- behind
- kid
- nature
- ounces
- ted
- boiled
- dancom
- wars
- fmod
- span
- along
- malls
- joining
- frequently
- realdonaldtrump
- bobby
- mcgee
- pwd
- obamacare
- clicked
- falling
- pampers
- virgin
- hayden
- pat
- amie
- infosys
- technologies
- roads
- aerosmith
- airtel
- dairy
- sends
- dues
- tobytoday
- ileana
- d'cruz
- rended
- taj
- ashok
- typhoon
- rama
- final
- missouri
- virginia
- announce
- haughty
- salmon
- joking
- goodnight
- rebecca
- believe
- vowels
- ban
- haze
- insight
- cable's
- fellow
- tweeters
- canoe
- warriors
- assassinated
- acceleration
- detailed
- wife's
- robert's
- angus
- interested
- jen's
- sjobs
- cdn
- ruth
- simran
- aapa
- kadai
- armor
- sms
- indefatigable
- indicate
- fra
- floors
- modcloth
- honor
- weigh
- priority
- hiking
- smoky
- judawa
- expense
- deals
- plethora
- sam's
- august
- elain
- bbq
- leap
- congressional
- representatives
- voting
- reproductive
- ge
- bbb
- contacted
- assigned
- jill
- drafts
- scoring
- touches
- relevance
- goggins
- medvesek
- philippiness
- booked
- board
- locality
- beth
- katey
- fans
- approximately
- charitable
- rae
- darker
- anymore
- printing
- significance
- fondle
- mate
- larry's
- larrylarry
- faripir
- gurpur
- seasons
- softball
- refreshments
- jamie
- carrie
- underwood
- abdul
- kalam
- subterranean
- colombo
- sri
- lanka
- quit
- dollar's
- award
- among
- spouse
- forgot
- ass
- millionaire
- indians
- americas
- julie's
- transcribe
- garbage
- geographics
- tree
- criticize
- tanzania
- heather's
- answering
- spam
- phishing
- reseda
- axel
- kailey
- prettiest
- century
- mattel
- toys
- grateful
- fixing
- maidan
- sophia
- betty
- reasons
- russian
- applicable
- loving
- claire
- crashed
- batteries
- philips
- person's
- compile
- ali
- matthews
- apologize
- comcastcom
- luke
- jean's
- carefully
- beg
- trying
- flooringco
- seams
- baking
- skiing
- calming
- continuously
- tale
- roraima
- innova
- bowling
- beginning
- identifier
- diverse
- santa
- continuous
- hangman
- vegetarian
- roast
- rewards
- allow
- immediately
- shelley
- hennessey
- waking
- dicaprio
- ways
- immigration
- raised
- lose
- digger
- cosmetic
- perth
- feet
- chick
- tornadoes
- upstairs
- badly
- timings
- lobster
- runner
- forum
- thunderstorms
- powered
- plugged
- rod
- mgccc
- bleed
- ga
- pune
- mixed
- dishes
- radisson
- cheetah
- what'sapp
- cm
- father's
- skill
- graham
- eggless
- collect
- favorited
- flag
- ssmith
- virtual
- bryant
- spots
- scapingyards
- washed
- springfield
- draw
- insurance
- quantity
- brightener
- cuba
- stream
- raincoat
- maiden
- soundtracks
- deliveroo
- humidity
- crowded
- built
- mesa
- rosenstock
- workpdf
- occurring
- environmental
- dbell
- converse
- radia
- logged
- scabble
- loads
- jacob
- hasbro
- aldi
- piramid
- completely
- method
- hems
- loose
- connect
- snapchats
- arizona
- festivals
- hospital
- peppers
- bowl
- korn
- lupe
- eurostar
- umf
- unchecked
- berlin
- lane
- synonyms
- hampshire
- shakira
- brads
- keanu
- reeves
- johns's
- increasing
- burgers
- stan
- falklands
- valley
- maria
- hangin
- glow
- we're
- newsource
- clark
- carrey
- jams
- crashing
- outback
- sugars
- defines
- joel
- venue
- huffington
- images
- elizabeth
- case
- agnes
- randomly
- mecky
- incredible
- even
- decreased
- vacations
- honey
- akon
- barbara
- handsome
- forensic
- spielberg
- korea
- coding
- achievements
- albert's
- clerk
- hopes
- zimbabwe
- buble
- research
- excel
- gun
- rogen
- resin
- tooth
- filling
- mody
- marinara
- vicki's
- mardi
- gras
- monika
- relatives
- chillin
- lol
- levis
- tricounty
- messy
- disgusted
- emoteck
- foroogh
- quick
- decline
- emailstudy
- atdfd
- giant
- trey
- kalka
- mcdo
- timestamp
- operate
- watched
- infinity
- tactics
- upbeat
- synonym
- racing
- towards
- fog
- muted
- coke
- eighties
- tvs
- theresa
- brent
- kamycka
- dejvicka
- tap
- peanut
- circumference
- saskatoon
- sync
- sofa
- mcdonald
- silenced
- catalogue
- algorithm
- sanctimonious
- talked
- realize
- reveca
- paok
- wipe
- bisque
- br
- rather
- silly
- stat
- tar
- vitamins
- gain
- xm
- fongs
- anywhere
- zanes
- se
- chronicles
- weber
- commence
- causes
- sangli
- german
- hedges
- truthdig
- coffees
- commuter
- plain
- mimo's
- oscar
- restrictions
- treasure
- louis
- stevenson
- fifa
- beast
- pav
- prambors
- hannah
- ringcast
- vegetable
- episodes
- overnight
- apps
- nathan
- dismiss
- karl
- hourly
- eyes
- breeds
- inside
- tribune
- join
- crabmeat
- shakira's
- yankee
- greenwich
- gala
- jump
- recall
- johnny
- cash
- pod
- cast
- rare
- suppose
- enjoyment
- emo
- nayagara
- passion
- pit
- marckel
- bohemian
- emma's
- arijit's
- pet
- prize
- receptionist's
- beat
- freds
- probles
- patagonia
- quart
- '?'
- zach
- duration
- jlo
- alphabetic
- phohouse
- badpho
- daybreak
- biryani
- battle
- divergent
- moby
- jungle
- jaiho
- casserole
- shooter
- columbine
- wednesdays
- soul
- accumulation
- squash
- calm
- debate
- schools
- amd's
- lee's
- managers
- myspace
- relaxing
- bahar
- antarctica
- atmosphere
- pinpoint
- payments
- illinois
- louisiana
- cfo
- pool
- vyas
- morel
- mysore
- rise
- sdfa
- newspaper
- calorie
- dangerous
- sunrise
- mostly
- dining
- shake
- flood
- prescription
- mix
- view
- jana
- spa
- comments
- pear
- factor
- clearance
- northern
- language
- arnold
- exxon
- mobil
- dragon
- fruit
- differences
- seashells
- seashore
- velocity
- motorolla
- haggis
- fiji
- irwin
- similarities
- hypertrophy
- sharukh
- implement
- kazakhstan
- mediterranean
- roman
- grigorean
- hardword
- quead
- amphibious
- roberts
- climatic
- tornado
- prone
- rising
- declining
- megatel
- denzel
- washington's
- citizens
- arm
- persos
- belarus
- gyllenhal
- geology
- helicopter
- iphone's
- drained
- manger
- navy
- daikin
- jerk
- nexus
- interaction
- platform
- tweeting
- at&t
- mahaboobsayyad
- kellogg
- ashmit
- ismail
- listing
- enalen
- projects
- clara
- clinic
- exams
- ammunition
- mark's
- divya
- jjnzt
- activation
- andy
- terry's
- brenden
- jeffrey
- burnette
- protests
- joshua
- pianist
- whiz
- schadenfraude
- rials
- storage
- bot
- provided
- massachusetts
- channin
- store's
- rump
- prior
- re
- intelligent
- recognise
- irobot
- areas
- lighter
- yell
- uses
- cn
- gadgets
- skynet
- marie
- lamb
- balcony
- nyt
- bennett
- ralph
- pda
- balloon
- maps
- degeneres
- character
- evans
- actor
- fitbit
- malika
- shivaji
- attitude
- lily's
- concerned
- upon
- startup
- stuffs
- tawa
- relative
- legacy
- cst
- leah
- remini
- mortgage
- amed
- cleaners
- seal
- abita
- grammar
- backdoor
- minimize
- leisure
- billie
- spicy
- training
- comfortably
- sunburn
- minneapolis
- habits
- braking
- notifier
- swan
- thoughts
- pleasure
- those
- kashmirstart
- sells
- i'dl
- kettle
- 'false'
- rta
- valia's
- visiting
- techno
- mornings
- mow
- cbs
- slightly
- francine
- vice
- postpone
- mins
- xyz
- hwood
- kept
- spider
- reopen
- billy
- connery's
- eiffel
- itinerary
- crash
- valentine's
- likexchange
- divorce
- danville
- il
- government
- menus
- capabara
- origin
- assistance
- vicinity
- chit
- drinks
- flabbergasted
- xy
- self
- double
- castle
- refrigerator
- bakery
- spray
- pyramids
- bio
- basic
- humans
- schwarzenegger
- inchoate
- rules
- caftan
- raleigh
- hobby
- ajay
- devgn
- corden
- aud
- prevailing
- kenny's
- crew
- aww
- spying
- employer
- thier
- juanpedro
- craig
- leon's
- looked
- players
- costs
- providers
- sydney
- documentary
- hyphen
- represent
- strings
- pianos
- acoustical
- celeb
- pong
- linear
- turn_down
- reaches
- strength
- routine
- billboard
- piano
- ed
- sheeran
- diet
- vietnamese
- yams
- grandmother's
- rihana
- require
- stressed
- option
- affected
- acquire
- retrieve
- clarion
- congress
- turiellos
- mates
- solar
- dice
- jalapenos
- wished
- painting
- therapy
- warehouse
- mop
- neighbor
- flappy
- returns
- someones
- spring
- wonton
- moves
- jagger
- fishing
- hiphop
- dunkin
- donut
- atlantic
- daughters
- hula
- hoop
- lessons
- scrote's
- indie
- grief
- lebron
- naughty
- preprogrammed
- alt
- needy
- sharpen
- butcher
- knife
- pulled
- starbuck's
- backward
- terrorist
- invaders
- parent
- crescent
- brewhouse
- prado
- science
- playlists
- debbie's
- sleeping
- searched
- lindsey
- lohan
- competitions
- subtracting
- challenge
- beer
- gainers
- chili's
- frubs
- police
- softly
- practical
- assessment
- bonefish
- rotating
- placed
- lakers
- barenaked
- ladies
- lord
- rings
- mar
- sneakers
- artists
- sanantha
- shuffles
- shuffled
- bardonia
- county
- analyze
- pattern
- girls
- league
- fjords
- nothing
- brewing
- smurfs
- tommy's
- lovin
- cottage
- ming
- photosynthesis
- danny's
- repeated
- peaceful
- migrations
- zydeco
- inkheart
- seller
- occurence
- telegraph
- invited
- wifi
- levels
- willie
- nelson
- dolores
- alter
- retirement
- professional
- development
- sainsburys
- byron's
- floyd
- raingear
- notorious
- bone
- explanation
- database
- likely
- lucky
- irish
- sshow
- ramsey
- aired
- sprint
- preparing
- academy
- yeshudas
- angels
- dancing
- aretha
- franklin's
- layers
- glass
- kuch
- hai
- wakey
- knitting
- mujhe
- feb
- king's
- malinda
- parents
- mirchi
- gallon
- seen
- parks
- safest
- evacuation
- beautiful
- sofia
- francs
- consequences
- various
- dicaprio's
- networth
- phelps
- disk
- constructed
- concern
- effectively
- lawrence
- zac
- galifrankas
- wheat
- prediction
- schemes
- mega
- capricorns
- dinky
- lanegan's
- princess
- pregnant
- smallest
- americans
- retweet
- insta
- sonys
- bk
- alzacz
- kohls
- cleanliness
- pizzahut
- delay
- lpg
- satisfied
- choke
- suqcom
- repairs
- killing
- miller
- budgets
- iamironman
- gbaby
- gma
- loves
- kate's
- margaret
- ben's
- brady
- palmer
- homework
- tax
- regional
- archive
- fitness
- vault
- footloose
- child's
- damage
- petco
- canceled
- passing
- pikes
- peak
- avatar
- diverge
- maron
- fault
- sword
- eventual
- contest
- dangal
- mauritania
- abs
- wondering
- southampton
- resources
- soy
- lexmark's
- hilly
- lyon
- beirut
- tribute
- madrid
- ate
- sweat
- charlize
- theron
- atif
- aslam
- capture
- actual
- shane
- dawson
- zedd
- snooker
- loquaciousness
- sholay
- tofu
- nightmare
- avenged
- sevenfold
- matters
- prompt
- panic
- brilliant
- boston's
- mckinleyville
- astrology
- strait
- countdown
- cats
- fruits
- embassy
- pita
- gyros
- negotiations
- hairdresser
- courteous
- enthusiastic
- funk
- sense
- heathens
- cabinet
- irctc
- stored
- shutoff
- glasses
- ella
- fitzgerald
- rover's
- vet
- polar
- bears
- oceanside
- medicine
- anita
- barrow
- burrito
- oliver
- covering
- ground
- zucchini
- textile
- antebellum
- chimes
- covington
- species
- bees
- cranston
- kilometer
- behaved
- rudely
- jimi
- hendrix
- calms
- outwards
- califonia
- composed
- hint
- shipping
- frosting
- sport
- napoleon
- hill
- athens
- middletown
- shirts
- sample
- politician
- investigated
- rapper
- con
- cuisine
- wizard
- brick
- conroe
- iterate
- architect
- salon
- babaji
- passed
- maryland
- surya
- monopoly
- avenue
- considering
- celebration
- brewed
- galoshes
- tutorials
- workouts
- millenium
- toward
- neighbourhood
- bannon
- storming
- reoccurring
- longtime
- sweetheart
- memos
- starfish
- centaur
- philippines
- oar
- departs
- preferably
- latte
- sides
- pentagon
- fashioned
- rescheduled
- transportation
- twins
- duker
- deadline
- samurai
- obaba
- bp
- ambiance
- automatically
- object's
- boost
- morale
- jogging
- spell
- firefly
- mura
- masa
- checklist
- biographies
- sucked
- congested
- avinash
- commando
- jolie's
- instrumentals
- clarksville
- tablespoons
- surveys
- flour
- acela
- calone
- bucket
- fulls
- valid
- references
- critical
- perpetuate
- luncheon
- ohm's
- values
- plying
- expectations
- musician
- mindsweper
- throughout
- noontime
- included
- tour's
- voted
- walgreens
- chickens
- monday's
- crankshaft
- surfer
- lunchtime
- skramz
- compounds
- diabetes
- might
- reservation
- homosapien
- engadget
- boeing
- brisbane
- ear
- headphones
- minimum
- worry
- snowplows
- burying
- driveway
- adapt
- destroy
- impanema
- equipment
- turnt
- attractive
- conducted
- cinnamon
- freshener
- watsapp
- bean
- awfully
- entitled
- murderer
- ford
- forties
- scenery
- morocco
- sf
- blokus
- preacher
- taken
- stormy
- centers
- ethics
- popup
- mysterious
- puts
- stage
- considerations
- lourie
- artic
- scoop
- carion
- merced
- bypass
- passwords
- quantico
- grade
- examples
- cuisines
- hibernate
- bear
- published
- authors
- tempo
- keidis
- tidal
- cookoff
- zones
- probable
- summerfest
- dogs
- aren't
- necessarily
- carolina
- eleventh
- chilling
- sleeve
- invoking
- term
- herald
- maria's
- poltergeist
- imagine
- uv
- index
- johncena
- instruct
- oscillate
- liter
- nelly
- shawarma
- baster
- pali
- vilnius
- tabs
- debates
- singers
- activated
- ozzy
- osbourne
- danish
- happypeoplecom
- accounting
- backpack
- im
- puttanesca
- keeps
- worse
- wrigley
- braise
- loin
- carnatic
- bases
- nick
- swisher
- stolen
- clouds
- cleared
- bola's
- norman
- reedus
- screwdriver
- window
- volcanoes
- rowan
- atkinson
- minneapoliscity
- delicacies
- monitor
- overall
- gymnastics
- channels
- kxly
- botswana
- enjoyable
- spectre
- chane
- decentralized
- men's
- freeze
- postal
- becomes
- ccn
- berth
- michigan
- composition
- shahi
- panner
- dakar
- jakarta
- equalizer
- weird
- barely
- rodriguez
- oklahoma
- giraffes
- margarita
- difficult
- crabs
- firework
- probability
- tools
- emigration
- legislation
- pdf
- cheeseburgers
- applications
- adopters
- priest
- walks
- mechanic
- h
- showers
- signs
- contrast
- recollect
- gm's
- duck
- beavers
- tail
- lucking
- horkersd
- wo
- myrtle
- hr
- steam
- entirety
- anirudh
- colored
- tropical
- bedrooms
- yellowish
- elephants
- expenses
- contents
- warmer
- royksopp
- etc
- progressives
- peoples
- cultures
- unset
- iceland
- mp
- mangalore
- tanya
- quad
- particulars
- insert
- tvf
- formidable
- origins
- eden
- depressed
- mc
- donalds
- rub
- regrets
- judgments
- scope
- intellectual
- capacity
- ahmadabad
- stethoscope
- superstitions
- rl
- stine
- quinoa
- martial
- smooth
- damn
- speeding
- stephen
- halley
- barry
- jealous
- siri's
- java
- scenarios
- pc
- transfer
- tw
- agent
- nightime
- creamy
- mirch
- dil
- cannon
- cameras
- process
- merriam
- webster
- dubstep
- rangoon
- wines
- older
- navigate
- chandelier
- egs
- recognize
- subscriptions
- mileage
- studies
- microphone
- immigrant
- electronics
- careful
- paint
- fund
- success
- resolved
- bola
- eva's
- roller
- augusta
- midtown
- surprise
- children's
- dongle
- seashell
- bots
- fallen
- centimeters
- poisoning
- sci
- fi
- outcome
- reform
- sleepy
- moderate
- chrome
- ultraviolet
- george's
- geek
- courses
- rundown
- legend
- equipments
- usher
- manor
- advertisers
- clue
- depending
- strongest
- outstation
- fallout
- shoal
- lastfm
- relocate
- pollution
- awareness
- bryce
- jessie
- carol
- nsnbc
- vacuumed
- chives
- splits
- arbor
- receiving
- toast
- futures
- brokers
- routes
- fixed
- additional
- switches
- church's
- governor
- enacted
- grams
- guitarists
- android
- babe
- sonny
- sear
- eliminate
- remain
- uc
- polk
- pakistani
- bedside
- reshuffle
- frida
- devil's
- rusk
- actors
- pakistan
- happenings
- sit
- montauk
- beethoven
- legends
- sunshine
- mothers
- smoke
- feels
- rockies
- miamy
- operations
- addition
- subtraction
- incite
- annoying
- cristiano
- ronaldo
- spin
- cows
- jenny
- spread
- wallstreet
- selections
- nashik
- ipl
- oswald
- chambers
- horoscope
- mgk
- dog's
- residing
- cricketer
- dhoni
- byron
- fluctuations
- talks
- palermo
- shallowest
- bbcnews
- nsdl
- flights
- lineup
- stick
- ribs
- jeopardy
- timetables
- emi
- maya
- mackensie
- osteen
- jimmie's
- adjustments
- precocious
- fork
- husband's
- audi
- hibachi
- disputed
- crack
- visible
- boiling
- rogan
- karachi
- babysitter
- kidnapping
- hamburgers
- madonnas
- lessen
- ipo
- greenville
- carries
- creamed
- pickled
- herring
- tackle
- brush
- geyser
- savings
- torey
- hurt
- subscribe
- picks
- birthdate
- goals
- cairo
- projected
- patrick's
- capita
- honda
- intended
- hurriedly
- activates
- it'll
- wsj
- spy
- broods
- grommet
- steven's
- underground
- seahawks
- participants
- workday
- ammi
- nightlife
- donner
- summit
- ukraine's
- ended
- arrangements
- altucher's
- writer
- fortune
- brisket
- grant
- audiobooks
- twilight
- bass
- hunger
- roses
- barbecue
- tuna
- deadly
- killers
- finally
- trilogy
- grisham
- goblet
- roadblocks
- birthday's
- biscuits
- lawyers
- steve's
- kari
- labyrinth
- commonwealth
- sharma
- gulf
- petrol
- earthly
- ultimate
- ending
- allison
- canberra
- honolulu
- flash
- salman
- gresham
- hindustani
- stroganoff
- sock
- creates
- geo
- traits
- moral
- rein
- blood
- slayer
- pro
- bono
- succinct
- dalls
- somethings
- sharp
- izzo
- whiny
- bitch
- macaroni
- nights
- jumper
- blind
- cure
- cancer
- vibrant
- sloth
- transition
- recycling
- bbc's
- columbia
- kentucky
- hire
- opera
- prefer
- avoid
- sort
- comedy
- compassionate
- nc
- va
- riddles
- segment
- youth
- charity
- surrounding
- punjabi
- sharply
- lovett
- barber
- label
- hypocrisy
- subscriber
- captain
- disillusion
- hyderabad
- dashboard
- storm
- barrel
- panasonic
- clinton
- canasta
- mittens
- badra
- amit
- trivedi
- crystal
- lewis's
- everywhere
- rue
- evaporated
- mma
- offered
- tutoring
- peas
- dream
- cafes
- lauderdale
- deletion
- precise
- parliamentary
- remotely
- connection
- calendars
- stupidest
- shovel
- western
- cutting
- ll
- rapping
- spelling
- mama
- tatum's
- fulton
- universal
- garner
- chill
- icebo
- college's
- rehman
- soundcloud
- scorecards
- ketchup
- jimmy's
- crate
- lexmark
- preference
- females
- federal
- andreas
- sportsnet
- favourites
- janice
- bins
- pamela
- covered
- rhapsody
- italian's
- ke
- panera
- remainders
- tandoori
- sukhwinder
- sunidhi
- etymology
- googleplex
- slide
- wearing
- trivial
- pursuit
- cancels
- martina
- mcbride
- finances
- vocab
- zipcode
- compaq
- composer
- margarine
- jonathan
- entrepreneur
- extended
- combo
- memories
- tupac
- affects
- drunks
- ford's
- liked
- dealership
- olky
- realtor
- thighs
- ourselves
- economics
- medication
- gross
- domestic
- donaldson
- prostate
- wicker
- rooms
- instrumental
- savannah
- outing
- affleck
- quotes
- tire
- montana
- exhausted
- acoustic
- commercials
- convenience
- consciousness
- serge
- gainsbourg
- windows
- turks
- generate
- pedicures
- btaxes
- departures
- frasier
- amazon's
- bluetooth
- verus
- neat
- forecasted
- bing's
- dropped
- recurrent
- candidate
- aware
- blackeyed
- pees
- prince's
- perimeter
- rectangle
- aaron
- carter
- involve
- drugs
- lighten
- slicker
- rains
- cloud
- carrot
- popcorn
- carmike
- cinemas
- greater
- minestart
- frog
- lenon
- unique
- hanging
- hung
- sporty
- seldom
- jocko's
- kid's
- viewers
- cantonese
- usage
- specs
- bugatti
- veyron
- chief
- blockbuster
- krishnarajpuram
- interstate
- hammers
- obligatory
- wonder
- southeast
- marlon
- brando
- ferrel
- tal
- obidallah
- manoeuvres
- merita
- rotate
- changs
- pepsi
- shanghai
- branden
- wind
- landmarks
- dvr
- congestion
- valentines
- eastwind
- lomaine
- geneva
- officially
- hopkins
- takjistan
- dimmer
- karo
- apne
- aur
- karna
- chahta
- hu
- purchased
- otherplace
- giraffe
- ute
- requirement
- watts
- powerful
- bulb
- oclock
- nba
- hulu
- composing
- melissas
- millilitres
- spoons
- goulash
- thor
- harischand
- mg
- i95
- sb
- kilo
- diana
- llyod
- webber
- wool
- penultimate
- bang
- philosophers
- nietzche
- focault
- profession
- kilograms
- turkeys
- bibulous
- angeline
- atm
- narwhal
- kilamanjaro
- captia
- volkswagen
- onkyo
- av
- receiver
- ipad
- aniston's
- summarize
- ice
- jindel
- pump
- nikki
- minaj
- nationality
- snoodle
- yemen
- sudan
- unprompted
- organization
- megan
- fares
- engage
- functioning
- dinar
- conservative
- korean
- sahara
- kingdom
- antartica
- telugu
- tamil
- tsunami
- rajani
- khanth
- venture
- goalkeeper
- dushambe
- abrupt
- hbo
- sopranos
- parana
- cave
- anime
- posters
- johny
- depp
- invisible
- graphical
- joli
- pricing
- beech
- nuclear
- triad
- hilton
- borders
- lucille
- redhead
- geraldine
- ferraro
- bde
- lowered
- phrases
- nicole
- mcgoat's
- manipulate
- roip
- nasa
- google's
- davy
- crockett
- springsteen's
- richest
- costliest
- easily
- gm
- psso
- kroner
- maple
- trees
- christie
- brinkley
- libraries
- gmb
- key
- mongolia
- anastasia
- telekenesis
- promise
- stray
- cruise's
- starring
- odyssey
- polish
- zloty
- hook
- ups
- integral
- exponential
- berkshire
- hathaway
- tables
- pink's
- alligator
- porto
- tommy
- hilfiger
- print
- networks
- snaps
- celebrate
- bina
- yay
- smiley
- emoticon
- commented
- folgers
- hathway
- huge
- lfi
- tagged
- treated
- hersheys
- aircel
- nastyburger
- linkedin
- tracy
- waiter
- drain
- charge
- neptunal
- poorly
- waited
- inappropriate
- potus
- accounts
- vodafone
- complaining
- spoiled
- positive
- tumblr
- unpleasant
- overpricing
- cheating
- connected
- else's
- greetings
- thought
- waste
- excess
- micro
- lodge
- snapdeal
- sonic
- hole
- sole
- patel's
- insect
- packet
- elsewhere
- moan
- easyjet
- snotty
- expired
- xl
- sizes
- filing
- applebee's
- angela
- merkel
- swagging
- moto
- sluggish
- flavia
- mum
- jacob's
- existing
- cannot
- pleas
- mahmoud
- ebay
- smsayyad1985
- kishore17051985
- fedex
- truette
- petey's
- tessa
- gaurav
- karen
- mongomery
- llc
- joseph
- turnpike
- accumulated
- deadlines
- fees
- ppt
- emergency
- missing
- carl's
- attach
- physical
- drill
- marilyn
- jugal
- here's
- bug
- sarasigmon123
- lindafancy55
- markpolomm
- gary's
- mailing
- bill's
- erins
- beth's
- wont
- stacy
- cadwell
- tori
- aloud
- brenda
- thisome
- smurfette
- smithjoe
- hwacuk
- chong
- giselle
- bosses
- havent
- frieda's
- jjjindia
- exists
- batch
- samuelwaters
- joose
- hellen
- builders
- accepted
- victor
- taxi's
- terry
- macdonald
- yahoocom
- metion
- rodger
- christy's
- otp
- jayesh
- tried
- morgan's
- office's
- rob
- qerwerq
- secured
- gerry
- raj's
- junable
- shopyourway
- reference
- jhonny's
- marissa
- rosa
- bert
- ana
- goddammit
- pronounce
- serious
- recheck
- slowly
- failed
- fuck
- executed
- clearly
- errors
- showed
- races
- thursdays
- funky
- handmaid's
- beam
- scotty
- debit
- wiki
- editor's
- automobiles
- promo
- discount
- director
- act
- bejeweled
- aside
- snakes
- ladders
- marsala
- influx
- bayou
- reasonably
- tapas
- az
- ddlj
- meatball
- newscast
- bibber
- tmz
- devon
- applebees
- hihop
- doggie
- feelings
- radios
- litle
- tsos
- congratulate
- links
- treble
- flame
- eta
- encourage
- students
- choices
- lobby
- vf
- chore
- butterfly
- clips
- urban
- regular
- bi-weekly
- baltimore
- sport's
- breakups
- dale's
- brea
- douglasville
- fundraiser
- dolphines
- maradona
- pe
- becky
- appointed
- deputy
- utar
- pradesh
- anniston
- handy
- sainsbury's
- attenuate
- parcel
- jakes
- bristo
- stressful
- deposit
- mathematical
- superstar
- survivor
- destiny's
- westcombe
- facility
- oboe
- mcnamara
- abolish
- swim
- repair
- grub
- hub
- ill
- dec
- dreams
- wyatts
- obstacle
- poach
- dental
- rose
- davinci
- trevor
- noah
- ncaa
- entrapreneur
- sanam
- differs
- ave
- hopsin
- enya
- wbc
- accordingly
- remarks
- sufi
- beibers
- arrested
- sensor
- music's
- author
- antwerp
- cnn's
- foodnetworkcom
- customize
- preferred
- unable
- duct
- tape
- gooseto
- apig
- ringer
- secure
- passage
- tomatoes
- wan
- senelena
- americano
- makeup
- robotics
- teleconference
- robotic
- poughkeepsie
- steel
- day's
- soundtrack
- tobymac
- transit
- gloria
- furious
- nazi
- hunting
- effect
- marvin
- gaye
- pasadena
- ca
- constrain
- singles
- outer
- nowhereville
- comfortable
- erica
- grebe
- wooly
- trigonametry
- obsessed
- graphics
- undone
- tough
- treasury
- toledo
- munich
- obtain
- nutritionally
- balanced
- internal
- locks
- exit
- mocking
- lyft
- transaction
- tasty
- mixture
- according
- hands
- supports
- canceling
- congressman's
- lenin
- spagetti
- controversial
- statements
- walker
- humor
- nkotb
- jon
- snow's
- possibility
- wellington
- nz
- advantages
- disadvantages
- driver
- towels
- stretch
- gear
- joey
- crimson
- chose
- pineapple
- asparagus
- teaspoons
- bling
- medieval
- engines
- foods
- hurts
- cannibal
- tonic
- bitcoin
- collection
- hidden
- figures
- brasil
- politic
- superb
- dalida
- capuccino
- analysts
- thankama
- kodaikanal
- vote
- burritto
- chipolte
- abut
- sedaka
- chamber
- rfi
- knock
- cnncom
- remchi
- fl
- ortcars
- flip
- wire
- thriller
- fiasco
- breaks
- dam
- paradise
- presidency
- sigur
- ros
- socks
- van
- halen
- wayne
- spare
- lightness
- appropriately
- both
- musics
- coastal
- cry
- friend's
- wore
- veganism
- picnic
- regent
- visited
- therapist
- inauguration
- swatishs
- dorothy
- known
- supervision
- superbowl
- eric's
- bday
- kar
- abhi
- achche
- ache
- rahe
- honge
- mhz
- sponge
- bistros
- brownies
- tenderloin
- enchiladas
- gluten
- hotdog
- row
- bing
- notebook
- pulldown
- clearer
- medford
- drivers
- waverley
- canal
- connecting
- summers
- gibraltar
- monoprice
- mxblue
- mechanical
- turbulence
- carey
- blunder
- factorial
- depends
- commands
- stand
- draymond
- susumu
- hirasawa
- yosemite
- '200'
- baguette
- stonehenge
- douriff
- ivf
- ivr
- litt
- runs
- hesitant
- crock
- guetta
- malaysia
- whelers
- sadness
- william
- coral
- daft
- punk
- sandle
- santha
- ingerman
- calc
- shibaru
- alcohols
- nano
- gina
- desta
- mgmt
- bana
- talking
- garvin
- trilly
- nytimes
- chhana
- mereya
- favor
- strained
- cooler
- films
- einstein's
- aroma
- ska
- raphsody
- trebuchet
- forth
- relate
- qualifications
- kirk
- franklin
- arithmetic
- skyfall
- bathrooms
- raghu
- dixit
- reports
- availables
- haddock
- odd
- cape
- cod
- noisy
- dull
- hackernews
- porn
- pad
- fight
- fighter
- nzd
- melodious
- burton
- helena
- campaign
- mcclanahan
- mummy's
- motown
- rasgulla
- janta
- pvt
- ltd
- heartthrob
- justin's
- velociraptor
- hippo
- senatra
- giggle
- peru
- nirvana
- anirudh's
- retro
- mf
- doom
- summarise
- ariana
- grande
- predicted
- creed
- user
- desire
- kenny
- roger
- sia's
- thrills
- wapo
- stockholm
- okinawa
- occasionally
- shuffling
- veggie
- mukkala
- mukkabilla
- guardian
- anytime
- themes
- horror
- ennema
- eatha
- homestead
- forever
- mayor's
- stance
- council
- master
- louies
- keane's
- fears
- noe
- reggae
- largo
- swiftm
- afi's
- xinhua
- dedicated
- bottom
- franks
- yelawolf
- ucl
- flop
- grammys
- espn
- joni
- mitchell
- shot
- tequila
- sleepyhead
- aces
- redder
- edms
- lamp's
- loudest
- brolly
- thao
- nguyen
- interior
- dine
- dogwalking
- nytimescom
- overcast
- deactive
- foo
- disasters
- opacity
- dea
- guam
- drug
- abuse
- itzhak
- perlman
- drawing
- sweden
- bombing
- ireland
- poll
- hotha
- defrosting
- salt
- toggle
- spb
- weatherit
- either
- forecasts
- intellicast
- weathercom
- orevena
- recorder
- pizzahouse
- reorganize
- sticky
- umbrellas
- opened
- cleaned
- shakin
- bakey
- tips
- hypoallergenic
- sarcastic
- cheat
- ii
- developers
- edg
- yaad
- dilana
- kahin
- samantha's
- rita's
- adding
- bro's
- attendees
- maggie
- valet
- groomer
- timeframe
- pete
- faculty
- parade
- greens
- jack's
- walter
- gemma
- nail
- arora's
- namkeen
- tonights
- ggg
- tie
- iheartradio
- rov
- javan
- wfrn
- kicks
- osteen's
- wgrr
- lite
- prairie
- companion
- palhunik
- pudding
- tutorial
- welsh
- rarebit
- oatmeal
- pathia
- achieve
- veg
- pulav
- crockpot
- prepared
- keno
- pinball
- fishdom
- nfs
- harvest
- crops
- farmvile
- millionaires
- vodka
- depend
- pon
- stationary
- mad
- errands
- paav
- queried
- pepper
- rowling
- shadi
- viewed
- mlb
- heavyweight
- citadel
- scene
- circus
- trolls
- grab
- kung
- fu
- bowery
- railway
- coach
- fare
- metrolink
- navigation
- westwood
- layfayette
- inconvenience
- emotions
- arrahman
- cosmos
- multiplied
- abouts
- hitting
- eliot's
- el
- ribbons
- sperm
- whale
- eaten
- lbs
- pinhead
- timeliness
- defining
- thesaurus
- penalty
- approval
- poetry
- ambulance
- jello
- shots
- ferrell
- stassi
- schroedder's
- tacobell
- hierophant
- zealand
- stockton
- emissions
- blowing
- kennedy
- ziggurat
- gagas
- gretszky
- hemingway
- pages
- earn
- nobel
- actions
- sloths
- parton's
- madagascar
- acting
- tiangle
- trebuchets
- googs
- gandhiji
- amal
- brazil
- adviser
- rich
- acted
- rihanas
- stamp
- mugy
- msn
- busdriver
- fergie
- flick
- ribons
- nakumuka
- postmates
- complaintum
- glinder
- gta
- rcg
- outlet
- hadock
- mclanahan
- coal
- mumy's
- piza
- wheelers
- guarante
- debugging
- debuging
- proper
- sung
- bilando
- terrorism
- cover
- dimmed
- vanilli
- marauthr
- wooo
- michael's
- shutdown
- pittsburgh
- precipitation
- riff
- portland
- muggy
- giants
- banks
- steelz
- ensure
- ricky
- matin
- tyres
- plant
- chased
- advice
- gossiping
- society
- mitushree
- hairdresser's
- biology
- fsu
- reflect
- yashas
- vinay
- vally
- closed
- shoutcast
- pilkington
- soda
- powder
- sambar
- cookingforu
- thermonuclear
- battleship
- cereal
- wishlist
- wrist
- hipsterhood
- duncan
- trussel's
- simmons
- wide
- cisco
- crafts
- sporting
- presently
- sheffield
- septa
- lead
- fransisco
- washingdon
- evolution
- mariah
- kya
- tum
- mere
- karne
- karoge
- acts
- assembly
- idle
- brand
- meridian
- terranova
- guarantee
- marian
- fields
- farthest
- philippine
- cambodia
- situated
- foruget
- monopricechanical
- peenth
- moroco
- piz
- tre
- supplwn
- viki
- shivle
- loged
- applebe
- acess
- madagar
- anp
- socer
- subcribe
- pluged
- imigration
- audiowan
- debie's
- imediately
- f
- locar
- duark
- rebeca
- talle
- banas
- ragh
- acordingly
- wakely
- en
- bress
- acording
- stefanan
- puding
- vegie
- vius
- edie
- domizza
- eg
- cheeseiza
- ocurred
- brightnes
- alaba
- memory
- fransico
- sunderland
- boogie
- butt
- leviathan
- shinning
- premier
- cleanup
- wacky
- aman
- cherry
- bomb
- solstice
- silently
- closet
- nakumukka
- shed
- responses
- yankees
- investigation
- dooa
- pieces
- imogen
- heap
- stole
- dynamite
- cease
- operating
- rained
- uptown
- suggestion
- finlee's
- bedtime
- sockets
- sanfranscio
- abbas
- cn's
- vibrate
- cooling
- sheriffs
- hike
- ilayaraja
- speaking
- un
- storms
- roof
- tube
- jackpot
- classmates
- extremely
- somewhere
- drenched
- sentient
- budy
- heating
- apt
- parenting
- concerning
- seo
- searches
- sticking
- patterns
- numbered
- impression
- reunion
- presents
- mehta
- willing
- discuss
- evan
- parker
- violin
- lesson
- musicworkz
- registration
- opens
- evening's
- thursday's
- nineteenth's
- hayathis
- shower
- corresponding
- showcase
- famosa
- kamp
- neal
- brenan
- gx
- nonstop
- rm
- giver
- traveller
- knowledge
- crispy
- supper
- broil
- noodle
- stuffed
- maccoroni
- almond
- clash
- clans
- ping
- keeper
- enemy
- coc
- detergent
- corn
- dill
- pickles
- ranch
- dressing
- lentils
- translate
- toothpaste
- rearrange
- groups
- santana
- pritzker
- winners
- libertarian
- mc's
- vitaly
- nfl
- mythical
- oriented
- provisional
- experiences
- safely
- themselves
- mia
- reducing
- learly
- court
- vin
- diesel
- netbooks
- chinatown
- aberdeen
- queens
- luni
- purchasing
- timing
- bagmati
- narrow
- egypt
- represented
- revelation
- britain
- aamir
- priyanka
- middleton
- base
- original
- nhl
- goal
- scorers
- osteoperosis
- laws
- correlation
- motivation
- ncaaa
- tense
- touring
- framework
- adel
- diamond
- schwarzenegger's
- stomachs
- cow
- chairs
- steph
- subjegant
- pategonia
- michelle
- todlers
- stakes
- tinder
- matches
- fjord
- equator
- triumph
- hell
- moldova
- presley's
- wa
- rajinikanth
- basalt
- bali
- airplane
- hash
- lit
- <sos/eos>
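# Note: <sos/eos> closes the output token vocabulary listed above; the keys that
# follow configure the rest of the ESPnet recipe (loss, preprocessing, features,
# augmentation, and model weights).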
two_pass: false
pre_postencoder_norm: false
init: null
input_size: null
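# ctc_conf below: settings for the CTC branch of the loss. ctc_type "builtin"
# selects the built-in (PyTorch) CTC loss, and zero_infinity zeroes out infinite
# losses that can arise when an input is shorter than its target.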
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
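# rir_scp / noise_scp control reverberation and noise augmentation during
# preprocessing; both are null here, so that augmentation is effectively disabled.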
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
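# frontend "default" is ESPnet's standard STFT plus log-mel filterbank feature
# extractor, operating on 16 kHz input audio (fs: 16k below).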
frontend: default
frontend_conf:
fs: 16k
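# specaug: SpecAugment applied to the feature sequence, per the values below:
# time warping with window 5, up to 2 frequency masks of width 0 to 30 bins,
# and up to 2 time masks of width 0 to 40 frames.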
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
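# utterance_mvn: per-utterance mean and variance normalization of the features.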
normalize: utterance_mvn
normalize_conf: {}
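# model_conf: hybrid CTC/attention training. The total loss interpolates the CTC
# and attention objectives with ctc_weight 0.3, and the attention branch uses
# label smoothing with weight 0.1 (lsm_weight).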
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
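# transcript_token_list: a second, word-level vocabulary. Its presence alongside
# the token list above suggests a two-output setup (e.g. transcripts plus
# semantic tokens), with <blank> and <unk> serving as the usual CTC blank and
# out-of-vocabulary symbols.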
transcript_token_list:
- <blank>
- <unk>
- the
- to
- i
- me
- you
- is
- what
- please
- my
- a
- for
- 'on'
- in
- of
- email
- this
- it
- have
- from
- and
- play
- olly
- that
- new
- can
- do
- how
- tell
- about
- at
- any
- today
- not
- time
- are
- check
- list
- send
- with
- an
- one
- emails
- last
- will
- am
- again
- set
- next
- would
- was
- up
- like
- turn
- said
- calendar
- meeting
- get
- what's
- right
- all
- did
- be
- need
- want
- song
- tweet
- add
- event
- your
- news
- 'off'
- weather
- there
- lights
- more
- now
- alarm
- pm
- music
- show
- confirm
- train
- could
- think
- does
- make
- command
- just
- find
- when
- tomorrow
- much
- where
- week
- by
- give
- events
- know
- day
- start
- two
- latest
- response
- that's
- remind
- done
- but
- thank
- stock
- some
- you've
- answer
- five
- open
- current
- many
- remove
- radio
- good
- book
- 'no'
- facebook
- going
- it's
- volume
- reply
- work
- delete
- go
- complaint
- contact
- if
- service
- let
- thanks
- so
- hear
- once
- correct
- john
- playlist
- birthday
- got
- post
- ten
- order
- sorry
- has
- date
- hey
- coffee
- who
- rate
- three
- exchange
- further
- light
- twenty
- price
- mail
- reminder
- explain
- podcast
- ticket
- down
- really
- clear
- seven
- schedule
- alarms
- say
- morning
- change
- twitter
- cancel
- number
- dollar
- stop
- out
- appreciated
- hundred
- wrong
- don't
- information
- address
- contacts
- read
- york
- us
- which
- should
- 'yes'
- details
- songs
- between
- nine
- anything
- s1
- received
- playing
- shut
- dot
- mind
- com
- google
- most
- put
- job
- traffic
- four
- best
- six
- create
- recent
- yeah
- happening
- friday
- name
- very
- area
- mom
- or
- take
- appointment
- yeap
- room
- world
- home
- hour
- message
- eight
- clarify
- s2
- party
- episode
- here
- elaborate
- alexa
- appreciate
- customer
- i'd
- sent
- thing
- march
- look
- tonight
- place
- try
- after
- definition
- call
- well
- times
- rock
- phone
- speak
- today's
- whats
- food
- thirty
- see
- joke
- every
- pizza
- write
- lists
- game
- shopping
- weekend
- rephrase
- month
- matter
- s
- update
- station
- vacuum
- great
- detail
- long
- gmail
- old
- repeat
- city
- audiobook
- perfectly
- status
- inbox
- mute
- local
- near
- restaurant
- thousand
- tuesday
- year
- we
- media
- before
- around
- resume
- musch
- her
- house
- taxi
- hours
- didn't
- describe
- answers
- understand
- incorrect
- word
- listen
- first
- item
- d
- trump
- save
- days
- socket
- recipe
- nice
- u
- reminders
- social
- search
- as
- monday
- subject
- location
- movie
- saturday
- euro
- dinner
- them
- ask
- let's
- scheduled
- plug
- i'm
- gotten
- question
- minutes
- friend
- favorite
- meetings
- define
- instructions
- exactly
- cook
- understood
- sentence
- thursday
- grocery
- correcly
- their
- words
- temperature
- person
- amazon
- catch
- company
- mean
- something
- correctly
- living
- fantastic
- help
- following
- dollars
- rain
- speakers
- instruction
- helpful
- increase
- consumer
- evening
- family
- upcoming
- jazz
- saying
- way
- switch
- forecast
- task
- cleaner
- love
- late
- boss
- wednesday
- yesterday
- updates
- lower
- people
- cool
- wonderful
- twelve
- afternoon
- color
- wake
- oh
- lunch
- perfect
- back
- understanding
- useful
- amazing
- his
- dim
- movies
- chicago
- things
- takeaway
- fifty
- unread
- happy
- available
- noon
- wouldn't
- night
- had
- appointments
- idea
- michael
- doing
- over
- doesn't
- select
- hi
- shit
- may
- they
- delivery
- nearest
- buy
- apple
- car
- left
- confirmed
- report
- worth
- robot
- uber
- wemo
- sunday
- excellent
- outside
- blue
- looking
- messages
- top
- wear
- point
- too
- i've
- country
- prices
- bring
- store
- awesome
- unclear
- ok
- mark
- speaker
- app
- sound
- hot
- live
- jackson
- bad
- recently
- currently
- smith
- pull
- whatever
- india
- messed
- kitchen
- ninety
- percent
- him
- use
- office
- brightness
- care
- gave
- description
- tom
- regarding
- meaning
- meet
- siri
- bob
- joe
- hmm
- leave
- sarah
- smart
- come
- chicken
- seventeen
- walmart
- bill
- enough
- choose
- louder
- our
- trending
- born
- london
- zone
- account
- cnn
- audio
- president
- isn't
- compose
- coming
- second
- manner
- pick
- album
- uhh
- plus
- provide
- erase
- notification
- played
- channel
- donald
- pound
- instagram
- made
- bbc
- recommend
- happened
- united
- replay
- shop
- free
- dammit
- nope
- b
- nearby
- pop
- shops
- california
- highest
- notifications
- shuffle
- fm
- chinese
- currency
- uh
- restaurants
- jack
- april
- robert
- only
- been
- why
- states
- friends
- skip
- important
- he
- samsung
- later
- notify
- bedroom
- john's
- mails
- eleven
- red
- exact
- cold
- cup
- rates
- incorrectly
- fifth
- money
- boston
- spoke
- tomorrow's
- forward
- respond
- funny
- wait
- business
- market
- star
- headlines
- third
- favorites
- bother
- retry
- stocks
- high
- g
- favourite
- george
- umbrella
- directions
- wedding
- content
- m
- close
- spoken
- concert
- run
- alert
- searching
- mary
- into
- artist
- located
- mike
- anyone
- snow
- tickets
- then
- reset
- garden
- route
- hello
- tall
- likes
- talk
- forty
- share
- feed
- were
- indian
- washington
- difference
- remember
- convert
- receive
- tune
- level
- asking
- capital
- life
- dad
- yen
- street
- raining
- mistake
- correctly?
- quite
- pandora
- jane
- town
- yet
- player
- park
- san
- american
- far
- sports
- raise
- popular
- display
- these
- couldn't
- mountain
- dentist
- importance
- unimportant
- complain
- clean
- continue
- euros
- los
- ready
- yahoo
- can't
- classical
- politics
- newest
- lighting
- miami
- trip
- horrible
- info
- added
- prepare
- iphone
- machine
- mother
- miles
- via
- chris
- tv
- since
- bathroom
- state
- cheese
- request
- items
- oops
- ah
- closest
- warm
- microsoft
- settings
- value
- keep
- brighter
- note
- everything
- wife
- decrease
- okay
- using
- rap
- election
- sunny
- eat
- usa
- eighty
- fifteen
- until
- wanted
- wrongly
- dog
- obama
- years
- coat
- week's
- japan
- quiet
- paris
- angeles
- comcast
- target
- emailed
- airport
- interesting
- mcdonalds
- mr
- married
- green
- product
- past
- little
- other
- t
- listening
- cooking
- activate
- earth
- dance
- title
- florida
- rupee
- travel
- kids
- takeout
- pending
- america
- making
- its
- than
- doctor
- population
- bar
- plans
- power
- fourth
- silent
- ride
- milk
- how's
- seventy
- sure
- fine
- jennifer
- july
- sister
- brighten
- picture
- deliver
- singer
- clock
- inform
- brad
- burger
- never
- pesos
- object
- hero
- arrive
- classic
- olive
- games
- group
- watch
- line
- justin
- cost
- project
- called
- lets
- track
- still
- starbucks
- form
- repeating
- christmas
- breaking
- due
- cheapest
- forget
- posted
- james
- posts
- central
- lot
- stories
- whole
- small
- ever
- steak
- review
- requested
- wish
- david
- workout
- alex
- seems
- given
- gym
- largest
- la
- average
- compare
- china
- fifteenth
- having
- rupees
- band
- background
- meal
- online
- reserve
- file
- lamp
- laugh
- sun
- anniversary
- eastern
- busy
- mobile
- bit
- jokes
- places
- geographic
- else
- chess
- meant
- working
- p
- planned
- program
- seconds
- rated
- large
- issues
- road
- pay
- big
- holiday
- daily
- 'true'
- celebrity
- better
- hut
- being
- sixty
- away
- helped
- peter
- god
- cab
- someone
- internet
- page
- anna
- feel
- video
- steve
- opening
- lately
- sandy
- bank
- weeks
- id
- sam
- pitt
- river
- february
- i'll
- saved
- soup
- phrase
- distance
- economy
- hits
- sony
- eggs
- low
- water
- text
- topic
- co
- begin
- attend
- groceries
- adele
- reach
- within
- pause
- half
- yourself
- kind
- dark
- replied
- enter
- must
- asked
- beatles
- fun
- ingredients
- against
- invite
- soon
- colour
- different
- jacket
- updated
- seattle
- denver
- canada
- vegas
- mode
- pasta
- january
- doe
- listed
- refresh
- listened
- team
- longest
- spotify
- remainder
- telling
- mumbai
- you're
- orlando
- card
- rice
- during
- reduce
- locate
- future
- starting
- boil
- genre
- class
- slow
- famous
- named
- allen
- youtube
- works
- olly's
- dc
- brew
- through
- pounds
- football
- pacific
- white
- sings
- egg
- oil
- festival
- clothes
- moment
- die
- orange
- school
- kim
- las
- divided
- whether
- photo
- everyday
- ryan
- bills
- headline
- fix
- square
- npr
- jake
- brother
- todays
- terrible
- weekly
- type
- topics
- months
- chat
- yoga
- reading
- products
- extra
- cut
- adjust
- king
- personal
- client
- jan
- data
- doctor's
- computer
- rohit
- johns
- o'clock
- canadian
- mistakes
- rid
- names
- control
- sunscreen
- per
- lady
- head
- taylor
- always
- budget
- pink
- bought
- x
- side
- ahead
- articles
- english
- ny
- able
- reschedule
- fast
- hashtag
- tweets
- countries
- numbers
- running
- alabama
- blank
- madonna
- bright
- yellow
- west
- went
- options
- story
- october
- russia
- together
- n
- basketball
- joe's
- dominos
- tomorrows
- less
- situation
- colors
- mom's
- end
- payment
- drop
- downtown
- provider
- joes
- means
- helping
- mexican
- friday's
- cricket
- return
- needed
- death
- tech
- charlotte
- heavy
- draft
- sea
- paul
- r
- condition
- seventh
- dallas
- hip
- related
- article
- heard
- war
- elvis
- everest
- problem
- stating
- bieber
- system
- sales
- shoes
- hard
- become
- based
- kevin
- age
- she
- quality
- mile
- hair
- gas
- biggest
- inr
- climate
- hate
- twentieth
- sucks
- dean
- angelina
- turkey
- harry
- cake
- national
- record
- longer
- dave
- subjects
- brown
- supposed
- ocean
- church
- drive
- gandhi
- needs
- above
- theatre
- cookies
- abraham
- gone
- map
- television
- such
- face
- sale
- jim
- francisco
- sean
- june
- romantic
- compared
- curry
- ball
- jeff
- subway
- lincoln
- bed
- lagos
- turned
- south
- won
- trains
- girlfriend
- mahatma
- nsa
- hop
- amy
- commute
- solve
- came
- created
- dont
- history
- math
- telephone
- says
- laptop
- pawel
- offer
- fox
- single
- sixth
- midnight
- missed
- potter
- loud
- richard
- chuck
- looks
- practice
- body
- dan
- husband
- waiting
- birth
- stuff
- adam
- sender
- gaga
- truck
- france
- texas
- restart
- intel
- colours
- statue
- liberty
- intensity
- previous
- problems
- outlook
- visit
- wine
- peso
- continent
- utterance
- helps
- asssistance
- each
- north
- grand
- patrick
- match
- opinion
- plan
- trump's
- papa
- instead
- martin
- root
- purchase
- perry
- richards
- closing
- cloudy
- eddie
- senders
- move
- susan
- tesco
- size
- shows
- folder
- spaghetti
- doctors
- stores
- presidential
- dates
- theater
- menu
- agenda
- ann
- code
- animal
- frequency
- kansas
- roomba
- technology
- tasks
- without
- flight
- who's
- beach
- empty
- tired
- driving
- entire
- carry
- british
- dr
- asia
- rccg
- uncle
- vacation
- pepperoni
- programme
- standard
- reminding
- maximum
- starts
- tallest
- gonna
- fourteenth
- playback
- medium
- nike
- cruise
- changed
- diego
- arrange
- bowie
- learn
- mount
- particular
- costumer
- sundays
- fire
- calls
- silence
- podcasts
- spain
- dominoes
- website
- italy
- strongly
- agree
- agreed
- suggest
- mood
- fourteen
- result
- metallica
- thinking
- session
- profile
- england
- active
- ohio
- grid
- fall
- pot
- marriage
- queue
- told
- narendra
- jerry
- mt
- frank
- tenth
- wishes
- recording
- finished
- international
- calculate
- hit
- towers
- ninth
- site
- feeling
- macy's
- tag
- actually
- black
- birthdays
- hottest
- mary's
- expect
- snapchat
- jay
- smith's
- mountains
- building
- setting
- cleaning
- height
- initiate
- hall
- breakfast
- martha
- conference
- aol
- win
- steps
- fancy
- smartphone
- led
- zeppelin
- houses
- holy
- currencies
- club
- children
- atlanta
- einstein
- happen
- cell
- landline
- coworker
- objects
- negative
- modi
- soft
- haven't
- mention
- radius
- books
- daughter
- results
- earlier
- bruce
- butter
- stars
- remaining
- delivers
- device
- domino's
- unmute
- joy
- twelfth
- voice
- taking
- snowing
- sick
- boots
- cleveland
- journey
- destination
- worker
- poker
- lee
- katy
- australia
- incoming
- least
- lisa
- experience
- million
- recurring
- scenario
- sacramento
- geography
- library
- brief
- jolie
- monthly
- elton
- sirius
- alaska
- lyrics
- oven
- log
- random
- moscow
- barack
- disney
- alive
- measurements
- maker
- poor
- error
- stone
- versus
- hotmail
- interpret
- sarah's
- memorial
- goes
- stay
- delhi
- health
- special
- speed
- thirteen
- test
- edinburgh
- credit
- facts
- cat
- neighborhood
- sometime
- empire
- entry
- financial
- comment
- link
- hockey
- circuit
- holidays
- singh
- jodhpur
- rockville
- ones
- features
- bread
- eye
- mall
- directv
- contain
- seacrest
- chance
- under
- table
- few
- hotel
- rude
- services
- yesterday's
- certain
- fb
- abc
- netflix
- linda
- notes
- length
- reminded
- shoe
- wild
- employees
- beef
- sushi
- fastest
- thirteenth
- recommendations
- fish
- tennis
- main
- jersey
- jones
- break
- concerts
- gomez
- angry
- uk
- replies
- emily
- kickball
- released
- upload
- effects
- quickest
- italian
- caroline
- emma
- real
- human
- minute
- took
- activity
- jeff's
- staff
- handler
- touch
- hold
- joanne
- range
- moon
- submit
- ends
- tomato
- lost
- prime
- twelveth
- phones
- amd
- hectic
- bobburgers
- screwed
- porch
- reviews
- vegan
- rihanna
- houston
- ham
- mondays
- general
- engaged
- walk
- melody
- electronic
- held
- selected
- equal
- getting
- tata
- wall
- clothing
- round
- leaving
- nasdaq
- total
- pressure
- expensive
- border
- exhibition
- trash
- november
- handle
- halloween
- attachment
- kardashian
- shoot
- rewind
- rating
- toronto
- department
- procedure
- member
- ray
- chelsea
- rohan
- arrow
- checked
- modify
- wasn't
- chances
- protest
- lottery
- prince
- include
- jo
- net
- pie
- sleep
- enjoy
- nineties
- taco
- banana
- source
- quieter
- bored
- desert
- guys
- gary
- activities
- already
- contract
- st
- minister
- disable
- woman
- europe
- arijit
- audible
- presentation
- cad
- records
- trips
- booking
- tacos
- sally
- non
- centre
- direct
- advance
- selena
- policy
- orders
- stefan
- arrival
- divide
- chocolate
- dish
- teeth
- hdfc
- silvia
- stove
- coast
- defined
- digest
- snafu
- manager
- pinterest
- tim
- conversation
- bulldog
- titanic
- brunch
- heat
- canyon
- dial
- earliest
- region
- stopped
- foreign
- folk
- watching
- brexit
- albert
- joejoe
- early
- cities
- manchester
- december
- biloxi
- often
- questions
- garage
- tunes
- possible
- ms
- ar
- kiss
- shares
- bangalore
- heading
- derek's
- desk
- cheers
- tomasz
- terms
- companyname
- sara
- asap
- super
- meryl
- streep
- rent
- dress
- cinema
- usually
- trend
- conversion
- friendly
- ties
- ordered
- electricity
- marked
- migration
- choice
- journal
- norris
- aniston
- mailbox
- minus
- fried
- miley
- cyrus
- newly
- theory
- rest
- swift
- windy
- dan's
- mass
- comes
- selfie
- wings
- julie
- masti
- celine
- plays
- pack
- including
- responded
- jason's
- ale
- apples
- dolly
- oranges
- lg
- washer
- substitute
- global
- feedback
- grandma
- ben
- drainage
- invoice
- sunset
- takeaways
- man
- art
- universe
- suitable
- antonio
- full
- delivered
- laundry
- wrote
- min
- register
- snap
- nixon
- bird
- spend
- rome
- jesse
- calories
- cappuccino
- quickly
- buying
- britney
- spears
- spacey
- jobs
- arriving
- jean
- potholes
- janet
- pictures
- ashwin
- morgan
- freeman
- baby
- microwave
- yellowstone
- francis
- dubai
- invitation
- hope
- melbourne
- rocky
- kroger
- rivers
- charles
- jim's
- rectify
- statement
- carpet
- baked
- jessica
- meatballs
- mushrooms
- amount
- switzerland
- relating
- zero
- front
- phonebook
- hows
- cheesecake
- carryout
- magic
- ola
- replace
- recorded
- access
- land
- where's
- elephant
- removed
- liz
- load
- metal
- package
- diner
- goog
- bob's
- k
- year's
- mars
- guy
- assistant
- rahman
- eagle
- part
- burn
- aran
- stevens
- daughter's
- eighteen
- chemistry
- action
- selling
- thats
- koc
- lines
- sugar
- major
- chair
- easter
- departing
- africa
- nigeria
- requests
- conditions
- you'll
- manhattan
- roll
- cracow
- candy
- crush
- bell
- massive
- gold
- happens
- usual
- andrew
- equals
- dead
- plane
- graduation
- warned
- shaun
- triangle
- wyatt's
- pass
- function
- max
- space
- programmes
- awful
- parton
- exciting
- battery
- hwu
- recipes
- dirham
- rushmore
- johndoe
- button
- express
- pontificate
- easiest
- magda
- selection
- reservations
- guess
- copy
- classes
- supplies
- schedules
- winning
- berkeley
- notice
- headed
- outgoing
- mi
- rainy
- wikipedia
- entertainment
- dow
- everyone
- aunt
- furniture
- oceans
- softer
- heart
- newmail
- while
- baseball
- easy
- stations
- philadelphia
- alice
- swat
- yearly
- poem
- soccer
- president's
- milan
- paper
- kardashian's
- loop
- shown
- sandals
- yo
- scan
- nevada
- apahelp
- coldplay
- french
- bay
- higher
- rumplestiltskin
- airlines
- fresh
- standing
- cream
- hamburger
- broadway
- oscars
- tokyo
- cable
- shipment
- formula
- teacher
- sweet
- golden
- newsfeed
- confirmation
- shirt
- austin
- own
- canon
- wanna
- gods
- spanish
- count
- seat
- ideas
- study
- tara
- mutual
- jennifer's
- because
- edit
- denmark
- direction
- timer
- growth
- luther
- marketing
- cd
- mine
- public
- peter's
- bolshoi
- flat
- crazy
- others
- dry
- pub
- theatres
- bro
- fashion
- teams
- cycle
- pickup
- dion
- teach
- series
- checkout
- male
- noise
- solitaire
- pf
- cassie
- travelling
- davis
- naty
- income
- disco
- dropping
- donna
- follow
- shelly
- accidents
- plot
- irene
- download
- circle
- law
- tea
- organize
- principal
- weekends
- camera
- solution
- bombay
- wuthering
- heights
- charged
- colorado
- kong
- keys
- race
- mona
- entries
- j
- nyc
- potatoes
- gospel
- raju
- trivia
- bike
- dating
- oregon
- event's
- prefers
- rush
- percentages
- peking
- cooker
- husbands
- won't
- tower
- heaven
- hugh
- june's
- fake
- figure
- purple
- takes
- l
- howard
- stern
- nineteen
- percentage
- motorola
- doe's
- outstanding
- tesla
- laura
- dale
- warning
- eighteenth
- golf
- island
- career
- bieber's
- vacuuming
- pizzas
- refund
- weekday
- s's
- derek
- thanksgiving
- delayed
- query
- buffet
- rachel
- pants
- wash
- survey
- photos
- except
- topography
- door
- jen
- queen
- depart
- cheap
- theaters
- web
- jesse's
- multiply
- workhouse
- press
- click
- loss
- recipient
- verizon
- volcano
- rolls
- royce
- pixel
- affirmative
- completing
- thai
- walking
- bananas
- hollywood
- equation
- dirty
- scores
- katrina
- exam
- creating
- letter
- sing
- construction
- broadcast
- tom's
- rupies
- management
- permanently
- converting
- ist
- iron
- religion
- kings
- tucson
- standup
- tic
- tac
- toe
- headset
- sex
- diapers
- purpose
- seventeenth
- eighth
- dylan
- temple
- refer
- gift
- fact
- drink
- inches
- air
- carpets
- newcastle
- clients
- private
- tasting
- sams
- nj
- chili
- cultural
- swimming
- they're
- iowa
- jordan
- period
- accept
- cincinnati
- college
- rainbow
- myself
- deep
- deepest
- warming
- sky
- vp
- seeing
- indianapolis
- kmart
- nikesupport
- image
- suck
- broiler
- timeline
- dell
- parisa
- brandon
- example
- y
- filter
- sad
- shine
- sixteen
- christian
- pic
- pdr
- fry
- another
- network
- omelette
- kilometers
- municipality
- giving
- leo
- cups
- earthquake
- susan's
- application
- cross
- across
- carl
- pawel's
- sauce
- relativity
- rail
- sisters
- letting
- shorts
- vs
- rajesh
- swift's
- starving
- discussing
- block
- written
- n9ne
- women
- celebrities
- bake
- cookie
- continents
- workers
- leonardo
- mel
- gibson
- shall
- beauty
- sum
- fair
- deli
- middle
- same
- nile
- sell
- role
- boat
- sandwich
- parts
- hearing
- knows
- sand
- manoj
- delivering
- rahul
- neil
- australian
- kindly
- properly
- assist
- esurance
- emilia
- breach
- loudly
- harvard
- marc
- nintendo
- scrabble
- farm
- lie
- patio
- greg
- screen
- degrees
- yesterdays
- carrots
- receipt
- lasagna
- clooney
- there's
- degree
- preferences
- hallway
- latin
- nicest
- lauren
- worst
- also
- checkers
- input
- boyfriend
- masala
- tournament
- monet's
- burmuda
- section
- eric
- japanese
- supervisor
- junk
- performance
- effective
- urgent
- oldest
- tone
- sweater
- goa
- bag
- lowest
- aus
- peace
- julia
- summer
- fan
- hurricane
- colder
- steven
- sachin
- tendulkar
- watson
- exorbitant
- bags
- macs
- yulia
- matthew
- pole
- toby
- pennsylvania
- carmen
- tiffany
- complete
- electric
- wallet
- albums
- maths
- distribution
- eminem
- familiar
- regard
- upwards
- ron
- couple
- acme
- angel
- zoo
- nineteenth
- shazam
- inflation
- offers
- devotional
- jackie
- tony
- artificial
- intelligence
- grill
- father
- predictions
- repeats
- manila
- cooked
- reason
- learning
- nowadays
- cheer
- jingle
- bells
- anxiety
- hoizer
- girl
- pondichery
- position
- teachers
- dictionary
- nap
- cafe
- m's
- meting
- crime
- eve
- horn
- bristol
- pubs
- companies
- johnson
- resolve
- waterfall
- female
- biriyani
- drama
- nothappy
- haircut
- remote
- colleagues
- bones
- saturdays
- cambridge
- jam
- maine
- category
- invented
- chang's
- boy
- planning
- chen
- assignment
- publish
- hunt
- alerts
- dad's
- deal
- leading
- trail
- follows
- young
- jay's
- summary
- ko
- beyonce
- vergara
- mexico
- whishes
- arrived
- placid
- specific
- depot
- tikka
- expire
- markets
- problematic
- highly
- blues
- thirtieth
- brooklyn
- tatum
- argentinian
- redso
- des
- moines
- women's
- richard's
- cellphone
- division
- hong
- political
- charley's
- steakhouse
- accident
- normal
- wakeup
- satellite
- freezing
- forex
- jimmy
- chores
- snooze
- design
- museum
- guide
- speech
- ran
- shift
- inferior
- mashed
- jcpenney
- environment
- raw
- disturbed
- sia
- chips
- anybody
- present
- reynolds
- limbaugh
- weekdays
- islands
- viral
- asian
- streets
- inception
- meatloaf
- alternative
- compliant
- sensex
- phil
- est
- hand
- switched
- recap
- ferrari
- nandy
- promotion
- kate
- brothers
- ma
- followers
- closer
- deleted
- gloves
- bands
- platter
- boland
- corner
- strong
- chipotle
- eu
- amtrak
- son
- charges
- version
- rajdhani
- chart
- manage
- musical
- hat
- den
- tonight's
- syria
- stronger
- homelessness
- nails
- support
- ally
- sentences
- penn
- ago
- turning
- center
- hungry
- actress
- keywords
- usain
- bolt
- ongoing
- cancelled
- idol
- julia's
- wells
- fargo
- ri
- sarahs
- computers
- devices
- toms
- regards
- quote
- production
- brother's
- inch
- shell
- marathon
- directory
- dictate
- huey
- lewis
- elections
- alone
- marry
- apart
- danielle
- jane's
- mankind
- singularity
- nye
- feynman
- whom
- inventory
- makes
- dept
- apple's
- education
- bugs
- settle
- when's
- geographical
- jason
- exchanges
- mcdonald's
- tgi
- ship
- hershey
- facing
- faulty
- zita
- jeremy
- irons
- wallmart
- sphere
- hp
- gottten
- pardon
- engagement
- showing
- format
- absolute
- interest
- messenger
- gate
- enable
- columbus
- hips
- tour
- sterling
- thumbs
- priced
- tablet
- amc
- bible
- safeway
- organism
- undertake
- freedom
- charger
- documents
- jars
- clay
- members
- o
- vegetables
- delicious
- beaumont
- tx
- finance
- exhibitions
- trumps
- month's
- v
- applebee
- dakota
- bus
- brighton
- pa
- darken
- promoted
- liverpool
- utah
- suggestions
- micheal
- complaints
- pencil
- keith
- fridays
- temperatures
- hardware
- exercise
- jpearsonjessica
- release
- hoover
- goshen
- chester
- wood
- woodchuck
- healthcare
- borges
- calculator
- dune
- reality
- jobe
- gossip
- piece
- convenient
- titled
- pork
- belongs
- hongbin
- wreck
- tool
- started
- gather
- bruno
- costa
- patel
- daniel
- corporate
- controversy
- wendy's
- texans
- biography
- flowers
- investing
- arrives
- finish
- spot
- crop
- culture
- enjoying
- fetch
- kill
- auto
- washing
- buffalo
- he's
- titles
- ross
- whose
- types
- pleasant
- erin
- madison
- tuesday's
- lif
- khan
- affordable
- season
- policies
- c
- expected
- hypothesis
- seth
- kicked
- unhappy
- gallery
- xorg
- used
- monali
- thakur
- noodles
- cher
- sally's
- tracks
- mid
- launch
- glasgow
- bridge
- releases
- pitt's
- server
- clarity
- yens
- motivational
- scratch
- blanket
- aib
- reads
- singing
- monas
- tuesdays
- winter
- rocket
- lands
- chan
- economic
- sister's
- aa
- film
- pb
- indiana
- departure
- pipeline
- stitch
- sleeved
- hail
- logan
- style
- quantum
- physics
- labeled
- delia
- began
- rrcg
- shape
- awards
- improve
- pertaining
- trance
- lives
- weight
- met
- brian
- sinatra
- sunglasses
- attending
- falls
- requesting
- sunday's
- overhead
- greg's
- rom
- historic
- georgia
- guest
- jaipur
- iroomba
- alfredo
- pride
- prejudice
- fill
- interview
- daddy
- wangs
- manchow
- university
- locally
- lowes
- tiring
- east
- medical
- metro
- bach
- schubert
- rooster
- czk
- channing
- pad's
- identify
- yelp
- scandal
- affect
- suffering
- enabled
- arby's
- saw
- mango
- itunes
- highlights
- brings
- sixteenth
- tourist
- wendys
- presley
- sold
- intern
- affairs
- fries
- buttermilk
- panda
- wants
- floor
- clint
- eastwood
- moe's
- planets
- equivalent
- morrocco
- gravity
- uploaded
- someplace
- availability
- issue
- fly
- jpy
- natural
- delta
- disappointed
- files
- q
- cindy
- shortest
- simple
- ring
- lotion
- maroon
- fort
- died
- bonus
- repetitive
- icecream
- statistics
- rebel
- lawn
- leith
- measure
- daytime
- september
- pilots
- pda's
- shade
- sil
- cap
- punjab
- gwalior
- ashley
- juice
- nagar
- ellen
- programs
- fairs
- invest
- suits
- ingredient
- launches
- leaves
- bjork
- crater
- elevation
- stewart
- hotels
- spices
- bubbles
- grass
- broccoli
- capricious
- philosophy
- anthony's
- apply
- pings
- gps
- thomas
- koontz
- acdc
- beijing
- ratings
- union
- prayer
- todo
- angles
- scissors
- stashable
- cinch
- bacon
- passive
- que
- occurred
- lakeland
- tulsa
- advise
- singapore
- risotto
- invested
- model
- helmsworth
- bench
- julian
- buddy
- rogers
- brains
- chap
- badminton
- dick
- lopez
- apartment
- points
- germany
- unknown
- thugs
- healthy
- rash
- casey
- oriam
- ps
- plants
- mailed
- ikoyi
- grassmarket
- marleen's
- locations
- bush
- mac
- reaching
- allan
- till
- cheering
- guitar
- oxford
- densely
- populated
- son's
- hubby
- comparison
- putin
- barcelona
- gss
- energy
- pan
- nyack
- worked
- unavailable
- bryan
- adams
- miss
- checkbook
- jared's
- enrique
- iglesias
- forms
- jeans
- voices
- alan
- tudek
- animals
- olx
- mts
- freed
- jenn's
- coordinates
- humid
- demographic
- otherwise
- tiffany's
- outdoor
- sheila
- lincon
- dust
- serve
- conduct
- estimated
- gaana
- funds
- downloaded
- indignation
- meijer
- necessary
- grubhub
- pancakes
- mario
- bars
- birmingham
- sites
- donuts
- chopra
- textual
- rapids
- cant
- prefix
- sounds
- provides
- amy's
- benton
- leeds
- dsw
- returning
- defective
- digital
- bhaji
- carlos
- linux
- upgrade
- shark
- attacks
- screening
- exposure
- souffle
- tracking
- od
- progress
- paused
- gilmore
- hour's
- imdb
- orleans
- european
- gdp
- surfers
- theme
- ash
- ikea
- klm
- marilia
- cars
- robin
- williams
- surfin
- ottawa
- trade
- contains
- field
- someone's
- prague
- brno
- rene
- interests
- radiolab
- harris
- strive
- accommodating
- fell
- relationship
- pharmacy
- memo
- nancy
- paid
- expressing
- disapproval
- yard
- royale
- hide
- amber
- cheeseburger
- coca
- cola
- al
- matrimony
- scott
- potato
- funniest
- polling
- mother's
- chase
- xmtune
- matt
- murphy
- detroit
- taiwan
- organic
- secrets
- domino
- ac
- assistants
- z
- fred
- owner
- required
- saga
- hanks
- trading
- erosser
- rosser
- vikki
- dhaka
- notepad
- oldies
- alison
- recur
- w
- mentioning
- languages
- lavender
- toned
- videos
- stein
- chennai
- resuming
- moms
- foke
- beep
- discussion
- woodland
- lowry
- meetups
- powerball
- toyota
- focus
- concentrate
- nbc
- roosendaal
- deactivate
- shrimp
- parmigiana
- bumper
- spouses
- lucknow
- paying
- hurry
- served
- rhythm
- enquiry
- hartford
- plaza
- hyundai
- wishing
- websites
- briefing
- complex
- calculations
- jarvis
- highway
- fired
- dissatisfied
- sandra
- bullock
- ratio
- haskell
- sharon
- horse
- mum's
- dillinger
- sunblock
- sub
- tab
- crude
- software
- stadium
- step
- short
- reddit
- appoints
- agra
- sheet
- keyboard
- kfi
- district
- connery
- carnival
- wok
- shutting
- phoenix
- cloth
- rehan
- lego
- alphabetical
- mexco
- charles's
- foodpoisoning
- ultra
- madonna's
- harley
- davidson
- daylight
- afi
- infy
- launched
- inboxes
- secretary
- increased
- resolving
- fuel
- injector
- multiple
- interval
- mike's
- espresso
- sasha
- susie
- salesperson
- country's
- cylinder
- specifications
- ivory
- pst
- zoella's
- jackman
- reacting
- potential
- frying
- boise
- wendy
- divisible
- automated
- katherine
- pre
- gaming
- containing
- decade
- industry
- foot
- chemical
- cause
- taste
- bra
- julianne
- hough
- addresses
- vonstaragrabber
- lion
- restroom
- kohl's
- mentioned
- hz
- royal
- bloodline
- relationships
- billings
- levin
- quarter
- lori's
- lori
- exclamation
- definitions
- birds
- raj
- priya
- allows
- worlds
- kelly
- clarkson
- garam
- scarlet
- found
- cub
- dmv
- excessively
- lake
- dried
- reporting
- smile
- changes
- charmin
- eternal
- smoked
- meat
- beanos
- processing
- chip
- logic
- insightbb
- highland
- terrace
- child
- peck
- midwest
- cardinal
- anthony
- barrack
- jancy
- thompson
- cassy
- gulls
- alternate
- sin
- dragons
- msnbc
- residential
- leader
- siblings
- pedro
- serendipitous
- bestbuy
- targets
- wawa
- mentions
- engagements
- hawaii
- jr
- applied
- halifax
- ahmedabad
- monty
- python
- stronomy
- blahblah
- blah
- arrivals
- subtract
- payoneer
- formal
- connors
- indranagar
- transform
- marcia
- perpetual
- arranging
- cvs
- callum
- steffi
- attention
- kanye
- mommy
- chucky
- forest
- polarized
- proposal
- conrad
- coldest
- hue
- dictator
- clancy
- geranium
- delays
- build
- lense
- rai
- transistor
- dildo
- warren
- exercises
- forman
- kinley
- bottle
- retail
- yan
- regal
- unprofessional
- annual
- payday
- tricep
- arts
- ripped
- vietnam
- trends
- chaise
- preparation
- nestle
- paula
- deen's
- bmw
- microsoft's
- bookstore
- below
- moving
- pretty
- lock
- administrator
- edition
- airways
- marvel
- garner's
- rubix
- cube
- kfc
- milwaukee
- pager
- alexander
- gilchrist
- goods
- performing
- unopened
- security
- chain
- probiotic
- colleague
- knowing
- novel
- fiesta
- comcasts
- acer
- farmers
- fraud
- weighing
- india's
- gotse
- grapefruit
- similar
- tmobile
- nifty
- sessions
- recital
- greatest
- openings
- zip
- demento
- fatigued
- disease
- prevention
- overcharged
- unquote
- cotton
- tweeter
- railways
- flipkart
- fist
- renee
- nutritional
- starred
- calculated
- mattress
- hillstead
- paul's
- jill's
- disregard
- pesto
- stinks
- nobody
- behind
- kid
- nature
- ounces
- ted
- boiled
- dancom
- wars
- fmod
- span
- along
- malls
- joining
- frequently
- realdonaldtrump
- bobby
- mcgee
- pwd
- obamacare
- clicked
- falling
- pampers
- virgin
- hayden
- pat
- amie
- infosys
- technologies
- roads
- aerosmith
- airtel
- dairy
- sends
- dues
- tobytoday
- ileana
- d'cruz
- rended
- taj
- ashok
- typhoon
- rama
- final
- missouri
- virginia
- announce
- haughty
- salmon
- joking
- goodnight
- rebecca
- believe
- vowels
- ban
- haze
- insight
- cable's
- fellow
- tweeters
- canoe
- warriors
- assassinated
- acceleration
- detailed
- wife's
- robert's
- angus
- interested
- jen's
- sjobs
- cdn
- ruth
- simran
- aapa
- kadai
- armor
- sms
- indefatigable
- indicate
- fra
- floors
- modcloth
- honor
- weigh
- priority
- hiking
- smoky
- judawa
- expense
- deals
- plethora
- sam's
- august
- elain
- bbq
- leap
- congressional
- representatives
- voting
- reproductive
- ge
- bbb
- contacted
- assigned
- jill
- drafts
- scoring
- touches
- relevance
- goggins
- medvesek
- philippiness
- booked
- board
- locality
- beth
- katey
- fans
- approximately
- charitable
- rae
- darker
- anymore
- printing
- significance
- fondle
- mate
- larry's
- larrylarry
- faripir
- gurpur
- seasons
- softball
- refreshments
- jamie
- carrie
- underwood
- abdul
- kalam
- subterranean
- colombo
- sri
- lanka
- quit
- dollar's
- award
- among
- spouse
- forgot
- ass
- millionaire
- indians
- americas
- julie's
- transcribe
- garbage
- geographics
- tree
- criticize
- tanzania
- heather's
- answering
- spam
- phishing
- reseda
- axel
- kailey
- prettiest
- century
- mattel
- toys
- grateful
- fixing
- maidan
- sophia
- betty
- reasons
- russian
- applicable
- loving
- claire
- crashed
- batteries
- philips
- person's
- compile
- ali
- matthews
- apologize
- comcastcom
- luke
- jean's
- carefully
- beg
- trying
- flooringco
- seams
- baking
- skiing
- calming
- continuously
- tale
- roraima
- innova
- bowling
- beginning
- identifier
- diverse
- santa
- continuous
- hangman
- vegetarian
- roast
- rewards
- allow
- immediately
- shelley
- hennessey
- waking
- dicaprio
- ways
- immigration
- raised
- lose
- digger
- cosmetic
- perth
- feet
- chick
- tornadoes
- upstairs
- badly
- timings
- lobster
- runner
- forum
- thunderstorms
- powered
- plugged
- rod
- mgccc
- bleed
- ga
- pune
- mixed
- dishes
- radisson
- cheetah
- what'sapp
- cm
- father's
- skill
- graham
- eggless
- collect
- favorited
- flag
- ssmith
- virtual
- bryant
- spots
- scapingyards
- washed
- springfield
- draw
- insurance
- quantity
- brightener
- cuba
- stream
- raincoat
- maiden
- soundtracks
- deliveroo
- humidity
- crowded
- built
- mesa
- rosenstock
- workpdf
- occurring
- environmental
- dbell
- converse
- radia
- logged
- scabble
- loads
- jacob
- hasbro
- aldi
- piramid
- completely
- method
- hems
- loose
- connect
- snapchats
- arizona
- festivals
- hospital
- peppers
- bowl
- korn
- lupe
- eurostar
- umf
- unchecked
- berlin
- lane
- synonyms
- hampshire
- shakira
- brads
- keanu
- reeves
- johns's
- increasing
- burgers
- stan
- falklands
- valley
- maria
- hangin
- glow
- we're
- newsource
- clark
- carrey
- jams
- crashing
- outback
- sugars
- defines
- joel
- venue
- huffington
- images
- elizabeth
- case
- agnes
- randomly
- mecky
- incredible
- even
- decreased
- vacations
- honey
- akon
- barbara
- handsome
- forensic
- spielberg
- korea
- coding
- achievements
- albert's
- clerk
- hopes
- zimbabwe
- buble
- research
- excel
- gun
- rogen
- resin
- tooth
- filling
- mody
- marinara
- vicki's
- mardi
- gras
- monika
- relatives
- chillin
- lol
- levis
- tricounty
- messy
- disgusted
- emoteck
- foroogh
- quick
- decline
- emailstudy
- atdfd
- giant
- trey
- kalka
- mcdo
- timestamp
- operate
- watched
- infinity
- tactics
- upbeat
- synonym
- racing
- towards
- fog
- muted
- coke
- eighties
- tvs
- theresa
- brent
- kamycka
- dejvicka
- tap
- peanut
- circumference
- saskatoon
- sync
- sofa
- mcdonald
- silenced
- catalogue
- algorithm
- sanctimonious
- talked
- realize
- reveca
- paok
- wipe
- bisque
- br
- rather
- silly
- stat
- tar
- vitamins
- gain
- xm
- fongs
- anywhere
- zanes
- se
- chronicles
- weber
- commence
- causes
- sangli
- german
- hedges
- truthdig
- coffees
- commuter
- plain
- mimo's
- oscar
- restrictions
- treasure
- louis
- stevenson
- fifa
- beast
- pav
- prambors
- hannah
- ringcast
- vegetable
- episodes
- overnight
- apps
- nathan
- dismiss
- karl
- hourly
- eyes
- breeds
- inside
- tribune
- join
- crabmeat
- shakira's
- yankee
- greenwich
- gala
- jump
- recall
- johnny
- cash
- pod
- cast
- rare
- suppose
- enjoyment
- emo
- nayagara
- passion
- pit
- marckel
- bohemian
- emma's
- arijit's
- pet
- prize
- receptionist's
- beat
- freds
- probles
- patagonia
- quart
- '?'
- zach
- duration
- jlo
- alphabetic
- phohouse
- badpho
- daybreak
- biryani
- battle
- divergent
- moby
- jungle
- jaiho
- casserole
- shooter
- columbine
- wednesdays
- soul
- accumulation
- squash
- calm
- debate
- schools
- amd's
- lee's
- managers
- myspace
- relaxing
- bahar
- antarctica
- atmosphere
- pinpoint
- payments
- illinois
- louisiana
- cfo
- pool
- vyas
- morel
- mysore
- rise
- sdfa
- newspaper
- calorie
- dangerous
- sunrise
- mostly
- dining
- shake
- flood
- prescription
- mix
- view
- jana
- spa
- comments
- pear
- factor
- clearance
- northern
- language
- arnold
- exxon
- mobil
- dragon
- fruit
- differences
- seashells
- seashore
- velocity
- motorolla
- haggis
- fiji
- irwin
- similarities
- hypertrophy
- sharukh
- implement
- kazakhstan
- mediterranean
- roman
- grigorean
- hardword
- quead
- amphibious
- roberts
- climatic
- tornado
- prone
- rising
- declining
- megatel
- denzel
- washington's
- citizens
- arm
- persos
- belarus
- gyllenhal
- geology
- helicopter
- iphone's
- drained
- manger
- navy
- daikin
- jerk
- nexus
- interaction
- platform
- tweeting
- at&t
- mahaboobsayyad
- kellogg
- ashmit
- ismail
- listing
- enalen
- projects
- clara
- clinic
- exams
- ammunition
- mark's
- divya
- jjnzt
- activation
- andy
- terry's
- brenden
- jeffrey
- burnette
- protests
- joshua
- pianist
- whiz
- schadenfraude
- rials
- storage
- bot
- provided
- massachusetts
- channin
- store's
- rump
- prior
- re
- intelligent
- recognise
- irobot
- areas
- lighter
- yell
- uses
- cn
- gadgets
- skynet
- marie
- lamb
- balcony
- nyt
- bennett
- ralph
- pda
- balloon
- maps
- degeneres
- character
- evans
- actor
- fitbit
- malika
- shivaji
- attitude
- lily's
- concerned
- upon
- startup
- stuffs
- tawa
- relative
- legacy
- cst
- leah
- remini
- mortgage
- amed
- cleaners
- seal
- abita
- grammar
- backdoor
- minimize
- leisure
- billie
- spicy
- training
- comfortably
- sunburn
- minneapolis
- habits
- braking
- notifier
- swan
- thoughts
- pleasure
- those
- kashmirstart
- sells
- i'dl
- kettle
- 'false'
- rta
- valia's
- visiting
- techno
- mornings
- mow
- cbs
- slightly
- francine
- vice
- postpone
- mins
- xyz
- hwood
- kept
- spider
- reopen
- billy
- connery's
- eiffel
- itinerary
- crash
- valentine's
- likexchange
- divorce
- danville
- il
- government
- menus
- capabara
- origin
- assistance
- vicinity
- chit
- drinks
- flabbergasted
- xy
- self
- double
- castle
- refrigerator
- bakery
- spray
- pyramids
- bio
- basic
- humans
- schwarzenegger
- inchoate
- rules
- caftan
- raleigh
- hobby
- ajay
- devgn
- corden
- aud
- prevailing
- kenny's
- crew
- aww
- spying
- employer
- thier
- juanpedro
- craig
- leon's
- looked
- players
- costs
- providers
- sydney
- documentary
- hyphen
- represent
- strings
- pianos
- acoustical
- celeb
- pong
- linear
- turn_down
- reaches
- strength
- routine
- billboard
- piano
- ed
- sheeran
- diet
- vietnamese
- yams
- grandmother's
- rihana
- require
- stressed
- option
- affected
- acquire
- retrieve
- clarion
- congress
- turiellos
- mates
- solar
- dice
- jalapenos
- wished
- painting
- therapy
- warehouse
- mop
- neighbor
- flappy
- returns
- someones
- spring
- wonton
- moves
- jagger
- fishing
- hiphop
- dunkin
- donut
- atlantic
- daughters
- hula
- hoop
- lessons
- scrote's
- indie
- grief
- lebron
- naughty
- preprogrammed
- alt
- needy
- sharpen
- butcher
- knife
- pulled
- starbuck's
- backward
- terrorist
- invaders
- parent
- crescent
- brewhouse
- prado
- science
- playlists
- debbie's
- sleeping
- searched
- lindsey
- lohan
- competitions
- subtracting
- challenge
- beer
- gainers
- chili's
- frubs
- police
- softly
- practical
- assessment
- bonefish
- rotating
- placed
- lakers
- barenaked
- ladies
- lord
- rings
- mar
- sneakers
- artists
- sanantha
- shuffles
- shuffled
- bardonia
- county
- analyze
- pattern
- girls
- league
- fjords
- nothing
- brewing
- smurfs
- tommy's
- lovin
- cottage
- ming
- photosynthesis
- danny's
- repeated
- peaceful
- migrations
- zydeco
- inkheart
- seller
- occurence
- telegraph
- invited
- wifi
- levels
- willie
- nelson
- dolores
- alter
- retirement
- professional
- development
- sainsburys
- byron's
- floyd
- raingear
- notorious
- bone
- explanation
- database
- likely
- lucky
- irish
- sshow
- ramsey
- aired
- sprint
- preparing
- academy
- yeshudas
- angels
- dancing
- aretha
- franklin's
- layers
- glass
- kuch
- hai
- wakey
- knitting
- mujhe
- feb
- king's
- malinda
- parents
- mirchi
- gallon
- seen
- parks
- safest
- evacuation
- beautiful
- sofia
- francs
- consequences
- various
- dicaprio's
- networth
- phelps
- disk
- constructed
- concern
- effectively
- lawrence
- zac
- galifrankas
- wheat
- prediction
- schemes
- mega
- capricorns
- dinky
- lanegan's
- princess
- pregnant
- smallest
- americans
- retweet
- insta
- sonys
- bk
- alzacz
- kohls
- cleanliness
- pizzahut
- delay
- lpg
- satisfied
- choke
- suqcom
- repairs
- killing
- miller
- budgets
- iamironman
- gbaby
- gma
- loves
- kate's
- margaret
- ben's
- brady
- palmer
- homework
- tax
- regional
- archive
- fitness
- vault
- footloose
- child's
- damage
- petco
- canceled
- passing
- pikes
- peak
- avatar
- diverge
- maron
- fault
- sword
- eventual
- contest
- dangal
- mauritania
- abs
- wondering
- southampton
- resources
- soy
- lexmark's
- hilly
- lyon
- beirut
- tribute
- madrid
- ate
- sweat
- charlize
- theron
- atif
- aslam
- capture
- actual
- shane
- dawson
- zedd
- snooker
- loquaciousness
- sholay
- tofu
- nightmare
- avenged
- sevenfold
- matters
- prompt
- panic
- brilliant
- boston's
- mckinleyville
- astrology
- strait
- countdown
- cats
- fruits
- embassy
- pita
- gyros
- negotiations
- hairdresser
- courteous
- enthusiastic
- funk
- sense
- heathens
- cabinet
- irctc
- stored
- shutoff
- glasses
- ella
- fitzgerald
- rover's
- vet
- polar
- bears
- oceanside
- medicine
- anita
- barrow
- burrito
- oliver
- covering
- ground
- zucchini
- textile
- antebellum
- chimes
- covington
- species
- bees
- cranston
- kilometer
- behaved
- rudely
- jimi
- hendrix
- calms
- outwards
- califonia
- composed
- hint
- shipping
- frosting
- sport
- napoleon
- hill
- athens
- middletown
- shirts
- sample
- politician
- investigated
- rapper
- con
- cuisine
- wizard
- brick
- conroe
- iterate
- architect
- salon
- babaji
- passed
- maryland
- surya
- monopoly
- avenue
- considering
- celebration
- brewed
- galoshes
- tutorials
- workouts
- millenium
- toward
- neighbourhood
- bannon
- storming
- reoccurring
- longtime
- sweetheart
- memos
- starfish
- centaur
- philippines
- oar
- departs
- preferably
- latte
- sides
- pentagon
- fashioned
- rescheduled
- transportation
- twins
- duker
- deadline
- samurai
- obaba
- bp
- ambiance
- automatically
- object's
- boost
- morale
- jogging
- spell
- firefly
- mura
- masa
- checklist
- biographies
- sucked
- congested
- avinash
- commando
- jolie's
- instrumentals
- clarksville
- tablespoons
- surveys
- flour
- acela
- calone
- bucket
- fulls
- valid
- references
- critical
- perpetuate
- luncheon
- ohm's
- values
- plying
- expectations
- musician
- mindsweper
- throughout
- noontime
- included
- tour's
- voted
- walgreens
- chickens
- monday's
- crankshaft
- surfer
- lunchtime
- skramz
- compounds
- diabetes
- might
- reservation
- homosapien
- engadget
- boeing
- brisbane
- ear
- headphones
- minimum
- worry
- snowplows
- burying
- driveway
- adapt
- destroy
- impanema
- equipment
- turnt
- attractive
- conducted
- cinnamon
- freshener
- watsapp
- bean
- awfully
- entitled
- murderer
- ford
- forties
- scenery
- morocco
- sf
- blokus
- preacher
- taken
- stormy
- centers
- ethics
- popup
- mysterious
- puts
- stage
- considerations
- lourie
- artic
- scoop
- carion
- merced
- bypass
- passwords
- quantico
- grade
- examples
- cuisines
- hibernate
- bear
- published
- authors
- tempo
- keidis
- tidal
- cookoff
- zones
- probable
- summerfest
- dogs
- aren't
- necessarily
- carolina
- eleventh
- chilling
- sleeve
- invoking
- term
- herald
- maria's
- poltergeist
- imagine
- uv
- index
- johncena
- instruct
- oscillate
- liter
- nelly
- shawarma
- baster
- pali
- vilnius
- tabs
- debates
- singers
- activated
- ozzy
- osbourne
- danish
- happypeoplecom
- accounting
- backpack
- im
- puttanesca
- keeps
- worse
- wrigley
- braise
- loin
- carnatic
- bases
- nick
- swisher
- stolen
- clouds
- cleared
- bola's
- norman
- reedus
- screwdriver
- window
- volcanoes
- rowan
- atkinson
- minneapoliscity
- delicacies
- monitor
- overall
- gymnastics
- channels
- kxly
- botswana
- enjoyable
- spectre
- chane
- decentralized
- men's
- freeze
- postal
- becomes
- ccn
- berth
- michigan
- composition
- shahi
- panner
- dakar
- jakarta
- equalizer
- weird
- barely
- rodriguez
- oklahoma
- giraffes
- margarita
- difficult
- crabs
- firework
- probability
- tools
- emigration
- legislation
- pdf
- cheeseburgers
- applications
- adopters
- priest
- walks
- mechanic
- h
- showers
- signs
- contrast
- recollect
- gm's
- duck
- beavers
- tail
- lucking
- horkersd
- wo
- myrtle
- hr
- steam
- entirety
- anirudh
- colored
- tropical
- bedrooms
- yellowish
- elephants
- expenses
- contents
- warmer
- royksopp
- etc
- progressives
- peoples
- cultures
- unset
- iceland
- mp
- mangalore
- tanya
- quad
- particulars
- insert
- tvf
- formidable
- origins
- eden
- depressed
- mc
- donalds
- rub
- regrets
- judgments
- scope
- intellectual
- capacity
- ahmadabad
- stethoscope
- superstitions
- rl
- stine
- quinoa
- martial
- smooth
- damn
- speeding
- stephen
- halley
- barry
- jealous
- siri's
- java
- scenarios
- pc
- transfer
- tw
- agent
- nightime
- creamy
- mirch
- dil
- cannon
- cameras
- process
- merriam
- webster
- dubstep
- rangoon
- wines
- older
- navigate
- chandelier
- egs
- recognize
- subscriptions
- mileage
- studies
- microphone
- immigrant
- electronics
- careful
- paint
- fund
- success
- resolved
- bola
- eva's
- roller
- augusta
- midtown
- surprise
- children's
- dongle
- seashell
- bots
- fallen
- centimeters
- poisoning
- sci
- fi
- outcome
- reform
- sleepy
- moderate
- chrome
- ultraviolet
- george's
- geek
- courses
- rundown
- legend
- equipments
- usher
- manor
- advertisers
- clue
- depending
- strongest
- outstation
- fallout
- shoal
- lastfm
- relocate
- pollution
- awareness
- bryce
- jessie
- carol
- nsnbc
- vacuumed
- chives
- splits
- arbor
- receiving
- toast
- futures
- brokers
- routes
- fixed
- additional
- switches
- church's
- governor
- enacted
- grams
- guitarists
- android
- babe
- sonny
- sear
- eliminate
- remain
- uc
- polk
- pakistani
- bedside
- reshuffle
- frida
- devil's
- rusk
- actors
- pakistan
- happenings
- sit
- montauk
- beethoven
- legends
- sunshine
- mothers
- smoke
- feels
- rockies
- miamy
- operations
- addition
- subtraction
- incite
- annoying
- cristiano
- ronaldo
- spin
- cows
- jenny
- spread
- wallstreet
- selections
- nashik
- ipl
- oswald
- chambers
- horoscope
- mgk
- dog's
- residing
- cricketer
- dhoni
- byron
- fluctuations
- talks
- palermo
- shallowest
- bbcnews
- nsdl
- flights
- lineup
- stick
- ribs
- jeopardy
- timetables
- emi
- maya
- mackensie
- osteen
- jimmie's
- adjustments
- precocious
- fork
- husband's
- audi
- hibachi
- disputed
- crack
- visible
- boiling
- rogan
- karachi
- babysitter
- kidnapping
- hamburgers
- madonnas
- lessen
- ipo
- greenville
- carries
- creamed
- pickled
- herring
- tackle
- brush
- geyser
- savings
- torey
- hurt
- subscribe
- picks
- birthdate
- goals
- cairo
- projected
- patrick's
- capita
- honda
- intended
- hurriedly
- activates
- it'll
- wsj
- spy
- broods
- grommet
- steven's
- underground
- seahawks
- participants
- workday
- ammi
- nightlife
- donner
- summit
- ukraine's
- ended
- arrangements
- altucher's
- writer
- fortune
- brisket
- grant
- audiobooks
- twilight
- bass
- hunger
- roses
- barbecue
- tuna
- deadly
- killers
- finally
- trilogy
- grisham
- goblet
- roadblocks
- birthday's
- biscuits
- lawyers
- steve's
- kari
- labyrinth
- commonwealth
- sharma
- gulf
- petrol
- earthly
- ultimate
- ending
- allison
- canberra
- honolulu
- flash
- salman
- gresham
- hindustani
- stroganoff
- sock
- creates
- geo
- traits
- moral
- rein
- blood
- slayer
- pro
- bono
- succinct
- dalls
- somethings
- sharp
- izzo
- whiny
- bitch
- macaroni
- nights
- jumper
- blind
- cure
- cancer
- vibrant
- sloth
- transition
- recycling
- bbc's
- columbia
- kentucky
- hire
- opera
- prefer
- avoid
- sort
- comedy
- compassionate
- nc
- va
- riddles
- segment
- youth
- charity
- surrounding
- punjabi
- sharply
- lovett
- barber
- label
- hypocrisy
- subscriber
- captain
- disillusion
- hyderabad
- dashboard
- storm
- barrel
- panasonic
- clinton
- canasta
- mittens
- badra
- amit
- trivedi
- crystal
- lewis's
- everywhere
- rue
- evaporated
- mma
- offered
- tutoring
- peas
- dream
- cafes
- lauderdale
- deletion
- precise
- parliamentary
- remotely
- connection
- calendars
- stupidest
- shovel
- western
- cutting
- ll
- rapping
- spelling
- mama
- tatum's
- fulton
- universal
- garner
- chill
- icebo
- college's
- rehman
- soundcloud
- scorecards
- ketchup
- jimmy's
- crate
- lexmark
- preference
- females
- federal
- andreas
- sportsnet
- favourites
- janice
- bins
- pamela
- covered
- rhapsody
- italian's
- ke
- panera
- remainders
- tandoori
- sukhwinder
- sunidhi
- etymology
- googleplex
- slide
- wearing
- trivial
- pursuit
- cancels
- martina
- mcbride
- finances
- vocab
- zipcode
- compaq
- composer
- margarine
- jonathan
- entrepreneur
- extended
- combo
- memories
- tupac
- affects
- drunks
- ford's
- liked
- dealership
- olky
- realtor
- thighs
- ourselves
- economics
- medication
- gross
- domestic
- donaldson
- prostate
- wicker
- rooms
- instrumental
- savannah
- outing
- affleck
- quotes
- tire
- montana
- exhausted
- acoustic
- commercials
- convenience
- consciousness
- serge
- gainsbourg
- windows
- turks
- generate
- pedicures
- btaxes
- departures
- frasier
- amazon's
- bluetooth
- verus
- neat
- forecasted
- bing's
- dropped
- recurrent
- candidate
- aware
- blackeyed
- pees
- prince's
- perimeter
- rectangle
- aaron
- carter
- involve
- drugs
- lighten
- slicker
- rains
- cloud
- carrot
- popcorn
- carmike
- cinemas
- greater
- minestart
- frog
- lenon
- unique
- hanging
- hung
- sporty
- seldom
- jocko's
- kid's
- viewers
- cantonese
- usage
- specs
- bugatti
- veyron
- chief
- blockbuster
- krishnarajpuram
- interstate
- hammers
- obligatory
- wonder
- southeast
- marlon
- brando
- ferrel
- tal
- obidallah
- manoeuvres
- merita
- rotate
- changs
- pepsi
- shanghai
- branden
- wind
- landmarks
- dvr
- congestion
- valentines
- eastwind
- lomaine
- geneva
- officially
- hopkins
- takjistan
- dimmer
- karo
- apne
- aur
- karna
- chahta
- hu
- purchased
- otherplace
- giraffe
- ute
- requirement
- watts
- powerful
- bulb
- oclock
- nba
- hulu
- composing
- melissas
- millilitres
- spoons
- goulash
- thor
- harischand
- mg
- i95
- sb
- kilo
- diana
- llyod
- webber
- wool
- penultimate
- bang
- philosophers
- nietzche
- focault
- profession
- kilograms
- turkeys
- bibulous
- angeline
- atm
- narwhal
- kilamanjaro
- captia
- volkswagen
- onkyo
- av
- receiver
- ipad
- aniston's
- summarize
- ice
- jindel
- pump
- nikki
- minaj
- nationality
- snoodle
- yemen
- sudan
- unprompted
- organization
- megan
- fares
- engage
- functioning
- dinar
- conservative
- korean
- sahara
- kingdom
- antartica
- telugu
- tamil
- tsunami
- rajani
- khanth
- venture
- goalkeeper
- dushambe
- abrupt
- hbo
- sopranos
- parana
- cave
- anime
- posters
- johny
- depp
- invisible
- graphical
- joli
- pricing
- beech
- nuclear
- triad
- hilton
- borders
- lucille
- redhead
- geraldine
- ferraro
- bde
- lowered
- phrases
- nicole
- mcgoat's
- manipulate
- roip
- nasa
- google's
- davy
- crockett
- springsteen's
- richest
- costliest
- easily
- gm
- psso
- kroner
- maple
- trees
- christie
- brinkley
- libraries
- gmb
- key
- mongolia
- anastasia
- telekenesis
- promise
- stray
- cruise's
- starring
- odyssey
- polish
- zloty
- hook
- ups
- integral
- exponential
- berkshire
- hathaway
- tables
- pink's
- alligator
- porto
- tommy
- hilfiger
- print
- networks
- snaps
- celebrate
- bina
- yay
- smiley
- emoticon
- commented
- folgers
- hathway
- huge
- lfi
- tagged
- treated
- hersheys
- aircel
- nastyburger
- linkedin
- tracy
- waiter
- drain
- charge
- neptunal
- poorly
- waited
- inappropriate
- potus
- accounts
- vodafone
- complaining
- spoiled
- positive
- tumblr
- unpleasant
- overpricing
- cheating
- connected
- else's
- greetings
- thought
- waste
- excess
- micro
- lodge
- snapdeal
- sonic
- hole
- sole
- patel's
- insect
- packet
- elsewhere
- moan
- easyjet
- snotty
- expired
- xl
- sizes
- filing
- applebee's
- angela
- merkel
- swagging
- moto
- sluggish
- flavia
- mum
- jacob's
- existing
- cannot
- pleas
- mahmoud
- ebay
- smsayyad1985
- kishore17051985
- fedex
- truette
- petey's
- tessa
- gaurav
- karen
- mongomery
- llc
- joseph
- turnpike
- accumulated
- deadlines
- fees
- ppt
- emergency
- missing
- carl's
- attach
- physical
- drill
- marilyn
- jugal
- here's
- bug
- sarasigmon123
- lindafancy55
- markpolomm
- gary's
- mailing
- bill's
- erins
- beth's
- wont
- stacy
- cadwell
- tori
- aloud
- brenda
- thisome
- smurfette
- smithjoe
- hwacuk
- chong
- giselle
- bosses
- havent
- frieda's
- jjjindia
- exists
- batch
- samuelwaters
- joose
- hellen
- builders
- accepted
- victor
- taxi's
- terry
- macdonald
- yahoocom
- metion
- rodger
- christy's
- otp
- jayesh
- tried
- morgan's
- office's
- rob
- qerwerq
- secured
- gerry
- raj's
- junable
- shopyourway
- reference
- jhonny's
- marissa
- rosa
- bert
- ana
- goddammit
- pronounce
- serious
- recheck
- slowly
- failed
- fuck
- executed
- clearly
- errors
- showed
- races
- thursdays
- funky
- handmaid's
- beam
- scotty
- debit
- wiki
- editor's
- automobiles
- promo
- discount
- director
- act
- bejeweled
- aside
- snakes
- ladders
- marsala
- influx
- bayou
- reasonably
- tapas
- az
- ddlj
- meatball
- newscast
- bibber
- tmz
- devon
- applebees
- hihop
- doggie
- feelings
- radios
- litle
- tsos
- congratulate
- links
- treble
- flame
- eta
- encourage
- students
- choices
- lobby
- vf
- chore
- butterfly
- clips
- urban
- regular
- bi-weekly
- baltimore
- sport's
- breakups
- dale's
- brea
- douglasville
- fundraiser
- dolphines
- maradona
- pe
- becky
- appointed
- deputy
- utar
- pradesh
- anniston
- handy
- sainsbury's
- attenuate
- parcel
- jakes
- bristo
- stressful
- deposit
- mathematical
- superstar
- survivor
- destiny's
- westcombe
- facility
- oboe
- mcnamara
- abolish
- swim
- repair
- grub
- hub
- ill
- dec
- dreams
- wyatts
- obstacle
- poach
- dental
- rose
- davinci
- trevor
- noah
- ncaa
- entrapreneur
- sanam
- differs
- ave
- hopsin
- enya
- wbc
- accordingly
- remarks
- sufi
- beibers
- arrested
- sensor
- music's
- author
- antwerp
- cnn's
- foodnetworkcom
- customize
- preferred
- unable
- duct
- tape
- gooseto
- apig
- ringer
- secure
- passage
- tomatoes
- wan
- senelena
- americano
- makeup
- robotics
- teleconference
- robotic
- poughkeepsie
- steel
- day's
- soundtrack
- tobymac
- transit
- gloria
- furious
- nazi
- hunting
- effect
- marvin
- gaye
- pasadena
- ca
- constrain
- singles
- outer
- nowhereville
- comfortable
- erica
- grebe
- wooly
- trigonametry
- obsessed
- graphics
- undone
- tough
- treasury
- toledo
- munich
- obtain
- nutritionally
- balanced
- internal
- locks
- exit
- mocking
- lyft
- transaction
- tasty
- mixture
- according
- hands
- supports
- canceling
- congressman's
- lenin
- spagetti
- controversial
- statements
- walker
- humor
- nkotb
- jon
- snow's
- possibility
- wellington
- nz
- advantages
- disadvantages
- driver
- towels
- stretch
- gear
- joey
- crimson
- chose
- pineapple
- asparagus
- teaspoons
- bling
- medieval
- engines
- foods
- hurts
- cannibal
- tonic
- bitcoin
- collection
- hidden
- figures
- brasil
- politic
- superb
- dalida
- capuccino
- analysts
- thankama
- kodaikanal
- vote
- burritto
- chipolte
- abut
- sedaka
- chamber
- rfi
- knock
- cnncom
- remchi
- fl
- ortcars
- flip
- wire
- thriller
- fiasco
- breaks
- dam
- paradise
- presidency
- sigur
- ros
- socks
- van
- halen
- wayne
- spare
- lightness
- appropriately
- both
- musics
- coastal
- cry
- friend's
- wore
- veganism
- picnic
- regent
- visited
- therapist
- inauguration
- swatishs
- dorothy
- known
- supervision
- superbowl
- eric's
- bday
- kar
- abhi
- achche
- ache
- rahe
- honge
- mhz
- sponge
- bistros
- brownies
- tenderloin
- enchiladas
- gluten
- hotdog
- row
- bing
- notebook
- pulldown
- clearer
- medford
- drivers
- waverley
- canal
- connecting
- summers
- gibraltar
- monoprice
- mxblue
- mechanical
- turbulence
- carey
- blunder
- factorial
- depends
- commands
- stand
- draymond
- susumu
- hirasawa
- yosemite
- '200'
- baguette
- stonehenge
- douriff
- ivf
- ivr
- litt
- runs
- hesitant
- crock
- guetta
- malaysia
- whelers
- sadness
- william
- coral
- daft
- punk
- sandle
- santha
- ingerman
- calc
- shibaru
- alcohols
- nano
- gina
- desta
- mgmt
- bana
- talking
- garvin
- trilly
- nytimes
- chhana
- mereya
- favor
- strained
- cooler
- films
- einstein's
- aroma
- ska
- raphsody
- trebuchet
- forth
- relate
- qualifications
- kirk
- franklin
- arithmetic
- skyfall
- bathrooms
- raghu
- dixit
- reports
- availables
- haddock
- odd
- cape
- cod
- noisy
- dull
- hackernews
- porn
- pad
- fight
- fighter
- nzd
- melodious
- burton
- helena
- campaign
- mcclanahan
- mummy's
- motown
- rasgulla
- janta
- pvt
- ltd
- heartthrob
- justin's
- velociraptor
- hippo
- senatra
- giggle
- peru
- nirvana
- anirudh's
- retro
- mf
- doom
- summarise
- ariana
- grande
- predicted
- creed
- user
- desire
- kenny
- roger
- sia's
- thrills
- wapo
- stockholm
- okinawa
- occasionally
- shuffling
- veggie
- mukkala
- mukkabilla
- guardian
- anytime
- themes
- horror
- ennema
- eatha
- homestead
- forever
- mayor's
- stance
- council
- master
- louies
- keane's
- fears
- noe
- reggae
- largo
- swiftm
- afi's
- xinhua
- dedicated
- bottom
- franks
- yelawolf
- ucl
- flop
- grammys
- espn
- joni
- mitchell
- shot
- tequila
- sleepyhead
- aces
- redder
- edms
- lamp's
- loudest
- brolly
- thao
- nguyen
- interior
- dine
- dogwalking
- nytimescom
- overcast
- deactive
- foo
- disasters
- opacity
- dea
- guam
- drug
- abuse
- itzhak
- perlman
- drawing
- sweden
- bombing
- ireland
- poll
- hotha
- defrosting
- salt
- toggle
- spb
- weatherit
- either
- forecasts
- intellicast
- weathercom
- orevena
- recorder
- pizzahouse
- reorganize
- sticky
- umbrellas
- opened
- cleaned
- shakin
- bakey
- tips
- hypoallergenic
- sarcastic
- cheat
- ii
- developers
- edg
- yaad
- dilana
- kahin
- samantha's
- rita's
- adding
- bro's
- attendees
- maggie
- valet
- groomer
- timeframe
- pete
- faculty
- parade
- greens
- jack's
- walter
- gemma
- nail
- arora's
- namkeen
- tonights
- ggg
- tie
- iheartradio
- rov
- javan
- wfrn
- kicks
- osteen's
- wgrr
- lite
- prairie
- companion
- palhunik
- pudding
- tutorial
- welsh
- rarebit
- oatmeal
- pathia
- achieve
- veg
- pulav
- crockpot
- prepared
- keno
- pinball
- fishdom
- nfs
- harvest
- crops
- farmvile
- millionaires
- vodka
- depend
- pon
- stationary
- mad
- errands
- paav
- queried
- pepper
- rowling
- shadi
- viewed
- mlb
- heavyweight
- citadel
- scene
- circus
- trolls
- grab
- kung
- fu
- bowery
- railway
- coach
- fare
- metrolink
- navigation
- westwood
- layfayette
- inconvenience
- emotions
- arrahman
- cosmos
- multiplied
- abouts
- hitting
- eliot's
- el
- ribbons
- sperm
- whale
- eaten
- lbs
- pinhead
- timeliness
- defining
- thesaurus
- penalty
- approval
- poetry
- ambulance
- jello
- shots
- ferrell
- stassi
- schroedder's
- tacobell
- hierophant
- zealand
- stockton
- emissions
- blowing
- kennedy
- ziggurat
- gagas
- gretszky
- hemingway
- pages
- earn
- nobel
- actions
- sloths
- parton's
- madagascar
- acting
- tiangle
- trebuchets
- googs
- gandhiji
- amal
- brazil
- adviser
- rich
- acted
- rihanas
- stamp
- mugy
- msn
- busdriver
- fergie
- flick
- ribons
- nakumuka
- postmates
- complaintum
- glinder
- gta
- rcg
- outlet
- hadock
- mclanahan
- coal
- mumy's
- piza
- wheelers
- guarante
- debugging
- debuging
- proper
- sung
- bilando
- terrorism
- cover
- dimmed
- vanilli
- marauthr
- wooo
- michael's
- shutdown
- pittsburgh
- precipitation
- riff
- portland
- muggy
- giants
- banks
- steelz
- ensure
- ricky
- matin
- tyres
- plant
- chased
- advice
- gossiping
- society
- mitushree
- hairdresser's
- biology
- fsu
- reflect
- yashas
- vinay
- vally
- closed
- shoutcast
- pilkington
- soda
- powder
- sambar
- cookingforu
- thermonuclear
- battleship
- cereal
- wishlist
- wrist
- hipsterhood
- duncan
- trussel's
- simmons
- wide
- cisco
- crafts
- sporting
- presently
- sheffield
- septa
- lead
- fransisco
- washingdon
- evolution
- mariah
- kya
- tum
- mere
- karne
- karoge
- acts
- assembly
- idle
- brand
- meridian
- terranova
- guarantee
- marian
- fields
- farthest
- philippine
- cambodia
- situated
- foruget
- monopricechanical
- peenth
- moroco
- piz
- tre
- supplwn
- viki
- shivle
- loged
- applebe
- acess
- madagar
- anp
- socer
- subcribe
- pluged
- imigration
- audiowan
- debie's
- imediately
- f
- locar
- duark
- rebeca
- talle
- banas
- ragh
- acordingly
- wakely
- en
- bress
- acording
- stefanan
- puding
- vegie
- vius
- edie
- domizza
- eg
- cheeseiza
- ocurred
- brightnes
- alaba
- memory
- fransico
- sunderland
- boogie
- butt
- leviathan
- shinning
- premier
- cleanup
- wacky
- aman
- cherry
- bomb
- solstice
- silently
- closet
- nakumukka
- shed
- responses
- yankees
- investigation
- dooa
- pieces
- imogen
- heap
- stole
- dynamite
- cease
- operating
- rained
- uptown
- suggestion
- finlee's
- bedtime
- sockets
- sanfranscio
- abbas
- cn's
- vibrate
- cooling
- sheriffs
- hike
- ilayaraja
- speaking
- un
- storms
- roof
- tube
- jackpot
- classmates
- extremely
- somewhere
- drenched
- sentient
- budy
- heating
- apt
- parenting
- concerning
- seo
- searches
- sticking
- patterns
- numbered
- impression
- reunion
- presents
- mehta
- willing
- discuss
- evan
- parker
- violin
- lesson
- musicworkz
- registration
- opens
- evening's
- thursday's
- nineteenth's
- hayathis
- shower
- corresponding
- showcase
- famosa
- kamp
- neal
- brenan
- gx
- nonstop
- rm
- giver
- traveller
- knowledge
- crispy
- supper
- broil
- noodle
- stuffed
- maccoroni
- almond
- clash
- clans
- ping
- keeper
- enemy
- coc
- detergent
- corn
- dill
- pickles
- ranch
- dressing
- lentils
- translate
- toothpaste
- rearrange
- groups
- santana
- pritzker
- winners
- libertarian
- mc's
- vitaly
- nfl
- mythical
- oriented
- provisional
- experiences
- safely
- themselves
- mia
- reducing
- learly
- court
- vin
- diesel
- netbooks
- chinatown
- aberdeen
- queens
- luni
- purchasing
- timing
- bagmati
- narrow
- egypt
- represented
- revelation
- britain
- aamir
- priyanka
- middleton
- base
- original
- nhl
- goal
- scorers
- osteoperosis
- laws
- correlation
- motivation
- ncaaa
- tense
- touring
- framework
- adel
- diamond
- schwarzenegger's
- stomachs
- cow
- chairs
- steph
- subjegant
- pategonia
- michelle
- todlers
- stakes
- tinder
- matches
- fjord
- equator
- triumph
- hell
- moldova
- presley's
- wa
- rajinikanth
- basalt
- bali
- airplane
- hash
- lit
- <sos/eos>
two_pass: false
pre_postencoder_norm: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
deliberationencoder: conformer
deliberationencoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: linear
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
postdecoder: hugging_face_transformers
postdecoder_conf:
model_name_or_path: bert-base-cased
output_size: 512
required:
- output_dir
- token_list
version: '202207'
distributed: false
```
</details>
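
The configuration above describes a Conformer encoder with a Transformer decoder and a BERT (`bert-base-cased`) post-decoder, decoding over the vocabulary given in `token_list`. As a rough, hedged illustration only, the snippet below sketches how an ESPnet2 model packaged like this is typically loaded for single-utterance inference. The generic `Speech2Text` API is an assumption here (the exact entry point for this SLU recipe may differ), and `"<model-tag>"`, the beam size, and the CTC weight are placeholder values rather than settings taken from this card.

```python
# Hedged sketch: loading an ESPnet2 model for inference on one utterance.
# Assumptions (not taken from this card): the generic ASR Speech2Text API
# applies to this recipe, and "<model-tag>" is a placeholder model tag.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Download the packed model and build the inference wrapper
# (beam size and CTC weight are illustrative values only).
speech2text = Speech2Text.from_pretrained(
    "<model-tag>",   # placeholder: replace with the actual model tag
    beam_size=10,
    ctc_weight=0.3,
)

# Read a 16 kHz mono waveform and decode it; the first hypothesis is the
# best-scoring token sequence over the vocabulary listed in token_list above.
speech, rate = sf.read("sample.wav")
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```
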
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"BEAR",
"PDR"
] | Non_BioNLP |
aimlresearch2023/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF | aimlresearch2023 | sentence-similarity | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"arctic",
"snowflake-arctic-embed",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"base_model:Snowflake/snowflake-arctic-embed-m-v1.5",
"base_model:quantized:Snowflake/snowflake-arctic-embed-m-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,726,132,804,000 | 2024-09-12T09:20:08 | 7 | 0 | ---
base_model: Snowflake/snowflake-arctic-embed-m-v1.5
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
- llama-cpp
- gguf-my-repo
model-index:
- name: snowflake-arctic-embed-m-v1.5
results:
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 59.53000000000001
- type: map_at_1
value: 34.282000000000004
- type: map_at_10
value: 50.613
- type: map_at_100
value: 51.269
- type: map_at_1000
value: 51.271
- type: map_at_20
value: 51.158
- type: map_at_3
value: 45.626
- type: map_at_5
value: 48.638
- type: mrr_at_1
value: 34.92176386913229
- type: mrr_at_10
value: 50.856081645555406
- type: mrr_at_100
value: 51.510739437069034
- type: mrr_at_1000
value: 51.51299498830165
- type: mrr_at_20
value: 51.39987941081724
- type: mrr_at_3
value: 45.993361782835514
- type: mrr_at_5
value: 48.88098624940742
- type: nauc_map_at_1000_diff1
value: 10.628675774160785
- type: nauc_map_at_1000_max
value: -10.11742589992339
- type: nauc_map_at_1000_std
value: -18.29277379812427
- type: nauc_map_at_100_diff1
value: 10.63250240035489
- type: nauc_map_at_100_max
value: -10.112078786734363
- type: nauc_map_at_100_std
value: -18.288524872706834
- type: nauc_map_at_10_diff1
value: 10.476494913081712
- type: nauc_map_at_10_max
value: -9.890937746734037
- type: nauc_map_at_10_std
value: -18.279750514750443
- type: nauc_map_at_1_diff1
value: 14.549204048461151
- type: nauc_map_at_1_max
value: -12.230560087701225
- type: nauc_map_at_1_std
value: -19.469903650130362
- type: nauc_map_at_20_diff1
value: 10.586564571825674
- type: nauc_map_at_20_max
value: -10.00292720526217
- type: nauc_map_at_20_std
value: -18.258077347878064
- type: nauc_map_at_3_diff1
value: 10.378663968090372
- type: nauc_map_at_3_max
value: -10.458896171786185
- type: nauc_map_at_3_std
value: -18.38852760333766
- type: nauc_map_at_5_diff1
value: 10.235960275925581
- type: nauc_map_at_5_max
value: -10.239496080409058
- type: nauc_map_at_5_std
value: -18.817023479445886
- type: nauc_mrr_at_1000_diff1
value: 8.718212649575722
- type: nauc_mrr_at_1000_max
value: -10.81022794038691
- type: nauc_mrr_at_1000_std
value: -17.87669499555167
- type: nauc_mrr_at_100_diff1
value: 8.722174171165133
- type: nauc_mrr_at_100_max
value: -10.804840985713525
- type: nauc_mrr_at_100_std
value: -17.872487099359986
- type: nauc_mrr_at_10_diff1
value: 8.609421635870238
- type: nauc_mrr_at_10_max
value: -10.568644717548432
- type: nauc_mrr_at_10_std
value: -17.872968762635814
- type: nauc_mrr_at_1_diff1
value: 12.69590006263834
- type: nauc_mrr_at_1_max
value: -12.082056561238321
- type: nauc_mrr_at_1_std
value: -18.036424092186657
- type: nauc_mrr_at_20_diff1
value: 8.684842497970315
- type: nauc_mrr_at_20_max
value: -10.691578914627286
- type: nauc_mrr_at_20_std
value: -17.84350301434992
- type: nauc_mrr_at_3_diff1
value: 8.649761557556763
- type: nauc_mrr_at_3_max
value: -11.104694428047496
- type: nauc_mrr_at_3_std
value: -18.149917948370344
- type: nauc_mrr_at_5_diff1
value: 8.433489750038396
- type: nauc_mrr_at_5_max
value: -10.917772454397436
- type: nauc_mrr_at_5_std
value: -18.4094211134111
- type: nauc_ndcg_at_1000_diff1
value: 10.19041067807956
- type: nauc_ndcg_at_1000_max
value: -9.54328201605796
- type: nauc_ndcg_at_1000_std
value: -17.824620427456633
- type: nauc_ndcg_at_100_diff1
value: 10.289491087585963
- type: nauc_ndcg_at_100_max
value: -9.357214331420337
- type: nauc_ndcg_at_100_std
value: -17.657600653632873
- type: nauc_ndcg_at_10_diff1
value: 9.435530877596092
- type: nauc_ndcg_at_10_max
value: -8.182581635383546
- type: nauc_ndcg_at_10_std
value: -17.603156479980388
- type: nauc_ndcg_at_1_diff1
value: 14.549204048461151
- type: nauc_ndcg_at_1_max
value: -12.230560087701225
- type: nauc_ndcg_at_1_std
value: -19.469903650130362
- type: nauc_ndcg_at_20_diff1
value: 9.885227087275197
- type: nauc_ndcg_at_20_max
value: -8.52362662391439
- type: nauc_ndcg_at_20_std
value: -17.441705436231764
- type: nauc_ndcg_at_3_diff1
value: 9.22542769998547
- type: nauc_ndcg_at_3_max
value: -9.903590564219288
- type: nauc_ndcg_at_3_std
value: -18.357220221111593
- type: nauc_ndcg_at_5_diff1
value: 8.8756720745828
- type: nauc_ndcg_at_5_max
value: -9.269764943861245
- type: nauc_ndcg_at_5_std
value: -19.009229433187784
- type: nauc_precision_at_1000_diff1
value: 3.733355117431035
- type: nauc_precision_at_1000_max
value: 3.9603571352517393
- type: nauc_precision_at_1000_std
value: 70.07345061131439
- type: nauc_precision_at_100_diff1
value: 29.019032142462457
- type: nauc_precision_at_100_max
value: 40.75153328286103
- type: nauc_precision_at_100_std
value: 62.634249549126594
- type: nauc_precision_at_10_diff1
value: 2.5762677254910353
- type: nauc_precision_at_10_max
value: 6.096298633773051
- type: nauc_precision_at_10_std
value: -11.507400451348587
- type: nauc_precision_at_1_diff1
value: 14.549204048461151
- type: nauc_precision_at_1_max
value: -12.230560087701225
- type: nauc_precision_at_1_std
value: -19.469903650130362
- type: nauc_precision_at_20_diff1
value: 1.715540124567996
- type: nauc_precision_at_20_max
value: 21.53546453945913
- type: nauc_precision_at_20_std
value: 1.537961142195571
- type: nauc_precision_at_3_diff1
value: 5.701850652555737
- type: nauc_precision_at_3_max
value: -8.180345365085552
- type: nauc_precision_at_3_std
value: -18.37033750502482
- type: nauc_precision_at_5_diff1
value: 3.6053552181042843
- type: nauc_precision_at_5_max
value: -5.207647070615612
- type: nauc_precision_at_5_std
value: -19.89491085427258
- type: nauc_recall_at_1000_diff1
value: 3.733355117431255
- type: nauc_recall_at_1000_max
value: 3.9603571352482194
- type: nauc_recall_at_1000_std
value: 70.07345061131205
- type: nauc_recall_at_100_diff1
value: 29.01903214246288
- type: nauc_recall_at_100_max
value: 40.7515332828621
- type: nauc_recall_at_100_std
value: 62.63424954912607
- type: nauc_recall_at_10_diff1
value: 2.5762677254911988
- type: nauc_recall_at_10_max
value: 6.0962986337729905
- type: nauc_recall_at_10_std
value: -11.507400451348577
- type: nauc_recall_at_1_diff1
value: 14.549204048461151
- type: nauc_recall_at_1_max
value: -12.230560087701225
- type: nauc_recall_at_1_std
value: -19.469903650130362
- type: nauc_recall_at_20_diff1
value: 1.7155401245682675
- type: nauc_recall_at_20_max
value: 21.535464539459632
- type: nauc_recall_at_20_std
value: 1.5379611421957025
- type: nauc_recall_at_3_diff1
value: 5.7018506525557875
- type: nauc_recall_at_3_max
value: -8.180345365085538
- type: nauc_recall_at_3_std
value: -18.370337505024796
- type: nauc_recall_at_5_diff1
value: 3.6053552181043913
- type: nauc_recall_at_5_max
value: -5.207647070615579
- type: nauc_recall_at_5_std
value: -19.894910854272492
- type: ndcg_at_1
value: 34.282000000000004
- type: ndcg_at_10
value: 59.53000000000001
- type: ndcg_at_100
value: 62.187000000000005
- type: ndcg_at_1000
value: 62.243
- type: ndcg_at_20
value: 61.451
- type: ndcg_at_3
value: 49.393
- type: ndcg_at_5
value: 54.771
- type: precision_at_1
value: 34.282000000000004
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.769
- type: precision_at_3
value: 20.104
- type: precision_at_5
value: 14.651
- type: recall_at_1
value: 34.282000000000004
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.21799999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 95.377
- type: recall_at_3
value: 60.313
- type: recall_at_5
value: 73.257
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: main_score
value: 53.885000000000005
- type: map_at_1
value: 35.429
- type: map_at_10
value: 47.469
- type: map_at_100
value: 48.997
- type: map_at_1000
value: 49.117
- type: map_at_20
value: 48.324
- type: map_at_3
value: 43.835
- type: map_at_5
value: 46.043
- type: mrr_at_1
value: 43.34763948497854
- type: mrr_at_10
value: 53.258623430297234
- type: mrr_at_100
value: 53.99123884299005
- type: mrr_at_1000
value: 54.02458101713216
- type: mrr_at_20
value: 53.695964669618945
- type: mrr_at_3
value: 50.81068192656173
- type: mrr_at_5
value: 52.45588936576058
- type: nauc_map_at_1000_diff1
value: 51.55382824218782
- type: nauc_map_at_1000_max
value: 31.855350695084606
- type: nauc_map_at_1000_std
value: -5.465862008150992
- type: nauc_map_at_100_diff1
value: 51.55889312452534
- type: nauc_map_at_100_max
value: 31.88429637207401
- type: nauc_map_at_100_std
value: -5.40805152544196
- type: nauc_map_at_10_diff1
value: 51.6592677505875
- type: nauc_map_at_10_max
value: 31.554425233617543
- type: nauc_map_at_10_std
value: -6.125756131339046
- type: nauc_map_at_1_diff1
value: 55.6889617582672
- type: nauc_map_at_1_max
value: 27.821166966868176
- type: nauc_map_at_1_std
value: -5.778838498211728
- type: nauc_map_at_20_diff1
value: 51.70520970992564
- type: nauc_map_at_20_max
value: 31.811676633900465
- type: nauc_map_at_20_std
value: -5.463596751904718
- type: nauc_map_at_3_diff1
value: 53.206169626589606
- type: nauc_map_at_3_max
value: 31.64373830824983
- type: nauc_map_at_3_std
value: -6.054761451312827
- type: nauc_map_at_5_diff1
value: 52.37308971673694
- type: nauc_map_at_5_max
value: 31.974302019633644
- type: nauc_map_at_5_std
value: -6.302653399940531
- type: nauc_mrr_at_1000_diff1
value: 49.345152231490616
- type: nauc_mrr_at_1000_max
value: 33.49789501712511
- type: nauc_mrr_at_1000_std
value: -6.054730861163538
- type: nauc_mrr_at_100_diff1
value: 49.3387577601307
- type: nauc_mrr_at_100_max
value: 33.48149992464187
- type: nauc_mrr_at_100_std
value: -6.061177137579308
- type: nauc_mrr_at_10_diff1
value: 49.08312288449718
- type: nauc_mrr_at_10_max
value: 33.470393322577465
- type: nauc_mrr_at_10_std
value: -6.180286430216975
- type: nauc_mrr_at_1_diff1
value: 52.43364978537192
- type: nauc_mrr_at_1_max
value: 31.521755633355713
- type: nauc_mrr_at_1_std
value: -7.002499524130836
- type: nauc_mrr_at_20_diff1
value: 49.311059224991766
- type: nauc_mrr_at_20_max
value: 33.538523037692144
- type: nauc_mrr_at_20_std
value: -6.034619474981136
- type: nauc_mrr_at_3_diff1
value: 49.90489868439366
- type: nauc_mrr_at_3_max
value: 34.400493912164606
- type: nauc_mrr_at_3_std
value: -6.028875320994629
- type: nauc_mrr_at_5_diff1
value: 49.033661898983475
- type: nauc_mrr_at_5_max
value: 33.732315350193936
- type: nauc_mrr_at_5_std
value: -6.272548556330368
- type: nauc_ndcg_at_1000_diff1
value: 49.81681892539247
- type: nauc_ndcg_at_1000_max
value: 33.06518006062093
- type: nauc_ndcg_at_1000_std
value: -4.282105713014755
- type: nauc_ndcg_at_100_diff1
value: 49.42362108857786
- type: nauc_ndcg_at_100_max
value: 32.92024325540483
- type: nauc_ndcg_at_100_std
value: -3.7786765305496717
- type: nauc_ndcg_at_10_diff1
value: 48.83102435475594
- type: nauc_ndcg_at_10_max
value: 31.898404563611958
- type: nauc_ndcg_at_10_std
value: -6.2024003866707
- type: nauc_ndcg_at_1_diff1
value: 52.43364978537192
- type: nauc_ndcg_at_1_max
value: 31.521755633355713
- type: nauc_ndcg_at_1_std
value: -7.002499524130836
- type: nauc_ndcg_at_20_diff1
value: 49.466526454438316
- type: nauc_ndcg_at_20_max
value: 32.424462698701674
- type: nauc_ndcg_at_20_std
value: -4.520809563712905
- type: nauc_ndcg_at_3_diff1
value: 50.997884562583884
- type: nauc_ndcg_at_3_max
value: 33.26787046916917
- type: nauc_ndcg_at_3_std
value: -6.340699471083753
- type: nauc_ndcg_at_5_diff1
value: 49.68314458398097
- type: nauc_ndcg_at_5_max
value: 32.80910071143984
- type: nauc_ndcg_at_5_std
value: -6.734495576445887
- type: nauc_precision_at_1000_diff1
value: -24.18940012795299
- type: nauc_precision_at_1000_max
value: -10.995343674356896
- type: nauc_precision_at_1000_std
value: -8.298841004724856
- type: nauc_precision_at_100_diff1
value: -18.104939577865935
- type: nauc_precision_at_100_max
value: -1.3757613100627637
- type: nauc_precision_at_100_std
value: 0.07661922190466432
- type: nauc_precision_at_10_diff1
value: 3.9624459059275967
- type: nauc_precision_at_10_max
value: 14.841561593450391
- type: nauc_precision_at_10_std
value: -2.485374333613117
- type: nauc_precision_at_1_diff1
value: 52.43364978537192
- type: nauc_precision_at_1_max
value: 31.521755633355713
- type: nauc_precision_at_1_std
value: -7.002499524130836
- type: nauc_precision_at_20_diff1
value: -4.4791763436505265
- type: nauc_precision_at_20_max
value: 9.157872836996276
- type: nauc_precision_at_20_std
value: 2.086903518342088
- type: nauc_precision_at_3_diff1
value: 28.480888018235568
- type: nauc_precision_at_3_max
value: 30.34526267718485
- type: nauc_precision_at_3_std
value: -6.3006706923866025
- type: nauc_precision_at_5_diff1
value: 16.488039195453517
- type: nauc_precision_at_5_max
value: 24.593477099241852
- type: nauc_precision_at_5_std
value: -5.316448107840636
- type: nauc_recall_at_1000_diff1
value: 34.715187316533076
- type: nauc_recall_at_1000_max
value: 58.2266544684947
- type: nauc_recall_at_1000_std
value: 63.85237636398278
- type: nauc_recall_at_100_diff1
value: 36.08623826028132
- type: nauc_recall_at_100_max
value: 33.05011429439473
- type: nauc_recall_at_100_std
value: 16.559545021212564
- type: nauc_recall_at_10_diff1
value: 39.76738610714205
- type: nauc_recall_at_10_max
value: 28.233045706945997
- type: nauc_recall_at_10_std
value: -5.13243784043598
- type: nauc_recall_at_1_diff1
value: 55.6889617582672
- type: nauc_recall_at_1_max
value: 27.821166966868176
- type: nauc_recall_at_1_std
value: -5.778838498211728
- type: nauc_recall_at_20_diff1
value: 41.18682480073759
- type: nauc_recall_at_20_max
value: 29.525993239296945
- type: nauc_recall_at_20_std
value: 1.5003598438954298
- type: nauc_recall_at_3_diff1
value: 48.31879460301157
- type: nauc_recall_at_3_max
value: 32.93751306970167
- type: nauc_recall_at_3_std
value: -5.28070084211707
- type: nauc_recall_at_5_diff1
value: 44.327686388315435
- type: nauc_recall_at_5_max
value: 32.04823486234599
- type: nauc_recall_at_5_std
value: -6.4221525602778256
- type: ndcg_at_1
value: 43.348
- type: ndcg_at_10
value: 53.885000000000005
- type: ndcg_at_100
value: 59.204
- type: ndcg_at_1000
value: 60.744
- type: ndcg_at_20
value: 55.995
- type: ndcg_at_3
value: 49.112
- type: ndcg_at_5
value: 51.61900000000001
- type: precision_at_1
value: 43.348
- type: precision_at_10
value: 10.242999999999999
- type: precision_at_100
value: 1.6150000000000002
- type: precision_at_1000
value: 0.203
- type: precision_at_20
value: 6.066
- type: precision_at_3
value: 23.605
- type: precision_at_5
value: 17.024
- type: recall_at_1
value: 35.429
- type: recall_at_10
value: 65.77199999999999
- type: recall_at_100
value: 87.89
- type: recall_at_1000
value: 97.13000000000001
- type: recall_at_20
value: 73.299
- type: recall_at_3
value: 52.034000000000006
- type: recall_at_5
value: 58.96
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: main_score
value: 49.55
- type: map_at_1
value: 31.684
- type: map_at_10
value: 43.258
- type: map_at_100
value: 44.628
- type: map_at_1000
value: 44.761
- type: map_at_20
value: 44.015
- type: map_at_3
value: 39.778000000000006
- type: map_at_5
value: 41.643
- type: mrr_at_1
value: 39.87261146496815
- type: mrr_at_10
value: 49.31978566373469
- type: mrr_at_100
value: 49.94922739445482
- type: mrr_at_1000
value: 49.990325601254106
- type: mrr_at_20
value: 49.70597468576704
- type: mrr_at_3
value: 47.070063694267546
- type: mrr_at_5
value: 48.23248407643316
- type: nauc_map_at_1000_diff1
value: 53.44044712371752
- type: nauc_map_at_1000_max
value: 34.5651440062204
- type: nauc_map_at_1000_std
value: -0.9814384609230475
- type: nauc_map_at_100_diff1
value: 53.429004435388464
- type: nauc_map_at_100_max
value: 34.52038957273436
- type: nauc_map_at_100_std
value: -1.1021936362699805
- type: nauc_map_at_10_diff1
value: 53.879128574022005
- type: nauc_map_at_10_max
value: 33.74771524140917
- type: nauc_map_at_10_std
value: -2.945132777205236
- type: nauc_map_at_1_diff1
value: 60.25159799695403
- type: nauc_map_at_1_max
value: 26.843892985235808
- type: nauc_map_at_1_std
value: -9.618702739509093
- type: nauc_map_at_20_diff1
value: 53.56789898225283
- type: nauc_map_at_20_max
value: 34.11628845872402
- type: nauc_map_at_20_std
value: -2.024376635870884
- type: nauc_map_at_3_diff1
value: 54.45882099014072
- type: nauc_map_at_3_max
value: 31.29495446507793
- type: nauc_map_at_3_std
value: -6.391948228781555
- type: nauc_map_at_5_diff1
value: 54.20536489050697
- type: nauc_map_at_5_max
value: 32.31001487256826
- type: nauc_map_at_5_std
value: -5.050953263346934
- type: nauc_mrr_at_1000_diff1
value: 50.835858995999125
- type: nauc_mrr_at_1000_max
value: 38.20717381701079
- type: nauc_mrr_at_1000_std
value: 4.174163368228787
- type: nauc_mrr_at_100_diff1
value: 50.827072441041224
- type: nauc_mrr_at_100_max
value: 38.21077622034756
- type: nauc_mrr_at_100_std
value: 4.1951082737013365
- type: nauc_mrr_at_10_diff1
value: 50.90578491570948
- type: nauc_mrr_at_10_max
value: 38.19229691746408
- type: nauc_mrr_at_10_std
value: 3.8290750066335546
- type: nauc_mrr_at_1_diff1
value: 54.807021746871186
- type: nauc_mrr_at_1_max
value: 37.09225642043841
- type: nauc_mrr_at_1_std
value: 0.5654547513131355
- type: nauc_mrr_at_20_diff1
value: 50.86247832095378
- type: nauc_mrr_at_20_max
value: 38.19277867384178
- type: nauc_mrr_at_20_std
value: 4.098932316791841
- type: nauc_mrr_at_3_diff1
value: 50.788934370903036
- type: nauc_mrr_at_3_max
value: 37.72130561895659
- type: nauc_mrr_at_3_std
value: 2.7339370381517583
- type: nauc_mrr_at_5_diff1
value: 50.72543792525547
- type: nauc_mrr_at_5_max
value: 37.57740908475375
- type: nauc_mrr_at_5_std
value: 2.742881431085094
- type: nauc_ndcg_at_1000_diff1
value: 50.89692885407576
- type: nauc_ndcg_at_1000_max
value: 37.250583054716955
- type: nauc_ndcg_at_1000_std
value: 5.552279826578831
- type: nauc_ndcg_at_100_diff1
value: 50.624606875496944
- type: nauc_ndcg_at_100_max
value: 37.1024514234627
- type: nauc_ndcg_at_100_std
value: 5.495892760032762
- type: nauc_ndcg_at_10_diff1
value: 51.910387255793445
- type: nauc_ndcg_at_10_max
value: 36.71168418905039
- type: nauc_ndcg_at_10_std
value: 2.3064115117905217
- type: nauc_ndcg_at_1_diff1
value: 54.807021746871186
- type: nauc_ndcg_at_1_max
value: 37.09225642043841
- type: nauc_ndcg_at_1_std
value: 0.5654547513131355
- type: nauc_ndcg_at_20_diff1
value: 51.43416588546778
- type: nauc_ndcg_at_20_max
value: 36.76387180172346
- type: nauc_ndcg_at_20_std
value: 3.7012798827049718
- type: nauc_ndcg_at_3_diff1
value: 50.91198494475423
- type: nauc_ndcg_at_3_max
value: 34.92770670756687
- type: nauc_ndcg_at_3_std
value: -0.9071486759887368
- type: nauc_ndcg_at_5_diff1
value: 51.63559468683886
- type: nauc_ndcg_at_5_max
value: 34.86849679864564
- type: nauc_ndcg_at_5_std
value: -0.734837221224976
- type: nauc_precision_at_1000_diff1
value: -13.43645457127175
- type: nauc_precision_at_1000_max
value: 12.71162105198664
- type: nauc_precision_at_1000_std
value: 33.175399007040255
- type: nauc_precision_at_100_diff1
value: -8.549834785105412
- type: nauc_precision_at_100_max
value: 22.47383497331883
- type: nauc_precision_at_100_std
value: 39.09108761430844
- type: nauc_precision_at_10_diff1
value: 7.556572451100043
- type: nauc_precision_at_10_max
value: 35.35285122987575
- type: nauc_precision_at_10_std
value: 29.417466305615967
- type: nauc_precision_at_1_diff1
value: 54.807021746871186
- type: nauc_precision_at_1_max
value: 37.09225642043841
- type: nauc_precision_at_1_std
value: 0.5654547513131355
- type: nauc_precision_at_20_diff1
value: -0.550158641635712
- type: nauc_precision_at_20_max
value: 29.9068430006187
- type: nauc_precision_at_20_std
value: 33.920603132821185
- type: nauc_precision_at_3_diff1
value: 25.551264664276687
- type: nauc_precision_at_3_max
value: 37.59463225854679
- type: nauc_precision_at_3_std
value: 13.707295021359043
- type: nauc_precision_at_5_diff1
value: 17.76136129817151
- type: nauc_precision_at_5_max
value: 35.85363807255972
- type: nauc_precision_at_5_std
value: 19.48470876841111
- type: nauc_recall_at_1000_diff1
value: 37.1593620123866
- type: nauc_recall_at_1000_max
value: 46.29322536951135
- type: nauc_recall_at_1000_std
value: 51.47312657083967
- type: nauc_recall_at_100_diff1
value: 37.7542224949536
- type: nauc_recall_at_100_max
value: 38.84120637703135
- type: nauc_recall_at_100_std
value: 28.839672572221925
- type: nauc_recall_at_10_diff1
value: 46.24130302658384
- type: nauc_recall_at_10_max
value: 35.89001724712849
- type: nauc_recall_at_10_std
value: 6.985137790828618
- type: nauc_recall_at_1_diff1
value: 60.25159799695403
- type: nauc_recall_at_1_max
value: 26.843892985235808
- type: nauc_recall_at_1_std
value: -9.618702739509093
- type: nauc_recall_at_20_diff1
value: 43.63576680886187
- type: nauc_recall_at_20_max
value: 36.79079644708101
- type: nauc_recall_at_20_std
value: 13.81561928605839
- type: nauc_recall_at_3_diff1
value: 48.2299322140522
- type: nauc_recall_at_3_max
value: 30.038088484376203
- type: nauc_recall_at_3_std
value: -4.871116183843762
- type: nauc_recall_at_5_diff1
value: 47.22331872695983
- type: nauc_recall_at_5_max
value: 30.398541477173136
- type: nauc_recall_at_5_std
value: -3.2038541888528957
- type: ndcg_at_1
value: 39.873
- type: ndcg_at_10
value: 49.55
- type: ndcg_at_100
value: 53.809
- type: ndcg_at_1000
value: 55.767999999999994
- type: ndcg_at_20
value: 51.275999999999996
- type: ndcg_at_3
value: 44.91
- type: ndcg_at_5
value: 46.855999999999995
- type: precision_at_1
value: 39.873
- type: precision_at_10
value: 9.65
- type: precision_at_100
value: 1.522
- type: precision_at_1000
value: 0.196
- type: precision_at_20
value: 5.701
- type: precision_at_3
value: 22.166
- type: precision_at_5
value: 15.643
- type: recall_at_1
value: 31.684
- type: recall_at_10
value: 60.69
- type: recall_at_100
value: 78.521
- type: recall_at_1000
value: 91.02900000000001
- type: recall_at_20
value: 66.973
- type: recall_at_3
value: 46.807
- type: recall_at_5
value: 52.402
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: main_score
value: 62.686
- type: map_at_1
value: 43.856
- type: map_at_10
value: 57.056
- type: map_at_100
value: 58.048
- type: map_at_1000
value: 58.092
- type: map_at_20
value: 57.684000000000005
- type: map_at_3
value: 53.958
- type: map_at_5
value: 55.80500000000001
- type: mrr_at_1
value: 50.03134796238244
- type: mrr_at_10
value: 60.31022043091019
- type: mrr_at_100
value: 60.91892338857461
- type: mrr_at_1000
value: 60.93770463536649
- type: mrr_at_20
value: 60.705642387392736
- type: mrr_at_3
value: 58.286311389759746
- type: mrr_at_5
value: 59.49320794148393
- type: nauc_map_at_1000_diff1
value: 54.849140197256695
- type: nauc_map_at_1000_max
value: 38.978448968260224
- type: nauc_map_at_1000_std
value: 0.4955439383268162
- type: nauc_map_at_100_diff1
value: 54.824334747823364
- type: nauc_map_at_100_max
value: 38.959443109450994
- type: nauc_map_at_100_std
value: 0.49626092018886037
- type: nauc_map_at_10_diff1
value: 54.778189277103394
- type: nauc_map_at_10_max
value: 38.20972191654546
- type: nauc_map_at_10_std
value: -0.7239823837455759
- type: nauc_map_at_1_diff1
value: 58.74017164752485
- type: nauc_map_at_1_max
value: 31.528974862589585
- type: nauc_map_at_1_std
value: -3.273824691929492
- type: nauc_map_at_20_diff1
value: 54.78943693416187
- type: nauc_map_at_20_max
value: 38.77930316443076
- type: nauc_map_at_20_std
value: 0.25607460088355544
- type: nauc_map_at_3_diff1
value: 55.68313410225767
- type: nauc_map_at_3_max
value: 36.22847284104399
- type: nauc_map_at_3_std
value: -3.010979639100503
- type: nauc_map_at_5_diff1
value: 55.11385094420661
- type: nauc_map_at_5_max
value: 37.319681045490924
- type: nauc_map_at_5_std
value: -2.156640733221061
- type: nauc_mrr_at_1000_diff1
value: 54.504759468380705
- type: nauc_mrr_at_1000_max
value: 40.58849492650406
- type: nauc_mrr_at_1000_std
value: 1.8226622175866118
- type: nauc_mrr_at_100_diff1
value: 54.4918034449886
- type: nauc_mrr_at_100_max
value: 40.59202728933427
- type: nauc_mrr_at_100_std
value: 1.8276428096536335
- type: nauc_mrr_at_10_diff1
value: 54.33603399493329
- type: nauc_mrr_at_10_max
value: 40.58896878978089
- type: nauc_mrr_at_10_std
value: 1.5733340909114375
- type: nauc_mrr_at_1_diff1
value: 58.062410036466105
- type: nauc_mrr_at_1_max
value: 37.660958859966506
- type: nauc_mrr_at_1_std
value: 0.029007600674170648
- type: nauc_mrr_at_20_diff1
value: 54.43793386924358
- type: nauc_mrr_at_20_max
value: 40.66773423875307
- type: nauc_mrr_at_20_std
value: 1.891967891797154
- type: nauc_mrr_at_3_diff1
value: 54.77901284537966
- type: nauc_mrr_at_3_max
value: 40.182219821206964
- type: nauc_mrr_at_3_std
value: 0.8911935034597871
- type: nauc_mrr_at_5_diff1
value: 54.466068837163675
- type: nauc_mrr_at_5_max
value: 40.334996916684126
- type: nauc_mrr_at_5_std
value: 0.9460830492892364
- type: nauc_ndcg_at_1000_diff1
value: 53.8465376860938
- type: nauc_ndcg_at_1000_max
value: 41.63158111016696
- type: nauc_ndcg_at_1000_std
value: 3.864205884257578
- type: nauc_ndcg_at_100_diff1
value: 53.4025864436944
- type: nauc_ndcg_at_100_max
value: 41.805453995307914
- type: nauc_ndcg_at_100_std
value: 4.36777557904857
- type: nauc_ndcg_at_10_diff1
value: 52.96034987157544
- type: nauc_ndcg_at_10_max
value: 40.7601173480795
- type: nauc_ndcg_at_10_std
value: 1.905824035879141
- type: nauc_ndcg_at_1_diff1
value: 58.062410036466105
- type: nauc_ndcg_at_1_max
value: 37.660958859966506
- type: nauc_ndcg_at_1_std
value: 0.029007600674170648
- type: nauc_ndcg_at_20_diff1
value: 53.2834771889242
- type: nauc_ndcg_at_20_max
value: 41.713541932946406
- type: nauc_ndcg_at_20_std
value: 3.865102828793311
- type: nauc_ndcg_at_3_diff1
value: 54.03389464372289
- type: nauc_ndcg_at_3_max
value: 38.41449914649933
- type: nauc_ndcg_at_3_std
value: -0.886276189886313
- type: nauc_ndcg_at_5_diff1
value: 53.456413320299
- type: nauc_ndcg_at_5_max
value: 39.49048882649335
- type: nauc_ndcg_at_5_std
value: -0.42692690160443814
- type: nauc_precision_at_1000_diff1
value: -14.770791653274824
- type: nauc_precision_at_1000_max
value: 21.479874538905246
- type: nauc_precision_at_1000_std
value: 28.607024261300207
- type: nauc_precision_at_100_diff1
value: -12.189696449878126
- type: nauc_precision_at_100_max
value: 26.69785787492456
- type: nauc_precision_at_100_std
value: 33.59098307467553
- type: nauc_precision_at_10_diff1
value: 6.922968330978399
- type: nauc_precision_at_10_max
value: 34.52138344123087
- type: nauc_precision_at_10_std
value: 21.768427637079952
- type: nauc_precision_at_1_diff1
value: 58.062410036466105
- type: nauc_precision_at_1_max
value: 37.660958859966506
- type: nauc_precision_at_1_std
value: 0.029007600674170648
- type: nauc_precision_at_20_diff1
value: -0.6837867902179278
- type: nauc_precision_at_20_max
value: 33.98683709011133
- type: nauc_precision_at_20_std
value: 30.8845561918902
- type: nauc_precision_at_3_diff1
value: 28.195043041120847
- type: nauc_precision_at_3_max
value: 37.659916094938836
- type: nauc_precision_at_3_std
value: 7.226520146634867
- type: nauc_precision_at_5_diff1
value: 16.633667288096245
- type: nauc_precision_at_5_max
value: 34.90176597404891
- type: nauc_precision_at_5_std
value: 12.421585442334088
- type: nauc_recall_at_1000_diff1
value: 45.20743732415397
- type: nauc_recall_at_1000_max
value: 72.77115913579242
- type: nauc_recall_at_1000_std
value: 70.48328496679083
- type: nauc_recall_at_100_diff1
value: 38.56282680810794
- type: nauc_recall_at_100_max
value: 55.46797683321103
- type: nauc_recall_at_100_std
value: 36.878791151929136
- type: nauc_recall_at_10_diff1
value: 44.18252051452362
- type: nauc_recall_at_10_max
value: 43.33391810040086
- type: nauc_recall_at_10_std
value: 6.663378192277723
- type: nauc_recall_at_1_diff1
value: 58.74017164752485
- type: nauc_recall_at_1_max
value: 31.528974862589585
- type: nauc_recall_at_1_std
value: -3.273824691929492
- type: nauc_recall_at_20_diff1
value: 44.19944231642417
- type: nauc_recall_at_20_max
value: 49.401101483915866
- type: nauc_recall_at_20_std
value: 18.97803841673839
- type: nauc_recall_at_3_diff1
value: 49.56378985428704
- type: nauc_recall_at_3_max
value: 36.434210616870224
- type: nauc_recall_at_3_std
value: -2.850559971607616
- type: nauc_recall_at_5_diff1
value: 47.37107217086109
- type: nauc_recall_at_5_max
value: 39.0236745509895
- type: nauc_recall_at_5_std
value: -1.7402454457937195
- type: ndcg_at_1
value: 50.031000000000006
- type: ndcg_at_10
value: 62.686
- type: ndcg_at_100
value: 66.403
- type: ndcg_at_1000
value: 67.241
- type: ndcg_at_20
value: 64.37899999999999
- type: ndcg_at_3
value: 57.859
- type: ndcg_at_5
value: 60.375
- type: precision_at_1
value: 50.031000000000006
- type: precision_at_10
value: 9.856
- type: precision_at_100
value: 1.266
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 5.489
- type: precision_at_3
value: 25.746999999999996
- type: precision_at_5
value: 17.492
- type: recall_at_1
value: 43.856
- type: recall_at_10
value: 75.824
- type: recall_at_100
value: 91.622
- type: recall_at_1000
value: 97.538
- type: recall_at_20
value: 81.951
- type: recall_at_3
value: 63.016000000000005
- type: recall_at_5
value: 69.18299999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: main_score
value: 43.983
- type: map_at_1
value: 28.942
- type: map_at_10
value: 38.621
- type: map_at_100
value: 39.7
- type: map_at_1000
value: 39.766
- type: map_at_20
value: 39.262
- type: map_at_3
value: 35.719
- type: map_at_5
value: 37.378
- type: mrr_at_1
value: 31.29943502824859
- type: mrr_at_10
value: 40.76463994260603
- type: mrr_at_100
value: 41.67073617629083
- type: mrr_at_1000
value: 41.717446259457105
- type: mrr_at_20
value: 41.32577374689195
- type: mrr_at_3
value: 37.984934086628996
- type: mrr_at_5
value: 39.64595103578152
- type: nauc_map_at_1000_diff1
value: 43.64461679688985
- type: nauc_map_at_1000_max
value: 31.53717883948204
- type: nauc_map_at_1000_std
value: 1.193745788248017
- type: nauc_map_at_100_diff1
value: 43.63847825079489
- type: nauc_map_at_100_max
value: 31.536602619279165
- type: nauc_map_at_100_std
value: 1.2001240243342401
- type: nauc_map_at_10_diff1
value: 43.845991987142014
- type: nauc_map_at_10_max
value: 31.27509937344113
- type: nauc_map_at_10_std
value: 0.7327934840520994
- type: nauc_map_at_1_diff1
value: 50.62269273984579
- type: nauc_map_at_1_max
value: 30.16325757909521
- type: nauc_map_at_1_std
value: -0.6398875136233392
- type: nauc_map_at_20_diff1
value: 43.630758403790914
- type: nauc_map_at_20_max
value: 31.408258098047703
- type: nauc_map_at_20_std
value: 1.12616034652217
- type: nauc_map_at_3_diff1
value: 44.823493567359456
- type: nauc_map_at_3_max
value: 31.075886347614496
- type: nauc_map_at_3_std
value: -0.25126874515735426
- type: nauc_map_at_5_diff1
value: 43.79768853087658
- type: nauc_map_at_5_max
value: 31.091080995725324
- type: nauc_map_at_5_std
value: 0.16440771782544047
- type: nauc_mrr_at_1000_diff1
value: 42.7865400752329
- type: nauc_mrr_at_1000_max
value: 32.84731670326893
- type: nauc_mrr_at_1000_std
value: 2.6067637582013825
- type: nauc_mrr_at_100_diff1
value: 42.771741548331065
- type: nauc_mrr_at_100_max
value: 32.85324232845987
- type: nauc_mrr_at_100_std
value: 2.6092786694308376
- type: nauc_mrr_at_10_diff1
value: 42.82969738870672
- type: nauc_mrr_at_10_max
value: 32.69407549631432
- type: nauc_mrr_at_10_std
value: 2.302903910016054
- type: nauc_mrr_at_1_diff1
value: 49.05638333657571
- type: nauc_mrr_at_1_max
value: 33.12030717171514
- type: nauc_mrr_at_1_std
value: 1.3278035087690774
- type: nauc_mrr_at_20_diff1
value: 42.74267239536286
- type: nauc_mrr_at_20_max
value: 32.78571108973092
- type: nauc_mrr_at_20_std
value: 2.5932669908758643
- type: nauc_mrr_at_3_diff1
value: 43.69963426089187
- type: nauc_mrr_at_3_max
value: 32.78193126956233
- type: nauc_mrr_at_3_std
value: 1.634874463134699
- type: nauc_mrr_at_5_diff1
value: 42.838630647832524
- type: nauc_mrr_at_5_max
value: 32.459318735260545
- type: nauc_mrr_at_5_std
value: 1.9412518283209172
- type: nauc_ndcg_at_1000_diff1
value: 41.01253839851583
- type: nauc_ndcg_at_1000_max
value: 32.69570568894237
- type: nauc_ndcg_at_1000_std
value: 3.4254737113410343
- type: nauc_ndcg_at_100_diff1
value: 40.62589243745832
- type: nauc_ndcg_at_100_max
value: 32.664990655736126
- type: nauc_ndcg_at_100_std
value: 3.799569445326048
- type: nauc_ndcg_at_10_diff1
value: 41.31658753735306
- type: nauc_ndcg_at_10_max
value: 31.511946320339295
- type: nauc_ndcg_at_10_std
value: 2.0492930500796662
- type: nauc_ndcg_at_1_diff1
value: 49.05638333657571
- type: nauc_ndcg_at_1_max
value: 33.12030717171514
- type: nauc_ndcg_at_1_std
value: 1.3278035087690774
- type: nauc_ndcg_at_20_diff1
value: 40.66188223212841
- type: nauc_ndcg_at_20_max
value: 31.926240431497476
- type: nauc_ndcg_at_20_std
value: 3.370398664595343
- type: nauc_ndcg_at_3_diff1
value: 43.035580180241
- type: nauc_ndcg_at_3_max
value: 31.363874129878404
- type: nauc_ndcg_at_3_std
value: 0.1422507242819929
- type: nauc_ndcg_at_5_diff1
value: 41.29049003955878
- type: nauc_ndcg_at_5_max
value: 31.112034994977737
- type: nauc_ndcg_at_5_std
value: 0.860179279828966
- type: nauc_precision_at_1000_diff1
value: -12.41854465881981
- type: nauc_precision_at_1000_max
value: 14.706779246590548
- type: nauc_precision_at_1000_std
value: 9.812804367375206
- type: nauc_precision_at_100_diff1
value: 2.797520107808461
- type: nauc_precision_at_100_max
value: 24.335873541811406
- type: nauc_precision_at_100_std
value: 12.87186398750545
- type: nauc_precision_at_10_diff1
value: 24.530962799265847
- type: nauc_precision_at_10_max
value: 31.00772010798733
- type: nauc_precision_at_10_std
value: 6.696733001548185
- type: nauc_precision_at_1_diff1
value: 49.05638333657571
- type: nauc_precision_at_1_max
value: 33.12030717171514
- type: nauc_precision_at_1_std
value: 1.3278035087690774
- type: nauc_precision_at_20_diff1
value: 16.25028416351204
- type: nauc_precision_at_20_max
value: 29.629326492027342
- type: nauc_precision_at_20_std
value: 11.085888573121679
- type: nauc_precision_at_3_diff1
value: 33.923667689694256
- type: nauc_precision_at_3_max
value: 33.5859782361996
- type: nauc_precision_at_3_std
value: 1.9468331086918693
- type: nauc_precision_at_5_diff1
value: 27.917827233088875
- type: nauc_precision_at_5_max
value: 33.13290043423535
- type: nauc_precision_at_5_std
value: 3.800870695945311
- type: nauc_recall_at_1000_diff1
value: 9.680283388428789
- type: nauc_recall_at_1000_max
value: 49.479399284871235
- type: nauc_recall_at_1000_std
value: 31.506985071436088
- type: nauc_recall_at_100_diff1
value: 23.607673377885448
- type: nauc_recall_at_100_max
value: 36.637750366403935
- type: nauc_recall_at_100_std
value: 18.30770690564224
- type: nauc_recall_at_10_diff1
value: 33.199683418312446
- type: nauc_recall_at_10_max
value: 29.63115497012312
- type: nauc_recall_at_10_std
value: 4.813200391480566
- type: nauc_recall_at_1_diff1
value: 50.62269273984579
- type: nauc_recall_at_1_max
value: 30.16325757909521
- type: nauc_recall_at_1_std
value: -0.6398875136233392
- type: nauc_recall_at_20_diff1
value: 29.16488387844995
- type: nauc_recall_at_20_max
value: 30.788019479459
- type: nauc_recall_at_20_std
value: 11.031953917298853
- type: nauc_recall_at_3_diff1
value: 38.215351600417065
- type: nauc_recall_at_3_max
value: 29.619887154236128
- type: nauc_recall_at_3_std
value: -0.13237298980339363
- type: nauc_recall_at_5_diff1
value: 33.93788042633265
- type: nauc_recall_at_5_max
value: 28.67185092656741
- type: nauc_recall_at_5_std
value: 1.316700201091445
- type: ndcg_at_1
value: 31.299
- type: ndcg_at_10
value: 43.983
- type: ndcg_at_100
value: 48.992999999999995
- type: ndcg_at_1000
value: 50.757
- type: ndcg_at_20
value: 46.152
- type: ndcg_at_3
value: 38.367000000000004
- type: ndcg_at_5
value: 41.171
- type: precision_at_1
value: 31.299
- type: precision_at_10
value: 6.734
- type: precision_at_100
value: 0.972
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_20
value: 3.898
- type: precision_at_3
value: 16.121
- type: precision_at_5
value: 11.344999999999999
- type: recall_at_1
value: 28.942
- type: recall_at_10
value: 58.343999999999994
- type: recall_at_100
value: 80.82300000000001
- type: recall_at_1000
value: 94.348
- type: recall_at_20
value: 66.449
- type: recall_at_3
value: 43.415
- type: recall_at_5
value: 50.007999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: main_score
value: 33.144
- type: map_at_1
value: 19.41
- type: map_at_10
value: 27.802
- type: map_at_100
value: 29.157
- type: map_at_1000
value: 29.274
- type: map_at_20
value: 28.549000000000003
- type: map_at_3
value: 25.052999999999997
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.756218905472636
- type: mrr_at_10
value: 32.3623450209271
- type: mrr_at_100
value: 33.3648208444617
- type: mrr_at_1000
value: 33.427688215162185
- type: mrr_at_20
value: 32.93723485575758
- type: mrr_at_3
value: 29.539800995024883
- type: mrr_at_5
value: 31.156716417910452
- type: nauc_map_at_1000_diff1
value: 36.196391248081284
- type: nauc_map_at_1000_max
value: 25.650644367091495
- type: nauc_map_at_1000_std
value: 6.130340697729844
- type: nauc_map_at_100_diff1
value: 36.138890642411376
- type: nauc_map_at_100_max
value: 25.587124763888518
- type: nauc_map_at_100_std
value: 6.129336379055536
- type: nauc_map_at_10_diff1
value: 36.254426743566775
- type: nauc_map_at_10_max
value: 25.465599906543034
- type: nauc_map_at_10_std
value: 5.880280378112879
- type: nauc_map_at_1_diff1
value: 42.890551563179976
- type: nauc_map_at_1_max
value: 25.813805281076956
- type: nauc_map_at_1_std
value: 5.150718386163028
- type: nauc_map_at_20_diff1
value: 35.98551587974314
- type: nauc_map_at_20_max
value: 25.501540521726636
- type: nauc_map_at_20_std
value: 5.858703157458749
- type: nauc_map_at_3_diff1
value: 37.646558039577734
- type: nauc_map_at_3_max
value: 26.138491471124247
- type: nauc_map_at_3_std
value: 6.0487505175540734
- type: nauc_map_at_5_diff1
value: 36.817582976153695
- type: nauc_map_at_5_max
value: 25.398200211121146
- type: nauc_map_at_5_std
value: 6.31126763919522
- type: nauc_mrr_at_1000_diff1
value: 37.313544952847835
- type: nauc_mrr_at_1000_max
value: 26.96218532078988
- type: nauc_mrr_at_1000_std
value: 6.814359224654042
- type: nauc_mrr_at_100_diff1
value: 37.28104407653679
- type: nauc_mrr_at_100_max
value: 26.931243040477256
- type: nauc_mrr_at_100_std
value: 6.800500150841733
- type: nauc_mrr_at_10_diff1
value: 37.315832621275895
- type: nauc_mrr_at_10_max
value: 26.941454225978372
- type: nauc_mrr_at_10_std
value: 6.837046527796884
- type: nauc_mrr_at_1_diff1
value: 43.19904188582958
- type: nauc_mrr_at_1_max
value: 26.975620445904795
- type: nauc_mrr_at_1_std
value: 4.52071008581395
- type: nauc_mrr_at_20_diff1
value: 37.2200524790774
- type: nauc_mrr_at_20_max
value: 26.971494160765847
- type: nauc_mrr_at_20_std
value: 6.716431228783282
- type: nauc_mrr_at_3_diff1
value: 38.46236387340654
- type: nauc_mrr_at_3_max
value: 27.846812992192056
- type: nauc_mrr_at_3_std
value: 6.550711872569794
- type: nauc_mrr_at_5_diff1
value: 37.620346007658476
- type: nauc_mrr_at_5_max
value: 27.031025952102038
- type: nauc_mrr_at_5_std
value: 7.32343760231163
- type: nauc_ndcg_at_1000_diff1
value: 34.95081314840592
- type: nauc_ndcg_at_1000_max
value: 26.89265465124325
- type: nauc_ndcg_at_1000_std
value: 7.854154466831975
- type: nauc_ndcg_at_100_diff1
value: 34.01417812563093
- type: nauc_ndcg_at_100_max
value: 25.792737746436835
- type: nauc_ndcg_at_100_std
value: 7.726584165493833
- type: nauc_ndcg_at_10_diff1
value: 33.895122516474466
- type: nauc_ndcg_at_10_max
value: 25.388442204589612
- type: nauc_ndcg_at_10_std
value: 6.359560223645991
- type: nauc_ndcg_at_1_diff1
value: 43.19904188582958
- type: nauc_ndcg_at_1_max
value: 26.975620445904795
- type: nauc_ndcg_at_1_std
value: 4.52071008581395
- type: nauc_ndcg_at_20_diff1
value: 33.36078689830245
- type: nauc_ndcg_at_20_max
value: 25.531794610571563
- type: nauc_ndcg_at_20_std
value: 6.136658608653248
- type: nauc_ndcg_at_3_diff1
value: 36.44505602530781
- type: nauc_ndcg_at_3_max
value: 26.9104071983157
- type: nauc_ndcg_at_3_std
value: 6.427178520371878
- type: nauc_ndcg_at_5_diff1
value: 35.01384323197442
- type: nauc_ndcg_at_5_max
value: 25.5560447088692
- type: nauc_ndcg_at_5_std
value: 7.3676236760360485
- type: nauc_precision_at_1000_diff1
value: 2.8903331041804514
- type: nauc_precision_at_1000_max
value: 4.059662742366004
- type: nauc_precision_at_1000_std
value: -1.5891687644008334
- type: nauc_precision_at_100_diff1
value: 8.437726471693766
- type: nauc_precision_at_100_max
value: 11.250588557568427
- type: nauc_precision_at_100_std
value: 4.231571164627862
- type: nauc_precision_at_10_diff1
value: 19.57085237210294
- type: nauc_precision_at_10_max
value: 20.973093492003905
- type: nauc_precision_at_10_std
value: 3.197416248152466
- type: nauc_precision_at_1_diff1
value: 43.19904188582958
- type: nauc_precision_at_1_max
value: 26.975620445904795
- type: nauc_precision_at_1_std
value: 4.52071008581395
- type: nauc_precision_at_20_diff1
value: 15.67136554192724
- type: nauc_precision_at_20_max
value: 17.706882621057858
- type: nauc_precision_at_20_std
value: 1.9363472182867714
- type: nauc_precision_at_3_diff1
value: 30.38035695042325
- type: nauc_precision_at_3_max
value: 26.48218693244094
- type: nauc_precision_at_3_std
value: 6.424657705785632
- type: nauc_precision_at_5_diff1
value: 25.272543315171458
- type: nauc_precision_at_5_max
value: 22.32441421311652
- type: nauc_precision_at_5_std
value: 7.4912569081905716
- type: nauc_recall_at_1000_diff1
value: 25.5748044137675
- type: nauc_recall_at_1000_max
value: 43.85796585370269
- type: nauc_recall_at_1000_std
value: 30.0338086596789
- type: nauc_recall_at_100_diff1
value: 22.577080638885093
- type: nauc_recall_at_100_max
value: 23.224511700617477
- type: nauc_recall_at_100_std
value: 15.187963852289313
- type: nauc_recall_at_10_diff1
value: 25.058592299355908
- type: nauc_recall_at_10_max
value: 22.24448483279841
- type: nauc_recall_at_10_std
value: 6.3179089740052765
- type: nauc_recall_at_1_diff1
value: 42.890551563179976
- type: nauc_recall_at_1_max
value: 25.813805281076956
- type: nauc_recall_at_1_std
value: 5.150718386163028
- type: nauc_recall_at_20_diff1
value: 22.433865123187307
- type: nauc_recall_at_20_max
value: 22.739695641511762
- type: nauc_recall_at_20_std
value: 5.362005125538497
- type: nauc_recall_at_3_diff1
value: 32.17919168998616
- type: nauc_recall_at_3_max
value: 26.044028436867357
- type: nauc_recall_at_3_std
value: 7.420349884006329
- type: nauc_recall_at_5_diff1
value: 28.967104573649138
- type: nauc_recall_at_5_max
value: 23.40865848168201
- type: nauc_recall_at_5_std
value: 9.174406147723621
- type: ndcg_at_1
value: 23.756
- type: ndcg_at_10
value: 33.144
- type: ndcg_at_100
value: 39.261
- type: ndcg_at_1000
value: 41.881
- type: ndcg_at_20
value: 35.56
- type: ndcg_at_3
value: 27.927999999999997
- type: ndcg_at_5
value: 30.293999999999997
- type: precision_at_1
value: 23.756
- type: precision_at_10
value: 5.995
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_20
value: 3.688
- type: precision_at_3
value: 13.059999999999999
- type: precision_at_5
value: 9.602
- type: recall_at_1
value: 19.41
- type: recall_at_10
value: 45.074
- type: recall_at_100
value: 71.131
- type: recall_at_1000
value: 89.604
- type: recall_at_20
value: 53.673
- type: recall_at_3
value: 31.055
- type: recall_at_5
value: 36.714999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: main_score
value: 49.675000000000004
- type: map_at_1
value: 33.178999999999995
- type: map_at_10
value: 43.807
- type: map_at_100
value: 45.17
- type: map_at_1000
value: 45.271
- type: map_at_20
value: 44.516
- type: map_at_3
value: 40.813
- type: map_at_5
value: 42.457
- type: mrr_at_1
value: 40.32723772858518
- type: mrr_at_10
value: 49.646867409138814
- type: mrr_at_100
value: 50.493686101426285
- type: mrr_at_1000
value: 50.525386961808834
- type: mrr_at_20
value: 50.120274354884586
- type: mrr_at_3
value: 47.49759384023096
- type: mrr_at_5
value: 48.72473532242535
- type: nauc_map_at_1000_diff1
value: 49.5947127786396
- type: nauc_map_at_1000_max
value: 33.39720045844929
- type: nauc_map_at_1000_std
value: -3.131428593252271
- type: nauc_map_at_100_diff1
value: 49.57797867324617
- type: nauc_map_at_100_max
value: 33.356927974709464
- type: nauc_map_at_100_std
value: -3.1661365376766337
- type: nauc_map_at_10_diff1
value: 49.59294630598952
- type: nauc_map_at_10_max
value: 32.86647346990462
- type: nauc_map_at_10_std
value: -4.1582043443386745
- type: nauc_map_at_1_diff1
value: 53.98646767288695
- type: nauc_map_at_1_max
value: 29.45629077638936
- type: nauc_map_at_1_std
value: -5.621187380771589
- type: nauc_map_at_20_diff1
value: 49.486982890447074
- type: nauc_map_at_20_max
value: 33.11681933406332
- type: nauc_map_at_20_std
value: -3.5826433195146854
- type: nauc_map_at_3_diff1
value: 50.81807107491861
- type: nauc_map_at_3_max
value: 32.32552291988859
- type: nauc_map_at_3_std
value: -3.952946504088928
- type: nauc_map_at_5_diff1
value: 49.70201354274439
- type: nauc_map_at_5_max
value: 32.831846031004886
- type: nauc_map_at_5_std
value: -3.8330488624207737
- type: nauc_mrr_at_1000_diff1
value: 49.04159472507738
- type: nauc_mrr_at_1000_max
value: 35.617600171138676
- type: nauc_mrr_at_1000_std
value: -1.5975830757486646
- type: nauc_mrr_at_100_diff1
value: 49.03848471692094
- type: nauc_mrr_at_100_max
value: 35.61936748662614
- type: nauc_mrr_at_100_std
value: -1.5922053398594729
- type: nauc_mrr_at_10_diff1
value: 48.92463964652612
- type: nauc_mrr_at_10_max
value: 35.37757708992045
- type: nauc_mrr_at_10_std
value: -2.2052028139567303
- type: nauc_mrr_at_1_diff1
value: 52.23915787290734
- type: nauc_mrr_at_1_max
value: 34.393531787632334
- type: nauc_mrr_at_1_std
value: -1.452007661016969
- type: nauc_mrr_at_20_diff1
value: 48.91168438018404
- type: nauc_mrr_at_20_max
value: 35.478962544421876
- type: nauc_mrr_at_20_std
value: -1.8246048423555414
- type: nauc_mrr_at_3_diff1
value: 50.115432665442164
- type: nauc_mrr_at_3_max
value: 35.89093796085569
- type: nauc_mrr_at_3_std
value: -1.4895016313153366
- type: nauc_mrr_at_5_diff1
value: 49.04321261351915
- type: nauc_mrr_at_5_max
value: 35.85730520949451
- type: nauc_mrr_at_5_std
value: -1.68790556880753
- type: nauc_ndcg_at_1000_diff1
value: 48.294697499154374
- type: nauc_ndcg_at_1000_max
value: 35.167410242367595
- type: nauc_ndcg_at_1000_std
value: -0.6346078535914157
- type: nauc_ndcg_at_100_diff1
value: 48.025525283449014
- type: nauc_ndcg_at_100_max
value: 34.79288511776105
- type: nauc_ndcg_at_100_std
value: -0.7823403044086993
- type: nauc_ndcg_at_10_diff1
value: 47.70793258015258
- type: nauc_ndcg_at_10_max
value: 33.09558927880104
- type: nauc_ndcg_at_10_std
value: -4.7793864166260605
- type: nauc_ndcg_at_1_diff1
value: 52.23915787290734
- type: nauc_ndcg_at_1_max
value: 34.393531787632334
- type: nauc_ndcg_at_1_std
value: -1.452007661016969
- type: nauc_ndcg_at_20_diff1
value: 47.354286045074815
- type: nauc_ndcg_at_20_max
value: 33.686648806027975
- type: nauc_ndcg_at_20_std
value: -3.0189085132476556
- type: nauc_ndcg_at_3_diff1
value: 49.68805334316908
- type: nauc_ndcg_at_3_max
value: 34.196077748056496
- type: nauc_ndcg_at_3_std
value: -2.7167289163768436
- type: nauc_ndcg_at_5_diff1
value: 47.94474868912989
- type: nauc_ndcg_at_5_max
value: 34.00261603413051
- type: nauc_ndcg_at_5_std
value: -3.3541028103046115
- type: nauc_precision_at_1000_diff1
value: -12.0150100710755
- type: nauc_precision_at_1000_max
value: 5.332942816568796
- type: nauc_precision_at_1000_std
value: 14.543288479130458
- type: nauc_precision_at_100_diff1
value: -4.920332181588838
- type: nauc_precision_at_100_max
value: 14.42313332017491
- type: nauc_precision_at_100_std
value: 17.821953321018384
- type: nauc_precision_at_10_diff1
value: 14.70509089079217
- type: nauc_precision_at_10_max
value: 25.381887131649716
- type: nauc_precision_at_10_std
value: 5.226419288645675
- type: nauc_precision_at_1_diff1
value: 52.23915787290734
- type: nauc_precision_at_1_max
value: 34.393531787632334
- type: nauc_precision_at_1_std
value: -1.452007661016969
- type: nauc_precision_at_20_diff1
value: 6.312827641507564
- type: nauc_precision_at_20_max
value: 22.483038562271933
- type: nauc_precision_at_20_std
value: 11.368419856892416
- type: nauc_precision_at_3_diff1
value: 33.271443420273606
- type: nauc_precision_at_3_max
value: 33.571078182106675
- type: nauc_precision_at_3_std
value: 4.47382265155717
- type: nauc_precision_at_5_diff1
value: 23.43287104284656
- type: nauc_precision_at_5_max
value: 30.909085068105313
- type: nauc_precision_at_5_std
value: 5.545672049452433
- type: nauc_recall_at_1000_diff1
value: 35.22615594677707
- type: nauc_recall_at_1000_max
value: 52.0710533173532
- type: nauc_recall_at_1000_std
value: 45.17683523786464
- type: nauc_recall_at_100_diff1
value: 36.2169056956332
- type: nauc_recall_at_100_max
value: 35.02435003210817
- type: nauc_recall_at_100_std
value: 15.833632946282508
- type: nauc_recall_at_10_diff1
value: 39.12440292974848
- type: nauc_recall_at_10_max
value: 28.0546011979648
- type: nauc_recall_at_10_std
value: -9.620558638092172
- type: nauc_recall_at_1_diff1
value: 53.98646767288695
- type: nauc_recall_at_1_max
value: 29.45629077638936
- type: nauc_recall_at_1_std
value: -5.621187380771589
- type: nauc_recall_at_20_diff1
value: 36.39254630768161
- type: nauc_recall_at_20_max
value: 29.277856508751967
- type: nauc_recall_at_20_std
value: -3.048007490798412
- type: nauc_recall_at_3_diff1
value: 45.64706642644958
- type: nauc_recall_at_3_max
value: 31.003050159737413
- type: nauc_recall_at_3_std
value: -4.849763876930667
- type: nauc_recall_at_5_diff1
value: 40.918108859971746
- type: nauc_recall_at_5_max
value: 30.69907335071493
- type: nauc_recall_at_5_std
value: -6.1445436251916865
- type: ndcg_at_1
value: 40.327
- type: ndcg_at_10
value: 49.675000000000004
- type: ndcg_at_100
value: 55.364000000000004
- type: ndcg_at_1000
value: 56.992
- type: ndcg_at_20
value: 51.803999999999995
- type: ndcg_at_3
value: 45.227000000000004
- type: ndcg_at_5
value: 47.244
- type: precision_at_1
value: 40.327
- type: precision_at_10
value: 8.826
- type: precision_at_100
value: 1.354
- type: precision_at_1000
value: 0.167
- type: precision_at_20
value: 5.115
- type: precision_at_3
value: 21.303
- type: precision_at_5
value: 14.726
- type: recall_at_1
value: 33.178999999999995
- type: recall_at_10
value: 61.087
- type: recall_at_100
value: 85.099
- type: recall_at_1000
value: 95.14099999999999
- type: recall_at_20
value: 68.623
- type: recall_at_3
value: 48.245
- type: recall_at_5
value: 53.832
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: main_score
value: 44.99
- type: map_at_1
value: 28.089
- type: map_at_10
value: 38.98
- type: map_at_100
value: 40.339000000000006
- type: map_at_1000
value: 40.441
- type: map_at_20
value: 39.702
- type: map_at_3
value: 35.620000000000005
- type: map_at_5
value: 37.657000000000004
- type: mrr_at_1
value: 35.15981735159817
- type: mrr_at_10
value: 44.54075161266937
- type: mrr_at_100
value: 45.435730392436646
- type: mrr_at_1000
value: 45.47673849356812
- type: mrr_at_20
value: 45.05949613726918
- type: mrr_at_3
value: 42.00913242009131
- type: mrr_at_5
value: 43.52739726027392
- type: nauc_map_at_1000_diff1
value: 42.6375513442399
- type: nauc_map_at_1000_max
value: 35.83899956589522
- type: nauc_map_at_1000_std
value: 5.798620017712549
- type: nauc_map_at_100_diff1
value: 42.609712253881504
- type: nauc_map_at_100_max
value: 35.85401871065736
- type: nauc_map_at_100_std
value: 5.829007296755533
- type: nauc_map_at_10_diff1
value: 42.90931172127824
- type: nauc_map_at_10_max
value: 35.46694204511423
- type: nauc_map_at_10_std
value: 5.131477704152026
- type: nauc_map_at_1_diff1
value: 48.066312177855956
- type: nauc_map_at_1_max
value: 30.67745267941573
- type: nauc_map_at_1_std
value: -1.4170737991670943
- type: nauc_map_at_20_diff1
value: 42.730423700784
- type: nauc_map_at_20_max
value: 35.710039616497085
- type: nauc_map_at_20_std
value: 5.363961887475162
- type: nauc_map_at_3_diff1
value: 43.499223646579935
- type: nauc_map_at_3_max
value: 33.872570039621564
- type: nauc_map_at_3_std
value: 3.0787571843453008
- type: nauc_map_at_5_diff1
value: 43.28963642946521
- type: nauc_map_at_5_max
value: 35.18327408279892
- type: nauc_map_at_5_std
value: 4.516467154662473
- type: nauc_mrr_at_1000_diff1
value: 42.71279871641341
- type: nauc_mrr_at_1000_max
value: 37.48825064817496
- type: nauc_mrr_at_1000_std
value: 8.10015025024314
- type: nauc_mrr_at_100_diff1
value: 42.694777404773376
- type: nauc_mrr_at_100_max
value: 37.476741768741086
- type: nauc_mrr_at_100_std
value: 8.11525130417229
- type: nauc_mrr_at_10_diff1
value: 42.954194054560176
- type: nauc_mrr_at_10_max
value: 37.606138578797506
- type: nauc_mrr_at_10_std
value: 8.092519513302399
- type: nauc_mrr_at_1_diff1
value: 48.350790286038574
- type: nauc_mrr_at_1_max
value: 33.97992759739641
- type: nauc_mrr_at_1_std
value: 1.8332987018664093
- type: nauc_mrr_at_20_diff1
value: 42.664983701783044
- type: nauc_mrr_at_20_max
value: 37.47450702110784
- type: nauc_mrr_at_20_std
value: 8.001067634745462
- type: nauc_mrr_at_3_diff1
value: 42.921968602737955
- type: nauc_mrr_at_3_max
value: 37.19599728791262
- type: nauc_mrr_at_3_std
value: 7.4692697422507575
- type: nauc_mrr_at_5_diff1
value: 42.96028546491891
- type: nauc_mrr_at_5_max
value: 37.688350071295915
- type: nauc_mrr_at_5_std
value: 8.213017954012372
- type: nauc_ndcg_at_1000_diff1
value: 40.70763263942397
- type: nauc_ndcg_at_1000_max
value: 37.87768319167602
- type: nauc_ndcg_at_1000_std
value: 9.908807071686738
- type: nauc_ndcg_at_100_diff1
value: 39.97828438221707
- type: nauc_ndcg_at_100_max
value: 37.7723393835996
- type: nauc_ndcg_at_100_std
value: 10.666779466040097
- type: nauc_ndcg_at_10_diff1
value: 41.172233451172936
- type: nauc_ndcg_at_10_max
value: 37.12252131573939
- type: nauc_ndcg_at_10_std
value: 8.273798754436639
- type: nauc_ndcg_at_1_diff1
value: 48.350790286038574
- type: nauc_ndcg_at_1_max
value: 33.97992759739641
- type: nauc_ndcg_at_1_std
value: 1.8332987018664093
- type: nauc_ndcg_at_20_diff1
value: 40.33325895172716
- type: nauc_ndcg_at_20_max
value: 37.36015594019951
- type: nauc_ndcg_at_20_std
value: 8.818556108749302
- type: nauc_ndcg_at_3_diff1
value: 41.652701699747254
- type: nauc_ndcg_at_3_max
value: 35.499109874223294
- type: nauc_ndcg_at_3_std
value: 5.831784865606119
- type: nauc_ndcg_at_5_diff1
value: 41.856346892595475
- type: nauc_ndcg_at_5_max
value: 36.940681835687194
- type: nauc_ndcg_at_5_std
value: 7.507798515093516
- type: nauc_precision_at_1000_diff1
value: -2.4605367806784866
- type: nauc_precision_at_1000_max
value: -0.3538142127162922
- type: nauc_precision_at_1000_std
value: 8.369794961833236
- type: nauc_precision_at_100_diff1
value: -0.34954522096524704
- type: nauc_precision_at_100_max
value: 13.159909603146458
- type: nauc_precision_at_100_std
value: 19.425561514133996
- type: nauc_precision_at_10_diff1
value: 17.048304710148145
- type: nauc_precision_at_10_max
value: 29.816041846806375
- type: nauc_precision_at_10_std
value: 18.358893367243798
- type: nauc_precision_at_1_diff1
value: 48.350790286038574
- type: nauc_precision_at_1_max
value: 33.97992759739641
- type: nauc_precision_at_1_std
value: 1.8332987018664093
- type: nauc_precision_at_20_diff1
value: 10.450903599411344
- type: nauc_precision_at_20_max
value: 25.228916373799127
- type: nauc_precision_at_20_std
value: 18.46893569529936
- type: nauc_precision_at_3_diff1
value: 29.181236567048636
- type: nauc_precision_at_3_max
value: 35.64918262500281
- type: nauc_precision_at_3_std
value: 13.347538222514968
- type: nauc_precision_at_5_diff1
value: 23.693323840550345
- type: nauc_precision_at_5_max
value: 33.972399735191225
- type: nauc_precision_at_5_std
value: 17.107012760554618
- type: nauc_recall_at_1000_diff1
value: 20.297340483227945
- type: nauc_recall_at_1000_max
value: 63.084305970127275
- type: nauc_recall_at_1000_std
value: 63.04655000858784
- type: nauc_recall_at_100_diff1
value: 22.587332148979723
- type: nauc_recall_at_100_max
value: 40.740968468024775
- type: nauc_recall_at_100_std
value: 34.120423684507124
- type: nauc_recall_at_10_diff1
value: 33.361195948673675
- type: nauc_recall_at_10_max
value: 37.1411402410262
- type: nauc_recall_at_10_std
value: 13.475407196166259
- type: nauc_recall_at_1_diff1
value: 48.066312177855956
- type: nauc_recall_at_1_max
value: 30.67745267941573
- type: nauc_recall_at_1_std
value: -1.4170737991670943
- type: nauc_recall_at_20_diff1
value: 28.703982984383984
- type: nauc_recall_at_20_max
value: 37.32929431193496
- type: nauc_recall_at_20_std
value: 16.139135347989903
- type: nauc_recall_at_3_diff1
value: 36.53346179134789
- type: nauc_recall_at_3_max
value: 34.11397914899309
- type: nauc_recall_at_3_std
value: 7.19358019807132
- type: nauc_recall_at_5_diff1
value: 36.24058894947452
- type: nauc_recall_at_5_max
value: 37.00990358651097
- type: nauc_recall_at_5_std
value: 11.074645476821619
- type: ndcg_at_1
value: 35.160000000000004
- type: ndcg_at_10
value: 44.99
- type: ndcg_at_100
value: 50.661
- type: ndcg_at_1000
value: 52.599
- type: ndcg_at_20
value: 47.154
- type: ndcg_at_3
value: 39.843
- type: ndcg_at_5
value: 42.486000000000004
- type: precision_at_1
value: 35.160000000000004
- type: precision_at_10
value: 8.299
- type: precision_at_100
value: 1.2850000000000001
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_20
value: 4.84
- type: precision_at_3
value: 19.178
- type: precision_at_5
value: 13.927
- type: recall_at_1
value: 28.089
- type: recall_at_10
value: 57.158
- type: recall_at_100
value: 81.461
- type: recall_at_1000
value: 94.46900000000001
- type: recall_at_20
value: 64.927
- type: recall_at_3
value: 42.775999999999996
- type: recall_at_5
value: 49.719
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: CQADupstackRetrieval is a combined dataset
metrics:
- type: main_score
value: 44.989166666666655
- type: ndcg_at_10
value: 44.989166666666655
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: main_score
value: 39.586
- type: map_at_1
value: 27.301
- type: map_at_10
value: 35.022
- type: map_at_100
value: 36.061
- type: map_at_1000
value: 36.146
- type: map_at_20
value: 35.608000000000004
- type: map_at_3
value: 32.978
- type: map_at_5
value: 33.994
- type: mrr_at_1
value: 30.67484662576687
- type: mrr_at_10
value: 38.1696124257474
- type: mrr_at_100
value: 38.99730898994137
- type: mrr_at_1000
value: 39.049871007408136
- type: mrr_at_20
value: 38.62424051396064
- type: mrr_at_3
value: 36.40081799591004
- type: mrr_at_5
value: 37.23670756646219
- type: nauc_map_at_1000_diff1
value: 50.4395097150819
- type: nauc_map_at_1000_max
value: 42.36231476768413
- type: nauc_map_at_1000_std
value: 1.0739414045485742
- type: nauc_map_at_100_diff1
value: 50.4253775421283
- type: nauc_map_at_100_max
value: 42.34508969348633
- type: nauc_map_at_100_std
value: 1.0590256535050135
- type: nauc_map_at_10_diff1
value: 50.74196619464362
- type: nauc_map_at_10_max
value: 42.354326434590284
- type: nauc_map_at_10_std
value: 0.6330167542705694
- type: nauc_map_at_1_diff1
value: 55.7404810490963
- type: nauc_map_at_1_max
value: 40.7676941648045
- type: nauc_map_at_1_std
value: -5.021772566610674
- type: nauc_map_at_20_diff1
value: 50.39792463598886
- type: nauc_map_at_20_max
value: 42.25768760228577
- type: nauc_map_at_20_std
value: 0.8979017700131807
- type: nauc_map_at_3_diff1
value: 51.53267996170815
- type: nauc_map_at_3_max
value: 41.78801756883417
- type: nauc_map_at_3_std
value: -0.6652383024396911
- type: nauc_map_at_5_diff1
value: 50.992783683271504
- type: nauc_map_at_5_max
value: 41.8607977828188
- type: nauc_map_at_5_std
value: 0.3484379897869807
- type: nauc_mrr_at_1000_diff1
value: 48.952907124445126
- type: nauc_mrr_at_1000_max
value: 42.93563741482114
- type: nauc_mrr_at_1000_std
value: 3.0791495753556424
- type: nauc_mrr_at_100_diff1
value: 48.941921107360805
- type: nauc_mrr_at_100_max
value: 42.94419657374061
- type: nauc_mrr_at_100_std
value: 3.075397087180154
- type: nauc_mrr_at_10_diff1
value: 49.098926306303056
- type: nauc_mrr_at_10_max
value: 42.941857820499806
- type: nauc_mrr_at_10_std
value: 2.8184474174054372
- type: nauc_mrr_at_1_diff1
value: 54.428109877009334
- type: nauc_mrr_at_1_max
value: 42.50273386972492
- type: nauc_mrr_at_1_std
value: -2.1811826216412187
- type: nauc_mrr_at_20_diff1
value: 48.82502192775839
- type: nauc_mrr_at_20_max
value: 42.92227277257095
- type: nauc_mrr_at_20_std
value: 2.975812634368533
- type: nauc_mrr_at_3_diff1
value: 49.440009227591176
- type: nauc_mrr_at_3_max
value: 42.95503176290712
- type: nauc_mrr_at_3_std
value: 2.2997128945013796
- type: nauc_mrr_at_5_diff1
value: 49.09846782701398
- type: nauc_mrr_at_5_max
value: 42.51449168285772
- type: nauc_mrr_at_5_std
value: 2.7785816484421297
- type: nauc_ndcg_at_1000_diff1
value: 48.14680758187888
- type: nauc_ndcg_at_1000_max
value: 43.57465718500695
- type: nauc_ndcg_at_1000_std
value: 5.287435676678261
- type: nauc_ndcg_at_100_diff1
value: 47.66081605743284
- type: nauc_ndcg_at_100_max
value: 43.28156751251163
- type: nauc_ndcg_at_100_std
value: 4.959626409663624
- type: nauc_ndcg_at_10_diff1
value: 48.25075619623878
- type: nauc_ndcg_at_10_max
value: 43.00688660666578
- type: nauc_ndcg_at_10_std
value: 3.2319193368891637
- type: nauc_ndcg_at_1_diff1
value: 54.428109877009334
- type: nauc_ndcg_at_1_max
value: 42.50273386972492
- type: nauc_ndcg_at_1_std
value: -2.1811826216412187
- type: nauc_ndcg_at_20_diff1
value: 47.1943098627403
- type: nauc_ndcg_at_20_max
value: 42.86954491768707
- type: nauc_ndcg_at_20_std
value: 4.08583080150737
- type: nauc_ndcg_at_3_diff1
value: 49.32681523192246
- type: nauc_ndcg_at_3_max
value: 42.46898641470274
- type: nauc_ndcg_at_3_std
value: 1.7416962407725236
- type: nauc_ndcg_at_5_diff1
value: 48.59647012439291
- type: nauc_ndcg_at_5_max
value: 42.07098889846439
- type: nauc_ndcg_at_5_std
value: 2.979621233356828
- type: nauc_precision_at_1000_diff1
value: -1.7366334161587105
- type: nauc_precision_at_1000_max
value: 17.70969166396819
- type: nauc_precision_at_1000_std
value: 17.50619975322144
- type: nauc_precision_at_100_diff1
value: 10.082579982582155
- type: nauc_precision_at_100_max
value: 28.024893516091776
- type: nauc_precision_at_100_std
value: 18.41413013357596
- type: nauc_precision_at_10_diff1
value: 28.796167732373657
- type: nauc_precision_at_10_max
value: 40.37340024485382
- type: nauc_precision_at_10_std
value: 13.718572711091733
- type: nauc_precision_at_1_diff1
value: 54.428109877009334
- type: nauc_precision_at_1_max
value: 42.50273386972492
- type: nauc_precision_at_1_std
value: -2.1811826216412187
- type: nauc_precision_at_20_diff1
value: 19.82691920771315
- type: nauc_precision_at_20_max
value: 34.45075390159975
- type: nauc_precision_at_20_std
value: 16.410812072348058
- type: nauc_precision_at_3_diff1
value: 40.85430254962678
- type: nauc_precision_at_3_max
value: 43.63016056067074
- type: nauc_precision_at_3_std
value: 9.322014634477581
- type: nauc_precision_at_5_diff1
value: 35.830272848975795
- type: nauc_precision_at_5_max
value: 41.30047691620363
- type: nauc_precision_at_5_std
value: 13.145693992266565
- type: nauc_recall_at_1000_diff1
value: 35.532000545890504
- type: nauc_recall_at_1000_max
value: 50.714223194510325
- type: nauc_recall_at_1000_std
value: 43.09037309139045
- type: nauc_recall_at_100_diff1
value: 35.11024488875192
- type: nauc_recall_at_100_max
value: 43.0874566265193
- type: nauc_recall_at_100_std
value: 19.70628521846854
- type: nauc_recall_at_10_diff1
value: 40.36203726741153
- type: nauc_recall_at_10_max
value: 42.581482582576726
- type: nauc_recall_at_10_std
value: 8.642553371022348
- type: nauc_recall_at_1_diff1
value: 55.7404810490963
- type: nauc_recall_at_1_max
value: 40.7676941648045
- type: nauc_recall_at_1_std
value: -5.021772566610674
- type: nauc_recall_at_20_diff1
value: 35.97348868186562
- type: nauc_recall_at_20_max
value: 41.82695933305065
- type: nauc_recall_at_20_std
value: 11.444957541593585
- type: nauc_recall_at_3_diff1
value: 44.20020470014979
- type: nauc_recall_at_3_max
value: 40.84130855296979
- type: nauc_recall_at_3_std
value: 5.004883338558809
- type: nauc_recall_at_5_diff1
value: 42.08756885472078
- type: nauc_recall_at_5_max
value: 39.90323783606852
- type: nauc_recall_at_5_std
value: 8.085182534171127
- type: ndcg_at_1
value: 30.675
- type: ndcg_at_10
value: 39.586
- type: ndcg_at_100
value: 44.737
- type: ndcg_at_1000
value: 46.863
- type: ndcg_at_20
value: 41.495
- type: ndcg_at_3
value: 35.8
- type: ndcg_at_5
value: 37.3
- type: precision_at_1
value: 30.675
- type: precision_at_10
value: 6.196
- type: precision_at_100
value: 0.9570000000000001
- type: precision_at_1000
value: 0.122
- type: precision_at_20
value: 3.6350000000000002
- type: precision_at_3
value: 15.337
- type: precision_at_5
value: 10.337
- type: recall_at_1
value: 27.301
- type: recall_at_10
value: 50.346999999999994
- type: recall_at_100
value: 74.459
- type: recall_at_1000
value: 90.018
- type: recall_at_20
value: 57.473
- type: recall_at_3
value: 39.672000000000004
- type: recall_at_5
value: 43.383
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 32.842
- type: map_at_1
value: 19.527
- type: map_at_10
value: 27.711999999999996
- type: map_at_100
value: 28.98
- type: map_at_1000
value: 29.108
- type: map_at_20
value: 28.407
- type: map_at_3
value: 25.023
- type: map_at_5
value: 26.528000000000002
- type: mrr_at_1
value: 23.675154852030282
- type: mrr_at_10
value: 31.810676323752784
- type: mrr_at_100
value: 32.788970614380716
- type: mrr_at_1000
value: 32.86028758975889
- type: mrr_at_20
value: 32.35935756676056
- type: mrr_at_3
value: 29.41615049323246
- type: mrr_at_5
value: 30.785730672172633
- type: nauc_map_at_1000_diff1
value: 35.597766688968015
- type: nauc_map_at_1000_max
value: 26.295790183159845
- type: nauc_map_at_1000_std
value: -0.04229904865958209
- type: nauc_map_at_100_diff1
value: 35.568782622469925
- type: nauc_map_at_100_max
value: 26.27850795471227
- type: nauc_map_at_100_std
value: -0.04944875782811099
- type: nauc_map_at_10_diff1
value: 35.63760937893694
- type: nauc_map_at_10_max
value: 26.130094042028233
- type: nauc_map_at_10_std
value: -0.6896882769027717
- type: nauc_map_at_1_diff1
value: 41.759098341890976
- type: nauc_map_at_1_max
value: 23.918885427783326
- type: nauc_map_at_1_std
value: -2.1383574897865074
- type: nauc_map_at_20_diff1
value: 35.55706530442612
- type: nauc_map_at_20_max
value: 26.23339626569677
- type: nauc_map_at_20_std
value: -0.162172033918129
- type: nauc_map_at_3_diff1
value: 37.22183376355153
- type: nauc_map_at_3_max
value: 25.770512522122186
- type: nauc_map_at_3_std
value: -1.3105892187778403
- type: nauc_map_at_5_diff1
value: 36.205913161663084
- type: nauc_map_at_5_max
value: 25.953300641502064
- type: nauc_map_at_5_std
value: -0.7987363137547906
- type: nauc_mrr_at_1000_diff1
value: 34.864016559617646
- type: nauc_mrr_at_1000_max
value: 26.8689525348564
- type: nauc_mrr_at_1000_std
value: -0.5839923973914446
- type: nauc_mrr_at_100_diff1
value: 34.83820469598538
- type: nauc_mrr_at_100_max
value: 26.864669056231282
- type: nauc_mrr_at_100_std
value: -0.5785645654158633
- type: nauc_mrr_at_10_diff1
value: 34.81868397381981
- type: nauc_mrr_at_10_max
value: 26.79988560460627
- type: nauc_mrr_at_10_std
value: -1.1113808365827318
- type: nauc_mrr_at_1_diff1
value: 40.0281507903504
- type: nauc_mrr_at_1_max
value: 25.036735941806583
- type: nauc_mrr_at_1_std
value: -2.508700799268523
- type: nauc_mrr_at_20_diff1
value: 34.81954537357966
- type: nauc_mrr_at_20_max
value: 26.877673033315453
- type: nauc_mrr_at_20_std
value: -0.6706028107452919
- type: nauc_mrr_at_3_diff1
value: 35.87313782549696
- type: nauc_mrr_at_3_max
value: 26.776261693392335
- type: nauc_mrr_at_3_std
value: -1.8010591328112908
- type: nauc_mrr_at_5_diff1
value: 35.31673912159536
- type: nauc_mrr_at_5_max
value: 26.78720786106881
- type: nauc_mrr_at_5_std
value: -1.3096326953900546
- type: nauc_ndcg_at_1000_diff1
value: 33.43105244339048
- type: nauc_ndcg_at_1000_max
value: 27.52195065724684
- type: nauc_ndcg_at_1000_std
value: 2.8376056562675744
- type: nauc_ndcg_at_100_diff1
value: 32.90916846420573
- type: nauc_ndcg_at_100_max
value: 27.27161017736065
- type: nauc_ndcg_at_100_std
value: 2.8703122625872126
- type: nauc_ndcg_at_10_diff1
value: 33.12714979317447
- type: nauc_ndcg_at_10_max
value: 26.67762031747992
- type: nauc_ndcg_at_10_std
value: -0.1341345572932233
- type: nauc_ndcg_at_1_diff1
value: 40.0281507903504
- type: nauc_ndcg_at_1_max
value: 25.036735941806583
- type: nauc_ndcg_at_1_std
value: -2.508700799268523
- type: nauc_ndcg_at_20_diff1
value: 32.891656138688546
- type: nauc_ndcg_at_20_max
value: 26.991976404027163
- type: nauc_ndcg_at_20_std
value: 1.6050741106677746
- type: nauc_ndcg_at_3_diff1
value: 35.576958713955484
- type: nauc_ndcg_at_3_max
value: 26.41687745899445
- type: nauc_ndcg_at_3_std
value: -1.5326687067002291
- type: nauc_ndcg_at_5_diff1
value: 34.27335619067276
- type: nauc_ndcg_at_5_max
value: 26.479515412084208
- type: nauc_ndcg_at_5_std
value: -0.5597648935666003
- type: nauc_precision_at_1000_diff1
value: -0.18660914306684007
- type: nauc_precision_at_1000_max
value: 7.268255385799229
- type: nauc_precision_at_1000_std
value: -0.1968875268478991
- type: nauc_precision_at_100_diff1
value: 7.386701205054449
- type: nauc_precision_at_100_max
value: 15.477735603019607
- type: nauc_precision_at_100_std
value: 4.753153414679307
- type: nauc_precision_at_10_diff1
value: 18.4668296945938
- type: nauc_precision_at_10_max
value: 25.457144217779597
- type: nauc_precision_at_10_std
value: 0.40165373733963605
- type: nauc_precision_at_1_diff1
value: 40.0281507903504
- type: nauc_precision_at_1_max
value: 25.036735941806583
- type: nauc_precision_at_1_std
value: -2.508700799268523
- type: nauc_precision_at_20_diff1
value: 14.751135844289335
- type: nauc_precision_at_20_max
value: 22.763373329576293
- type: nauc_precision_at_20_std
value: 4.360731801761864
- type: nauc_precision_at_3_diff1
value: 28.154753888265393
- type: nauc_precision_at_3_max
value: 27.838427033527147
- type: nauc_precision_at_3_std
value: -1.0042621266717804
- type: nauc_precision_at_5_diff1
value: 23.549026872711423
- type: nauc_precision_at_5_max
value: 27.192214745385044
- type: nauc_precision_at_5_std
value: 0.4455206110174471
- type: nauc_recall_at_1000_diff1
value: 17.905404210815632
- type: nauc_recall_at_1000_max
value: 32.8674418535776
- type: nauc_recall_at_1000_std
value: 35.187050415735435
- type: nauc_recall_at_100_diff1
value: 20.903609751984757
- type: nauc_recall_at_100_max
value: 27.180306691518364
- type: nauc_recall_at_100_std
value: 17.553030959393297
- type: nauc_recall_at_10_diff1
value: 25.615147693464387
- type: nauc_recall_at_10_max
value: 25.97062699453565
- type: nauc_recall_at_10_std
value: 2.2181702899826576
- type: nauc_recall_at_1_diff1
value: 41.759098341890976
- type: nauc_recall_at_1_max
value: 23.918885427783326
- type: nauc_recall_at_1_std
value: -2.1383574897865074
- type: nauc_recall_at_20_diff1
value: 23.922775940094386
- type: nauc_recall_at_20_max
value: 26.384627814902785
- type: nauc_recall_at_20_std
value: 7.944532403561578
- type: nauc_recall_at_3_diff1
value: 32.26543270634743
- type: nauc_recall_at_3_max
value: 26.36357710828272
- type: nauc_recall_at_3_std
value: -0.42723331708340706
- type: nauc_recall_at_5_diff1
value: 29.080464141763336
- type: nauc_recall_at_5_max
value: 25.81238438303652
- type: nauc_recall_at_5_std
value: 1.1649311168287726
- type: ndcg_at_1
value: 23.674999999999997
- type: ndcg_at_10
value: 32.842
- type: ndcg_at_100
value: 38.64
- type: ndcg_at_1000
value: 41.367
- type: ndcg_at_20
value: 35.032999999999994
- type: ndcg_at_3
value: 28.166000000000004
- type: ndcg_at_5
value: 30.407
- type: precision_at_1
value: 23.674999999999997
- type: precision_at_10
value: 6.005
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.146
- type: precision_at_20
value: 3.6580000000000004
- type: precision_at_3
value: 13.352
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 19.527
- type: recall_at_10
value: 44.096999999999994
- type: recall_at_100
value: 69.962
- type: recall_at_1000
value: 89.035
- type: recall_at_20
value: 52.166000000000004
- type: recall_at_3
value: 30.946
- type: recall_at_5
value: 36.789
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: main_score
value: 46.54
- type: map_at_1
value: 29.953999999999997
- type: map_at_10
value: 40.742
- type: map_at_100
value: 41.964
- type: map_at_1000
value: 42.059999999999995
- type: map_at_20
value: 41.426
- type: map_at_3
value: 37.378
- type: map_at_5
value: 39.267
- type: mrr_at_1
value: 34.701492537313435
- type: mrr_at_10
value: 44.29978085761664
- type: mrr_at_100
value: 45.205551401915486
- type: mrr_at_1000
value: 45.24735017384963
- type: mrr_at_20
value: 44.85338423755729
- type: mrr_at_3
value: 41.57338308457707
- type: mrr_at_5
value: 43.19185323383077
- type: nauc_map_at_1000_diff1
value: 48.45170522932164
- type: nauc_map_at_1000_max
value: 31.544164363591204
- type: nauc_map_at_1000_std
value: 0.8661088818146858
- type: nauc_map_at_100_diff1
value: 48.47347800061323
- type: nauc_map_at_100_max
value: 31.568637596620313
- type: nauc_map_at_100_std
value: 0.9252699336843858
- type: nauc_map_at_10_diff1
value: 48.64849891585432
- type: nauc_map_at_10_max
value: 31.40371265579746
- type: nauc_map_at_10_std
value: 0.7088016563713089
- type: nauc_map_at_1_diff1
value: 53.57918993108331
- type: nauc_map_at_1_max
value: 31.392632653740993
- type: nauc_map_at_1_std
value: -2.857306170463933
- type: nauc_map_at_20_diff1
value: 48.49084353023969
- type: nauc_map_at_20_max
value: 31.470313174779374
- type: nauc_map_at_20_std
value: 0.8950296035234309
- type: nauc_map_at_3_diff1
value: 49.273481161619806
- type: nauc_map_at_3_max
value: 31.101471509782826
- type: nauc_map_at_3_std
value: -0.886510096257905
- type: nauc_map_at_5_diff1
value: 48.85344288229106
- type: nauc_map_at_5_max
value: 31.32633663238284
- type: nauc_map_at_5_std
value: -0.44752909698881177
- type: nauc_mrr_at_1000_diff1
value: 46.27593166906613
- type: nauc_mrr_at_1000_max
value: 31.637594372116336
- type: nauc_mrr_at_1000_std
value: 0.8444917550670064
- type: nauc_mrr_at_100_diff1
value: 46.27161543033672
- type: nauc_mrr_at_100_max
value: 31.64330655339695
- type: nauc_mrr_at_100_std
value: 0.8717446416398773
- type: nauc_mrr_at_10_diff1
value: 46.100348481312864
- type: nauc_mrr_at_10_max
value: 31.594271897882237
- type: nauc_mrr_at_10_std
value: 0.8807168907688873
- type: nauc_mrr_at_1_diff1
value: 51.35163098909763
- type: nauc_mrr_at_1_max
value: 31.99084441327899
- type: nauc_mrr_at_1_std
value: -2.688594880742662
- type: nauc_mrr_at_20_diff1
value: 46.18178546174727
- type: nauc_mrr_at_20_max
value: 31.639111674119448
- type: nauc_mrr_at_20_std
value: 0.9855008641374622
- type: nauc_mrr_at_3_diff1
value: 46.307484835305864
- type: nauc_mrr_at_3_max
value: 31.35563850804847
- type: nauc_mrr_at_3_std
value: -0.3419536587707561
- type: nauc_mrr_at_5_diff1
value: 46.17646418781234
- type: nauc_mrr_at_5_max
value: 31.313474270239833
- type: nauc_mrr_at_5_std
value: -0.08656550526568331
- type: nauc_ndcg_at_1000_diff1
value: 46.12095795101613
- type: nauc_ndcg_at_1000_max
value: 31.989083597726314
- type: nauc_ndcg_at_1000_std
value: 3.2965704707660763
- type: nauc_ndcg_at_100_diff1
value: 46.05376249841318
- type: nauc_ndcg_at_100_max
value: 32.39195988574972
- type: nauc_ndcg_at_100_std
value: 4.518018135593347
- type: nauc_ndcg_at_10_diff1
value: 46.133631183744875
- type: nauc_ndcg_at_10_max
value: 31.45358876172339
- type: nauc_ndcg_at_10_std
value: 3.4254370918871055
- type: nauc_ndcg_at_1_diff1
value: 51.35163098909763
- type: nauc_ndcg_at_1_max
value: 31.99084441327899
- type: nauc_ndcg_at_1_std
value: -2.688594880742662
- type: nauc_ndcg_at_20_diff1
value: 45.94584949766954
- type: nauc_ndcg_at_20_max
value: 31.689777515111295
- type: nauc_ndcg_at_20_std
value: 4.189082428922442
- type: nauc_ndcg_at_3_diff1
value: 46.5057835389752
- type: nauc_ndcg_at_3_max
value: 30.941407592082047
- type: nauc_ndcg_at_3_std
value: -0.042473944857831535
- type: nauc_ndcg_at_5_diff1
value: 46.369027395136136
- type: nauc_ndcg_at_5_max
value: 31.057841776505352
- type: nauc_ndcg_at_5_std
value: 0.6878993420489522
- type: nauc_precision_at_1000_diff1
value: -17.30759714093202
- type: nauc_precision_at_1000_max
value: -4.441155558458858
- type: nauc_precision_at_1000_std
value: 1.5537300718220326
- type: nauc_precision_at_100_diff1
value: -7.18920438222021
- type: nauc_precision_at_100_max
value: 8.017878121399253
- type: nauc_precision_at_100_std
value: 11.357132919349102
- type: nauc_precision_at_10_diff1
value: 15.202451884794076
- type: nauc_precision_at_10_max
value: 19.077295902881417
- type: nauc_precision_at_10_std
value: 9.885526867355805
- type: nauc_precision_at_1_diff1
value: 51.35163098909763
- type: nauc_precision_at_1_max
value: 31.99084441327899
- type: nauc_precision_at_1_std
value: -2.688594880742662
- type: nauc_precision_at_20_diff1
value: 6.827461091494899
- type: nauc_precision_at_20_max
value: 15.27268633497114
- type: nauc_precision_at_20_std
value: 11.515826649647384
- type: nauc_precision_at_3_diff1
value: 31.043021807472027
- type: nauc_precision_at_3_max
value: 26.22457157531548
- type: nauc_precision_at_3_std
value: 1.788215968301994
- type: nauc_precision_at_5_diff1
value: 25.030185818513235
- type: nauc_precision_at_5_max
value: 23.680129160901537
- type: nauc_precision_at_5_std
value: 4.303018899688115
- type: nauc_recall_at_1000_diff1
value: 28.68826642607512
- type: nauc_recall_at_1000_max
value: 42.33849804103852
- type: nauc_recall_at_1000_std
value: 42.67413575876864
- type: nauc_recall_at_100_diff1
value: 36.51494878715
- type: nauc_recall_at_100_max
value: 37.4764995034434
- type: nauc_recall_at_100_std
value: 28.295671266661017
- type: nauc_recall_at_10_diff1
value: 39.416721111463524
- type: nauc_recall_at_10_max
value: 29.95985608454179
- type: nauc_recall_at_10_std
value: 12.423335839786201
- type: nauc_recall_at_1_diff1
value: 53.57918993108331
- type: nauc_recall_at_1_max
value: 31.392632653740993
- type: nauc_recall_at_1_std
value: -2.857306170463933
- type: nauc_recall_at_20_diff1
value: 38.228803480194046
- type: nauc_recall_at_20_max
value: 30.87261362975955
- type: nauc_recall_at_20_std
value: 16.977113091834095
- type: nauc_recall_at_3_diff1
value: 43.154348566653155
- type: nauc_recall_at_3_max
value: 29.54536633744803
- type: nauc_recall_at_3_std
value: 2.02842672250621
- type: nauc_recall_at_5_diff1
value: 41.00436246072242
- type: nauc_recall_at_5_max
value: 29.413569555348023
- type: nauc_recall_at_5_std
value: 3.845214021958289
- type: ndcg_at_1
value: 34.701
- type: ndcg_at_10
value: 46.54
- type: ndcg_at_100
value: 51.754999999999995
- type: ndcg_at_1000
value: 53.71
- type: ndcg_at_20
value: 48.679
- type: ndcg_at_3
value: 40.892
- type: ndcg_at_5
value: 43.595
- type: precision_at_1
value: 34.701
- type: precision_at_10
value: 8.004
- type: precision_at_100
value: 1.185
- type: precision_at_1000
value: 0.145
- type: precision_at_20
value: 4.632
- type: precision_at_3
value: 18.719
- type: precision_at_5
value: 13.245999999999999
- type: recall_at_1
value: 29.953999999999997
- type: recall_at_10
value: 60.246
- type: recall_at_100
value: 82.128
- type: recall_at_1000
value: 95.622
- type: recall_at_20
value: 67.756
- type: recall_at_3
value: 45.096000000000004
- type: recall_at_5
value: 51.9
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: main_score
value: 44.718999999999994
- type: map_at_1
value: 28.383999999999997
- type: map_at_10
value: 38.422
- type: map_at_100
value: 40.058
- type: map_at_1000
value: 40.276
- type: map_at_20
value: 39.301
- type: map_at_3
value: 35.205
- type: map_at_5
value: 36.803999999999995
- type: mrr_at_1
value: 33.59683794466403
- type: mrr_at_10
value: 42.837536859275986
- type: mrr_at_100
value: 43.7501703455481
- type: mrr_at_1000
value: 43.79258407771123
- type: mrr_at_20
value: 43.36044710445095
- type: mrr_at_3
value: 40.15151515151516
- type: mrr_at_5
value: 41.74242424242425
- type: nauc_map_at_1000_diff1
value: 47.934826596875304
- type: nauc_map_at_1000_max
value: 32.39759438116062
- type: nauc_map_at_1000_std
value: 0.9489007346763054
- type: nauc_map_at_100_diff1
value: 47.94844822157888
- type: nauc_map_at_100_max
value: 32.51485845519537
- type: nauc_map_at_100_std
value: 0.8094339925545622
- type: nauc_map_at_10_diff1
value: 48.251456404874645
- type: nauc_map_at_10_max
value: 31.412906399154245
- type: nauc_map_at_10_std
value: -0.7024825737369933
- type: nauc_map_at_1_diff1
value: 55.81906101970174
- type: nauc_map_at_1_max
value: 31.811715334193796
- type: nauc_map_at_1_std
value: -6.17056859281584
- type: nauc_map_at_20_diff1
value: 47.80902650237369
- type: nauc_map_at_20_max
value: 32.22465403023091
- type: nauc_map_at_20_std
value: 0.20706526946705656
- type: nauc_map_at_3_diff1
value: 49.97333984346632
- type: nauc_map_at_3_max
value: 31.58195498640799
- type: nauc_map_at_3_std
value: -2.577539707727459
- type: nauc_map_at_5_diff1
value: 49.40005767350608
- type: nauc_map_at_5_max
value: 30.998435600377434
- type: nauc_map_at_5_std
value: -2.1231771618690307
- type: nauc_mrr_at_1000_diff1
value: 46.86811371969663
- type: nauc_mrr_at_1000_max
value: 31.25147138171024
- type: nauc_mrr_at_1000_std
value: 1.9954422477585918
- type: nauc_mrr_at_100_diff1
value: 46.855870345882195
- type: nauc_mrr_at_100_max
value: 31.263524035665966
- type: nauc_mrr_at_100_std
value: 2.0160751193806568
- type: nauc_mrr_at_10_diff1
value: 46.93294772825783
- type: nauc_mrr_at_10_max
value: 30.927002048701663
- type: nauc_mrr_at_10_std
value: 1.6538220080908224
- type: nauc_mrr_at_1_diff1
value: 52.416386548395664
- type: nauc_mrr_at_1_max
value: 32.28582003787206
- type: nauc_mrr_at_1_std
value: -2.154991145714492
- type: nauc_mrr_at_20_diff1
value: 46.71796185319694
- type: nauc_mrr_at_20_max
value: 31.16219902794994
- type: nauc_mrr_at_20_std
value: 1.8590646572728409
- type: nauc_mrr_at_3_diff1
value: 47.697100317669914
- type: nauc_mrr_at_3_max
value: 30.821806030159383
- type: nauc_mrr_at_3_std
value: 1.1927626358099177
- type: nauc_mrr_at_5_diff1
value: 47.065272061365704
- type: nauc_mrr_at_5_max
value: 30.299230962805023
- type: nauc_mrr_at_5_std
value: 1.3225842862629529
- type: nauc_ndcg_at_1000_diff1
value: 45.20612583136058
- type: nauc_ndcg_at_1000_max
value: 33.51931869947315
- type: nauc_ndcg_at_1000_std
value: 4.923707509620363
- type: nauc_ndcg_at_100_diff1
value: 44.76206243393775
- type: nauc_ndcg_at_100_max
value: 33.57771606755598
- type: nauc_ndcg_at_100_std
value: 5.30915563331338
- type: nauc_ndcg_at_10_diff1
value: 45.12714032463827
- type: nauc_ndcg_at_10_max
value: 30.351909495610492
- type: nauc_ndcg_at_10_std
value: 2.3972947289996873
- type: nauc_ndcg_at_1_diff1
value: 52.416386548395664
- type: nauc_ndcg_at_1_max
value: 32.28582003787206
- type: nauc_ndcg_at_1_std
value: -2.154991145714492
- type: nauc_ndcg_at_20_diff1
value: 44.20281844000005
- type: nauc_ndcg_at_20_max
value: 32.14112739396226
- type: nauc_ndcg_at_20_std
value: 3.3971385462591916
- type: nauc_ndcg_at_3_diff1
value: 47.0633767031858
- type: nauc_ndcg_at_3_max
value: 31.032896053733435
- type: nauc_ndcg_at_3_std
value: 0.6827544906310201
- type: nauc_ndcg_at_5_diff1
value: 46.735352294106484
- type: nauc_ndcg_at_5_max
value: 29.784992270528544
- type: nauc_ndcg_at_5_std
value: 0.8685943819516141
- type: nauc_precision_at_1000_diff1
value: -12.223330179860852
- type: nauc_precision_at_1000_max
value: -9.266492213777273
- type: nauc_precision_at_1000_std
value: 19.0569899587788
- type: nauc_precision_at_100_diff1
value: -5.803751085072067
- type: nauc_precision_at_100_max
value: 3.448932057044294
- type: nauc_precision_at_100_std
value: 23.470863527030627
- type: nauc_precision_at_10_diff1
value: 8.887357341361907
- type: nauc_precision_at_10_max
value: 18.67165390928126
- type: nauc_precision_at_10_std
value: 19.158543337955404
- type: nauc_precision_at_1_diff1
value: 52.416386548395664
- type: nauc_precision_at_1_max
value: 32.28582003787206
- type: nauc_precision_at_1_std
value: -2.154991145714492
- type: nauc_precision_at_20_diff1
value: 0.942496138409553
- type: nauc_precision_at_20_max
value: 18.86957127610774
- type: nauc_precision_at_20_std
value: 24.075503903246496
- type: nauc_precision_at_3_diff1
value: 28.15363877307106
- type: nauc_precision_at_3_max
value: 27.064928137991824
- type: nauc_precision_at_3_std
value: 8.632807104504753
- type: nauc_precision_at_5_diff1
value: 20.805862332497973
- type: nauc_precision_at_5_max
value: 21.420201475758404
- type: nauc_precision_at_5_std
value: 12.380239645425714
- type: nauc_recall_at_1000_diff1
value: 18.478341468055547
- type: nauc_recall_at_1000_max
value: 56.293560115074506
- type: nauc_recall_at_1000_std
value: 64.31607185065428
- type: nauc_recall_at_100_diff1
value: 26.737267337771886
- type: nauc_recall_at_100_max
value: 38.011889141496326
- type: nauc_recall_at_100_std
value: 30.44904690114732
- type: nauc_recall_at_10_diff1
value: 35.22772732735716
- type: nauc_recall_at_10_max
value: 26.000054115159486
- type: nauc_recall_at_10_std
value: 5.174264254271206
- type: nauc_recall_at_1_diff1
value: 55.81906101970174
- type: nauc_recall_at_1_max
value: 31.811715334193796
- type: nauc_recall_at_1_std
value: -6.17056859281584
- type: nauc_recall_at_20_diff1
value: 30.48493302415641
- type: nauc_recall_at_20_max
value: 31.05487040370753
- type: nauc_recall_at_20_std
value: 10.319948318834136
- type: nauc_recall_at_3_diff1
value: 43.12289512340243
- type: nauc_recall_at_3_max
value: 28.176279771026135
- type: nauc_recall_at_3_std
value: -0.1775154523381921
- type: nauc_recall_at_5_diff1
value: 40.9934933741234
- type: nauc_recall_at_5_max
value: 25.569156290584733
- type: nauc_recall_at_5_std
value: 0.21166696686855038
- type: ndcg_at_1
value: 33.597
- type: ndcg_at_10
value: 44.718999999999994
- type: ndcg_at_100
value: 50.324000000000005
- type: ndcg_at_1000
value: 52.468
- type: ndcg_at_20
value: 46.822
- type: ndcg_at_3
value: 39.558
- type: ndcg_at_5
value: 41.827999999999996
- type: precision_at_1
value: 33.597
- type: precision_at_10
value: 8.735
- type: precision_at_100
value: 1.6420000000000001
- type: precision_at_1000
value: 0.246
- type: precision_at_20
value: 5.375
- type: precision_at_3
value: 18.511
- type: precision_at_5
value: 13.399
- type: recall_at_1
value: 28.383999999999997
- type: recall_at_10
value: 56.425000000000004
- type: recall_at_100
value: 82.01899999999999
- type: recall_at_1000
value: 95.285
- type: recall_at_20
value: 64.615
- type: recall_at_3
value: 42.171
- type: recall_at_5
value: 48.296
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 38.269999999999996
- type: map_at_1
value: 25.324999999999996
- type: map_at_10
value: 33.263
- type: map_at_100
value: 34.304
- type: map_at_1000
value: 34.394000000000005
- type: map_at_20
value: 33.827
- type: map_at_3
value: 30.259999999999998
- type: map_at_5
value: 31.832
- type: mrr_at_1
value: 27.171903881700555
- type: mrr_at_10
value: 35.334991051257234
- type: mrr_at_100
value: 36.251283465952355
- type: mrr_at_1000
value: 36.316236092511055
- type: mrr_at_20
value: 35.87141909945257
- type: mrr_at_3
value: 32.71719038817007
- type: mrr_at_5
value: 34.19593345656194
- type: nauc_map_at_1000_diff1
value: 39.614836211522714
- type: nauc_map_at_1000_max
value: 22.019768626310192
- type: nauc_map_at_1000_std
value: -1.5238708712112499
- type: nauc_map_at_100_diff1
value: 39.63008548572307
- type: nauc_map_at_100_max
value: 22.044756063752345
- type: nauc_map_at_100_std
value: -1.4869190221494792
- type: nauc_map_at_10_diff1
value: 39.73025012395569
- type: nauc_map_at_10_max
value: 22.117710178892107
- type: nauc_map_at_10_std
value: -2.5129984871932973
- type: nauc_map_at_1_diff1
value: 45.015617718902654
- type: nauc_map_at_1_max
value: 19.313800263189638
- type: nauc_map_at_1_std
value: -4.763931386681675
- type: nauc_map_at_20_diff1
value: 39.53678019013766
- type: nauc_map_at_20_max
value: 21.880316719428258
- type: nauc_map_at_20_std
value: -1.882003994523355
- type: nauc_map_at_3_diff1
value: 40.37307665298228
- type: nauc_map_at_3_max
value: 20.851976075322533
- type: nauc_map_at_3_std
value: -2.429569082966531
- type: nauc_map_at_5_diff1
value: 39.763015635086
- type: nauc_map_at_5_max
value: 22.010102196900725
- type: nauc_map_at_5_std
value: -2.654896415670943
- type: nauc_mrr_at_1000_diff1
value: 39.74071733680025
- type: nauc_mrr_at_1000_max
value: 21.67309640681989
- type: nauc_mrr_at_1000_std
value: -1.4003373135477462
- type: nauc_mrr_at_100_diff1
value: 39.730614151966485
- type: nauc_mrr_at_100_max
value: 21.678390048971767
- type: nauc_mrr_at_100_std
value: -1.3655362623563931
- type: nauc_mrr_at_10_diff1
value: 39.7900031013241
- type: nauc_mrr_at_10_max
value: 21.73643491725051
- type: nauc_mrr_at_10_std
value: -2.1175389838696312
- type: nauc_mrr_at_1_diff1
value: 46.165736140679776
- type: nauc_mrr_at_1_max
value: 20.071083446822147
- type: nauc_mrr_at_1_std
value: -5.018909100858311
- type: nauc_mrr_at_20_diff1
value: 39.6371295762885
- type: nauc_mrr_at_20_max
value: 21.659557440270973
- type: nauc_mrr_at_20_std
value: -1.4909603958341686
- type: nauc_mrr_at_3_diff1
value: 40.351150322758876
- type: nauc_mrr_at_3_max
value: 20.83706249041544
- type: nauc_mrr_at_3_std
value: -1.956027373253151
- type: nauc_mrr_at_5_diff1
value: 39.57759107791911
- type: nauc_mrr_at_5_max
value: 21.79552045204151
- type: nauc_mrr_at_5_std
value: -2.1507013120951126
- type: nauc_ndcg_at_1000_diff1
value: 37.717619356839016
- type: nauc_ndcg_at_1000_max
value: 22.545375504379805
- type: nauc_ndcg_at_1000_std
value: 1.682348628141016
- type: nauc_ndcg_at_100_diff1
value: 37.656027803682626
- type: nauc_ndcg_at_100_max
value: 22.49278246383637
- type: nauc_ndcg_at_100_std
value: 2.6818118152357773
- type: nauc_ndcg_at_10_diff1
value: 37.834954205539766
- type: nauc_ndcg_at_10_max
value: 22.655839885558443
- type: nauc_ndcg_at_10_std
value: -1.97159619786231
- type: nauc_ndcg_at_1_diff1
value: 46.165736140679776
- type: nauc_ndcg_at_1_max
value: 20.071083446822147
- type: nauc_ndcg_at_1_std
value: -5.018909100858311
- type: nauc_ndcg_at_20_diff1
value: 37.171914857454304
- type: nauc_ndcg_at_20_max
value: 21.858904801745897
- type: nauc_ndcg_at_20_std
value: 0.3809854859496657
- type: nauc_ndcg_at_3_diff1
value: 38.4460623883955
- type: nauc_ndcg_at_3_max
value: 20.95244159463402
- type: nauc_ndcg_at_3_std
value: -1.2685011660086651
- type: nauc_ndcg_at_5_diff1
value: 37.48831054573054
- type: nauc_ndcg_at_5_max
value: 22.625921624640526
- type: nauc_ndcg_at_5_std
value: -2.049221092724925
- type: nauc_precision_at_1000_diff1
value: -19.120500628263994
- type: nauc_precision_at_1000_max
value: -6.650707109047473
- type: nauc_precision_at_1000_std
value: 15.71193179253002
- type: nauc_precision_at_100_diff1
value: 6.254606806876069
- type: nauc_precision_at_100_max
value: 14.601826922181823
- type: nauc_precision_at_100_std
value: 28.38299592246453
- type: nauc_precision_at_10_diff1
value: 22.978614338670816
- type: nauc_precision_at_10_max
value: 23.04146766323557
- type: nauc_precision_at_10_std
value: 6.226264308612577
- type: nauc_precision_at_1_diff1
value: 46.165736140679776
- type: nauc_precision_at_1_max
value: 20.071083446822147
- type: nauc_precision_at_1_std
value: -5.018909100858311
- type: nauc_precision_at_20_diff1
value: 17.681032853225602
- type: nauc_precision_at_20_max
value: 18.66680304585122
- type: nauc_precision_at_20_std
value: 15.34896796713905
- type: nauc_precision_at_3_diff1
value: 31.359396694559194
- type: nauc_precision_at_3_max
value: 22.279263308973274
- type: nauc_precision_at_3_std
value: 3.6302537979529035
- type: nauc_precision_at_5_diff1
value: 26.32257879892933
- type: nauc_precision_at_5_max
value: 25.402524493181026
- type: nauc_precision_at_5_std
value: 4.731450603747359
- type: nauc_recall_at_1000_diff1
value: 23.562925244967875
- type: nauc_recall_at_1000_max
value: 30.737399333586797
- type: nauc_recall_at_1000_std
value: 34.19418935008663
- type: nauc_recall_at_100_diff1
value: 28.703574970574824
- type: nauc_recall_at_100_max
value: 22.448663600170278
- type: nauc_recall_at_100_std
value: 24.53297349042035
- type: nauc_recall_at_10_diff1
value: 31.73603907811882
- type: nauc_recall_at_10_max
value: 23.453183748640765
- type: nauc_recall_at_10_std
value: -1.8279054407176274
- type: nauc_recall_at_1_diff1
value: 45.015617718902654
- type: nauc_recall_at_1_max
value: 19.313800263189638
- type: nauc_recall_at_1_std
value: -4.763931386681675
- type: nauc_recall_at_20_diff1
value: 28.74169081866096
- type: nauc_recall_at_20_max
value: 20.035509169577324
- type: nauc_recall_at_20_std
value: 7.371615811227748
- type: nauc_recall_at_3_diff1
value: 34.09890157333362
- type: nauc_recall_at_3_max
value: 20.46565842748346
- type: nauc_recall_at_3_std
value: -0.4337283067447526
- type: nauc_recall_at_5_diff1
value: 30.974580787842402
- type: nauc_recall_at_5_max
value: 23.76379349487105
- type: nauc_recall_at_5_std
value: -1.8407515927979428
- type: ndcg_at_1
value: 27.172
- type: ndcg_at_10
value: 38.269999999999996
- type: ndcg_at_100
value: 43.338
- type: ndcg_at_1000
value: 45.594
- type: ndcg_at_20
value: 40.256
- type: ndcg_at_3
value: 32.673
- type: ndcg_at_5
value: 35.224
- type: precision_at_1
value: 27.172
- type: precision_at_10
value: 6.063000000000001
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.123
- type: precision_at_20
value: 3.5029999999999997
- type: precision_at_3
value: 13.74
- type: precision_at_5
value: 9.797
- type: recall_at_1
value: 25.324999999999996
- type: recall_at_10
value: 51.634
- type: recall_at_100
value: 74.687
- type: recall_at_1000
value: 91.412
- type: recall_at_20
value: 59.207
- type: recall_at_3
value: 36.678
- type: recall_at_5
value: 42.742999999999995
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 36.853
- type: map_at_1
value: 15.371000000000002
- type: map_at_10
value: 27.122
- type: map_at_100
value: 29.226000000000003
- type: map_at_1000
value: 29.409999999999997
- type: map_at_20
value: 28.274
- type: map_at_3
value: 22.431
- type: map_at_5
value: 24.877
- type: mrr_at_1
value: 34.13680781758958
- type: mrr_at_10
value: 47.265911793599145
- type: mrr_at_100
value: 48.028369995763846
- type: mrr_at_1000
value: 48.05317022537804
- type: mrr_at_20
value: 47.75785292259516
- type: mrr_at_3
value: 43.887079261672156
- type: mrr_at_5
value: 45.906623235613544
- type: nauc_map_at_1000_diff1
value: 24.949211292921547
- type: nauc_map_at_1000_max
value: 38.69844483304584
- type: nauc_map_at_1000_std
value: 18.336359440844753
- type: nauc_map_at_100_diff1
value: 24.8951732982492
- type: nauc_map_at_100_max
value: 38.65049158594052
- type: nauc_map_at_100_std
value: 18.28935278388095
- type: nauc_map_at_10_diff1
value: 24.606032216798273
- type: nauc_map_at_10_max
value: 38.00608351559887
- type: nauc_map_at_10_std
value: 16.61261615173358
- type: nauc_map_at_1_diff1
value: 30.83614944448221
- type: nauc_map_at_1_max
value: 33.757528532809
- type: nauc_map_at_1_std
value: 8.880622713261126
- type: nauc_map_at_20_diff1
value: 24.75491310922017
- type: nauc_map_at_20_max
value: 38.353679076398834
- type: nauc_map_at_20_std
value: 17.58637493443171
- type: nauc_map_at_3_diff1
value: 25.563085273287083
- type: nauc_map_at_3_max
value: 35.14515679047155
- type: nauc_map_at_3_std
value: 11.75594869817732
- type: nauc_map_at_5_diff1
value: 24.815807517691614
- type: nauc_map_at_5_max
value: 36.25905426665983
- type: nauc_map_at_5_std
value: 14.516391726180697
- type: nauc_mrr_at_1000_diff1
value: 27.948233427121274
- type: nauc_mrr_at_1000_max
value: 37.5893640945859
- type: nauc_mrr_at_1000_std
value: 19.588442449629763
- type: nauc_mrr_at_100_diff1
value: 27.947962345854037
- type: nauc_mrr_at_100_max
value: 37.60375479481945
- type: nauc_mrr_at_100_std
value: 19.614791576283793
- type: nauc_mrr_at_10_diff1
value: 27.882311310262136
- type: nauc_mrr_at_10_max
value: 37.58580968074054
- type: nauc_mrr_at_10_std
value: 19.49875186170201
- type: nauc_mrr_at_1_diff1
value: 28.017413073648477
- type: nauc_mrr_at_1_max
value: 32.87710191514022
- type: nauc_mrr_at_1_std
value: 14.04889142608459
- type: nauc_mrr_at_20_diff1
value: 27.89129925771968
- type: nauc_mrr_at_20_max
value: 37.6142863106945
- type: nauc_mrr_at_20_std
value: 19.645390143394163
- type: nauc_mrr_at_3_diff1
value: 27.99609559690795
- type: nauc_mrr_at_3_max
value: 36.87362332456197
- type: nauc_mrr_at_3_std
value: 18.598416821915333
- type: nauc_mrr_at_5_diff1
value: 27.68306089976716
- type: nauc_mrr_at_5_max
value: 37.12264485659723
- type: nauc_mrr_at_5_std
value: 19.18875305730564
- type: nauc_ndcg_at_1000_diff1
value: 25.736779186453777
- type: nauc_ndcg_at_1000_max
value: 41.93281139456004
- type: nauc_ndcg_at_1000_std
value: 25.179038422659993
- type: nauc_ndcg_at_100_diff1
value: 25.144796623848322
- type: nauc_ndcg_at_100_max
value: 41.72820916876173
- type: nauc_ndcg_at_100_std
value: 25.12851686850754
- type: nauc_ndcg_at_10_diff1
value: 24.321249191226652
- type: nauc_ndcg_at_10_max
value: 40.23711916935706
- type: nauc_ndcg_at_10_std
value: 20.89060972334557
- type: nauc_ndcg_at_1_diff1
value: 28.017413073648477
- type: nauc_ndcg_at_1_max
value: 32.87710191514022
- type: nauc_ndcg_at_1_std
value: 14.04889142608459
- type: nauc_ndcg_at_20_diff1
value: 24.5090484877482
- type: nauc_ndcg_at_20_max
value: 40.752854032983606
- type: nauc_ndcg_at_20_std
value: 22.70331074781384
- type: nauc_ndcg_at_3_diff1
value: 25.13499057756147
- type: nauc_ndcg_at_3_max
value: 35.8325682137567
- type: nauc_ndcg_at_3_std
value: 15.23768392706637
- type: nauc_ndcg_at_5_diff1
value: 24.614105695451116
- type: nauc_ndcg_at_5_max
value: 37.68089587624492
- type: nauc_ndcg_at_5_std
value: 17.946406099261708
- type: nauc_precision_at_1000_diff1
value: -2.022340544774227
- type: nauc_precision_at_1000_max
value: 6.070578645067797
- type: nauc_precision_at_1000_std
value: 22.15132728777549
- type: nauc_precision_at_100_diff1
value: 4.544144474504255
- type: nauc_precision_at_100_max
value: 19.780392159848574
- type: nauc_precision_at_100_std
value: 31.107111186002438
- type: nauc_precision_at_10_diff1
value: 10.107015022955848
- type: nauc_precision_at_10_max
value: 30.779709099060465
- type: nauc_precision_at_10_std
value: 27.324148451668602
- type: nauc_precision_at_1_diff1
value: 28.017413073648477
- type: nauc_precision_at_1_max
value: 32.87710191514022
- type: nauc_precision_at_1_std
value: 14.04889142608459
- type: nauc_precision_at_20_diff1
value: 8.270881053079405
- type: nauc_precision_at_20_max
value: 27.26753946078481
- type: nauc_precision_at_20_std
value: 29.156725822074204
- type: nauc_precision_at_3_diff1
value: 17.82468940497632
- type: nauc_precision_at_3_max
value: 31.490021174215155
- type: nauc_precision_at_3_std
value: 18.73818985054394
- type: nauc_precision_at_5_diff1
value: 13.24803141673961
- type: nauc_precision_at_5_max
value: 29.94926240784298
- type: nauc_precision_at_5_std
value: 23.2940906142919
- type: nauc_recall_at_1000_diff1
value: 19.09850333580471
- type: nauc_recall_at_1000_max
value: 46.026306142840596
- type: nauc_recall_at_1000_std
value: 46.50391519568263
- type: nauc_recall_at_100_diff1
value: 16.739384224869738
- type: nauc_recall_at_100_max
value: 40.68987136431252
- type: nauc_recall_at_100_std
value: 36.01609750485591
- type: nauc_recall_at_10_diff1
value: 17.51796617221814
- type: nauc_recall_at_10_max
value: 39.47453129444401
- type: nauc_recall_at_10_std
value: 23.79239002974899
- type: nauc_recall_at_1_diff1
value: 30.83614944448221
- type: nauc_recall_at_1_max
value: 33.757528532809
- type: nauc_recall_at_1_std
value: 8.880622713261126
- type: nauc_recall_at_20_diff1
value: 16.978668307251652
- type: nauc_recall_at_20_max
value: 39.09115357303713
- type: nauc_recall_at_20_std
value: 27.278668534187524
- type: nauc_recall_at_3_diff1
value: 22.55937738994021
- type: nauc_recall_at_3_max
value: 36.25055459395638
- type: nauc_recall_at_3_std
value: 14.828905168761247
- type: nauc_recall_at_5_diff1
value: 19.32656748627199
- type: nauc_recall_at_5_max
value: 36.28836228620816
- type: nauc_recall_at_5_std
value: 19.264352933914278
- type: ndcg_at_1
value: 34.137
- type: ndcg_at_10
value: 36.853
- type: ndcg_at_100
value: 44.279
- type: ndcg_at_1000
value: 47.336
- type: ndcg_at_20
value: 39.815
- type: ndcg_at_3
value: 30.253999999999998
- type: ndcg_at_5
value: 32.649
- type: precision_at_1
value: 34.137
- type: precision_at_10
value: 11.655
- type: precision_at_100
value: 1.9619999999999997
- type: precision_at_1000
value: 0.254
- type: precision_at_20
value: 7.1209999999999996
- type: precision_at_3
value: 22.823
- type: precision_at_5
value: 17.655
- type: recall_at_1
value: 15.371000000000002
- type: recall_at_10
value: 43.718
- type: recall_at_100
value: 68.81
- type: recall_at_1000
value: 85.69600000000001
- type: recall_at_20
value: 51.94
- type: recall_at_3
value: 27.694000000000003
- type: recall_at_5
value: 34.469
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 45.553
- type: map_at_1
value: 9.168999999999999
- type: map_at_10
value: 22.154
- type: map_at_100
value: 32.174
- type: map_at_1000
value: 33.974
- type: map_at_20
value: 25.899
- type: map_at_3
value: 15.275
- type: map_at_5
value: 18.291
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.39662698412697
- type: mrr_at_100
value: 78.56221458977012
- type: mrr_at_1000
value: 78.56669970642338
- type: mrr_at_20
value: 78.49688805346696
- type: mrr_at_3
value: 76.33333333333333
- type: mrr_at_5
value: 77.70833333333333
- type: nauc_map_at_1000_diff1
value: 18.465085922071346
- type: nauc_map_at_1000_max
value: 24.29804638788498
- type: nauc_map_at_1000_std
value: 22.380463943423514
- type: nauc_map_at_100_diff1
value: 19.37585410674523
- type: nauc_map_at_100_max
value: 22.56424042509462
- type: nauc_map_at_100_std
value: 19.672237275984426
- type: nauc_map_at_10_diff1
value: 23.597788166305577
- type: nauc_map_at_10_max
value: 9.157316105122925
- type: nauc_map_at_10_std
value: -3.8881247055786807
- type: nauc_map_at_1_diff1
value: 43.96699602275052
- type: nauc_map_at_1_max
value: -0.7577088440873263
- type: nauc_map_at_1_std
value: -17.732463891968404
- type: nauc_map_at_20_diff1
value: 22.326759054850097
- type: nauc_map_at_20_max
value: 14.879191412167703
- type: nauc_map_at_20_std
value: 5.405751236575241
- type: nauc_map_at_3_diff1
value: 28.73583545428074
- type: nauc_map_at_3_max
value: 1.5986597211018239
- type: nauc_map_at_3_std
value: -16.512455883681515
- type: nauc_map_at_5_diff1
value: 25.401810959155057
- type: nauc_map_at_5_max
value: 4.418875376978587
- type: nauc_map_at_5_std
value: -12.296750992013052
- type: nauc_mrr_at_1000_diff1
value: 51.228801807498584
- type: nauc_mrr_at_1000_max
value: 61.040998883279585
- type: nauc_mrr_at_1000_std
value: 40.93983887257123
- type: nauc_mrr_at_100_diff1
value: 51.23715338435314
- type: nauc_mrr_at_100_max
value: 61.03971408781317
- type: nauc_mrr_at_100_std
value: 40.91796923590573
- type: nauc_mrr_at_10_diff1
value: 51.1214868552331
- type: nauc_mrr_at_10_max
value: 61.03069045590881
- type: nauc_mrr_at_10_std
value: 40.661621199704264
- type: nauc_mrr_at_1_diff1
value: 50.84660003035892
- type: nauc_mrr_at_1_max
value: 60.692091499960895
- type: nauc_mrr_at_1_std
value: 42.126228731502955
- type: nauc_mrr_at_20_diff1
value: 51.0402624284872
- type: nauc_mrr_at_20_max
value: 60.94577844338166
- type: nauc_mrr_at_20_std
value: 40.89505950503613
- type: nauc_mrr_at_3_diff1
value: 51.771113665996516
- type: nauc_mrr_at_3_max
value: 61.65264793077224
- type: nauc_mrr_at_3_std
value: 41.75781827057092
- type: nauc_mrr_at_5_diff1
value: 51.0656793772882
- type: nauc_mrr_at_5_max
value: 61.08042065139715
- type: nauc_mrr_at_5_std
value: 41.11203271084835
- type: nauc_ndcg_at_1000_diff1
value: 22.347978262245107
- type: nauc_ndcg_at_1000_max
value: 36.56458763955002
- type: nauc_ndcg_at_1000_std
value: 35.99616144258822
- type: nauc_ndcg_at_100_diff1
value: 23.1120990977162
- type: nauc_ndcg_at_100_max
value: 30.79663306311657
- type: nauc_ndcg_at_100_std
value: 27.387572106784297
- type: nauc_ndcg_at_10_diff1
value: 23.329746066899656
- type: nauc_ndcg_at_10_max
value: 28.69246947084685
- type: nauc_ndcg_at_10_std
value: 21.457736188325345
- type: nauc_ndcg_at_1_diff1
value: 39.99399153456974
- type: nauc_ndcg_at_1_max
value: 38.12447856470389
- type: nauc_ndcg_at_1_std
value: 27.768869260384676
- type: nauc_ndcg_at_20_diff1
value: 24.945374175339907
- type: nauc_ndcg_at_20_max
value: 27.67836982165295
- type: nauc_ndcg_at_20_std
value: 19.7933631060578
- type: nauc_ndcg_at_3_diff1
value: 26.063492354398527
- type: nauc_ndcg_at_3_max
value: 33.06541959550656
- type: nauc_ndcg_at_3_std
value: 23.278902797288726
- type: nauc_ndcg_at_5_diff1
value: 22.521596060750035
- type: nauc_ndcg_at_5_max
value: 31.210005673730784
- type: nauc_ndcg_at_5_std
value: 22.893106456317927
- type: nauc_precision_at_1000_diff1
value: -19.845356495096006
- type: nauc_precision_at_1000_max
value: 4.163819381816099
- type: nauc_precision_at_1000_std
value: 7.612952884590339
- type: nauc_precision_at_100_diff1
value: -8.2679285153361
- type: nauc_precision_at_100_max
value: 29.78018175573565
- type: nauc_precision_at_100_std
value: 41.07244463956215
- type: nauc_precision_at_10_diff1
value: -3.2451428407349057
- type: nauc_precision_at_10_max
value: 36.92563008274906
- type: nauc_precision_at_10_std
value: 45.06962043489777
- type: nauc_precision_at_1_diff1
value: 50.84660003035892
- type: nauc_precision_at_1_max
value: 60.692091499960895
- type: nauc_precision_at_1_std
value: 42.126228731502955
- type: nauc_precision_at_20_diff1
value: -3.432279149061878
- type: nauc_precision_at_20_max
value: 37.013592483974875
- type: nauc_precision_at_20_std
value: 46.47324739428665
- type: nauc_precision_at_3_diff1
value: 7.28495481051025
- type: nauc_precision_at_3_max
value: 38.66372411741402
- type: nauc_precision_at_3_std
value: 35.23163993723955
- type: nauc_precision_at_5_diff1
value: -0.16540230063716202
- type: nauc_precision_at_5_max
value: 37.322494255721715
- type: nauc_precision_at_5_std
value: 39.666653561269754
- type: nauc_recall_at_1000_diff1
value: 11.388326469283681
- type: nauc_recall_at_1000_max
value: 32.698146308591674
- type: nauc_recall_at_1000_std
value: 49.48830488070777
- type: nauc_recall_at_100_diff1
value: 11.497443532756819
- type: nauc_recall_at_100_max
value: 20.196970431621615
- type: nauc_recall_at_100_std
value: 23.688772100803433
- type: nauc_recall_at_10_diff1
value: 16.519851398596003
- type: nauc_recall_at_10_max
value: 0.774066845071221
- type: nauc_recall_at_10_std
value: -10.89514647001814
- type: nauc_recall_at_1_diff1
value: 43.96699602275052
- type: nauc_recall_at_1_max
value: -0.7577088440873263
- type: nauc_recall_at_1_std
value: -17.732463891968404
- type: nauc_recall_at_20_diff1
value: 15.202960269878258
- type: nauc_recall_at_20_max
value: 7.067263295590253
- type: nauc_recall_at_20_std
value: -0.06050108222640702
- type: nauc_recall_at_3_diff1
value: 24.066741361525125
- type: nauc_recall_at_3_max
value: -2.1961525860488424
- type: nauc_recall_at_3_std
value: -19.48307077749568
- type: nauc_recall_at_5_diff1
value: 20.086330794102707
- type: nauc_recall_at_5_max
value: -0.8866528062747986
- type: nauc_recall_at_5_std
value: -16.53799173962747
- type: ndcg_at_1
value: 57.99999999999999
- type: ndcg_at_10
value: 45.553
- type: ndcg_at_100
value: 51.014
- type: ndcg_at_1000
value: 58.226
- type: ndcg_at_20
value: 44.98
- type: ndcg_at_3
value: 48.981
- type: ndcg_at_5
value: 46.794999999999995
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 36.85
- type: precision_at_100
value: 11.955
- type: precision_at_1000
value: 2.247
- type: precision_at_20
value: 28.075
- type: precision_at_3
value: 52.666999999999994
- type: precision_at_5
value: 45.85
- type: recall_at_1
value: 9.168999999999999
- type: recall_at_10
value: 28.796
- type: recall_at_100
value: 58.892999999999994
- type: recall_at_1000
value: 81.644
- type: recall_at_20
value: 36.659000000000006
- type: recall_at_3
value: 16.709
- type: recall_at_5
value: 21.387
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 88.41
- type: map_at_1
value: 75.637
- type: map_at_10
value: 84.674
- type: map_at_100
value: 84.909
- type: map_at_1000
value: 84.92
- type: map_at_20
value: 84.836
- type: map_at_3
value: 83.44200000000001
- type: map_at_5
value: 84.28099999999999
- type: mrr_at_1
value: 81.56315631563157
- type: mrr_at_10
value: 88.89571695264748
- type: mrr_at_100
value: 88.93671417216285
- type: mrr_at_1000
value: 88.93708016011664
- type: mrr_at_20
value: 88.9311652665256
- type: mrr_at_3
value: 88.20882088208805
- type: mrr_at_5
value: 88.72937293729349
- type: nauc_map_at_1000_diff1
value: 54.41216035074026
- type: nauc_map_at_1000_max
value: 13.346153003554361
- type: nauc_map_at_1000_std
value: -6.721664416152164
- type: nauc_map_at_100_diff1
value: 54.36538350995795
- type: nauc_map_at_100_max
value: 13.355583381471298
- type: nauc_map_at_100_std
value: -6.696921015641016
- type: nauc_map_at_10_diff1
value: 54.0389127730555
- type: nauc_map_at_10_max
value: 13.387802159150663
- type: nauc_map_at_10_std
value: -6.73514381731833
- type: nauc_map_at_1_diff1
value: 57.99489574836453
- type: nauc_map_at_1_max
value: 7.830032589171654
- type: nauc_map_at_1_std
value: -10.140208285080295
- type: nauc_map_at_20_diff1
value: 54.16841004736076
- type: nauc_map_at_20_max
value: 13.345607363689746
- type: nauc_map_at_20_std
value: -6.663119775158465
- type: nauc_map_at_3_diff1
value: 53.82879543599303
- type: nauc_map_at_3_max
value: 12.716952288433902
- type: nauc_map_at_3_std
value: -7.746102082835598
- type: nauc_map_at_5_diff1
value: 53.82838395350109
- type: nauc_map_at_5_max
value: 13.487373534211702
- type: nauc_map_at_5_std
value: -6.869504398693434
- type: nauc_mrr_at_1000_diff1
value: 68.92783546581906
- type: nauc_mrr_at_1000_max
value: 12.076297180596592
- type: nauc_mrr_at_1000_std
value: -13.306257067567998
- type: nauc_mrr_at_100_diff1
value: 68.92780219775517
- type: nauc_mrr_at_100_max
value: 12.078449805054374
- type: nauc_mrr_at_100_std
value: -13.303524852703719
- type: nauc_mrr_at_10_diff1
value: 68.92686206881258
- type: nauc_mrr_at_10_max
value: 12.273295656884873
- type: nauc_mrr_at_10_std
value: -13.222483496603965
- type: nauc_mrr_at_1_diff1
value: 70.1738022073041
- type: nauc_mrr_at_1_max
value: 9.378639533482806
- type: nauc_mrr_at_1_std
value: -13.444033823202348
- type: nauc_mrr_at_20_diff1
value: 68.91161304905303
- type: nauc_mrr_at_20_max
value: 12.117091514817885
- type: nauc_mrr_at_20_std
value: -13.258261750160239
- type: nauc_mrr_at_3_diff1
value: 68.61982455945467
- type: nauc_mrr_at_3_max
value: 12.608213879734578
- type: nauc_mrr_at_3_std
value: -13.558003431587839
- type: nauc_mrr_at_5_diff1
value: 68.81439097457242
- type: nauc_mrr_at_5_max
value: 12.54025598903624
- type: nauc_mrr_at_5_std
value: -13.199231514972093
- type: nauc_ndcg_at_1000_diff1
value: 56.47563443877495
- type: nauc_ndcg_at_1000_max
value: 14.508331783439466
- type: nauc_ndcg_at_1000_std
value: -6.206829736668775
- type: nauc_ndcg_at_100_diff1
value: 55.54015515673474
- type: nauc_ndcg_at_100_max
value: 14.753595778278136
- type: nauc_ndcg_at_100_std
value: -5.638517949568802
- type: nauc_ndcg_at_10_diff1
value: 54.220845223257996
- type: nauc_ndcg_at_10_max
value: 15.265309648490021
- type: nauc_ndcg_at_10_std
value: -5.516276098929109
- type: nauc_ndcg_at_1_diff1
value: 70.1738022073041
- type: nauc_ndcg_at_1_max
value: 9.378639533482806
- type: nauc_ndcg_at_1_std
value: -13.444033823202348
- type: nauc_ndcg_at_20_diff1
value: 54.481406100854635
- type: nauc_ndcg_at_20_max
value: 14.868763583210498
- type: nauc_ndcg_at_20_std
value: -5.328097380018734
- type: nauc_ndcg_at_3_diff1
value: 54.94411725607744
- type: nauc_ndcg_at_3_max
value: 14.27186734506607
- type: nauc_ndcg_at_3_std
value: -7.894724962312474
- type: nauc_ndcg_at_5_diff1
value: 54.08048166974806
- type: nauc_ndcg_at_5_max
value: 15.528233170721006
- type: nauc_ndcg_at_5_std
value: -5.984768714537104
- type: nauc_precision_at_1000_diff1
value: -8.744323640074445
- type: nauc_precision_at_1000_max
value: -0.01881224392053465
- type: nauc_precision_at_1000_std
value: 3.8721477979260635
- type: nauc_precision_at_100_diff1
value: -11.86150156952171
- type: nauc_precision_at_100_max
value: 3.2736651314552314
- type: nauc_precision_at_100_std
value: 8.12687620615509
- type: nauc_precision_at_10_diff1
value: -10.360708676781178
- type: nauc_precision_at_10_max
value: 10.945552490433458
- type: nauc_precision_at_10_std
value: 11.016707653014485
- type: nauc_precision_at_1_diff1
value: 70.1738022073041
- type: nauc_precision_at_1_max
value: 9.378639533482806
- type: nauc_precision_at_1_std
value: -13.444033823202348
- type: nauc_precision_at_20_diff1
value: -13.557721925696583
- type: nauc_precision_at_20_max
value: 6.331386521718574
- type: nauc_precision_at_20_std
value: 10.322188778142388
- type: nauc_precision_at_3_diff1
value: 15.139456770248968
- type: nauc_precision_at_3_max
value: 17.10220985600708
- type: nauc_precision_at_3_std
value: 3.0448183682558074
- type: nauc_precision_at_5_diff1
value: -1.9825577548111102
- type: nauc_precision_at_5_max
value: 17.139148127012625
- type: nauc_precision_at_5_std
value: 10.598435750554753
- type: nauc_recall_at_1000_diff1
value: 15.641740744283005
- type: nauc_recall_at_1000_max
value: 44.65315702195612
- type: nauc_recall_at_1000_std
value: 52.34265862835513
- type: nauc_recall_at_100_diff1
value: 5.254385435323394
- type: nauc_recall_at_100_max
value: 38.53577774395794
- type: nauc_recall_at_100_std
value: 43.47744274335829
- type: nauc_recall_at_10_diff1
value: 19.135735476268042
- type: nauc_recall_at_10_max
value: 30.05417445923848
- type: nauc_recall_at_10_std
value: 18.3988023241141
- type: nauc_recall_at_1_diff1
value: 57.99489574836453
- type: nauc_recall_at_1_max
value: 7.830032589171654
- type: nauc_recall_at_1_std
value: -10.140208285080295
- type: nauc_recall_at_20_diff1
value: 9.444797759735126
- type: nauc_recall_at_20_max
value: 31.001311675371017
- type: nauc_recall_at_20_std
value: 29.351418893822178
- type: nauc_recall_at_3_diff1
value: 36.88862653262064
- type: nauc_recall_at_3_max
value: 19.845892741607823
- type: nauc_recall_at_3_std
value: -1.0584273105890794
- type: nauc_recall_at_5_diff1
value: 27.360718561944974
- type: nauc_recall_at_5_max
value: 26.698311215441738
- type: nauc_recall_at_5_std
value: 8.97113997755362
- type: ndcg_at_1
value: 81.563
- type: ndcg_at_10
value: 88.41
- type: ndcg_at_100
value: 89.101
- type: ndcg_at_1000
value: 89.25800000000001
- type: ndcg_at_20
value: 88.79
- type: ndcg_at_3
value: 86.599
- type: ndcg_at_5
value: 87.74
- type: precision_at_1
value: 81.563
- type: precision_at_10
value: 10.699
- type: precision_at_100
value: 1.13
- type: precision_at_1000
value: 0.116
- type: precision_at_20
value: 5.479
- type: precision_at_3
value: 33.238
- type: precision_at_5
value: 20.744
- type: recall_at_1
value: 75.637
- type: recall_at_10
value: 95.57600000000001
- type: recall_at_100
value: 98.072
- type: recall_at_1000
value: 98.951
- type: recall_at_20
value: 96.792
- type: recall_at_3
value: 90.79599999999999
- type: recall_at_5
value: 93.674
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 42.396
- type: map_at_1
value: 21.711
- type: map_at_10
value: 34.628
- type: map_at_100
value: 36.549
- type: map_at_1000
value: 36.719
- type: map_at_20
value: 35.673
- type: map_at_3
value: 30.585
- type: map_at_5
value: 32.875
- type: mrr_at_1
value: 41.82098765432099
- type: mrr_at_10
value: 50.69505682931607
- type: mrr_at_100
value: 51.50556608727901
- type: mrr_at_1000
value: 51.53870583208304
- type: mrr_at_20
value: 51.15345764364655
- type: mrr_at_3
value: 48.35390946502059
- type: mrr_at_5
value: 49.87397119341563
- type: nauc_map_at_1000_diff1
value: 45.182252919583895
- type: nauc_map_at_1000_max
value: 35.66124930024801
- type: nauc_map_at_1000_std
value: -0.6925562638650965
- type: nauc_map_at_100_diff1
value: 45.116964706960125
- type: nauc_map_at_100_max
value: 35.54990469525889
- type: nauc_map_at_100_std
value: -0.6667263852859368
- type: nauc_map_at_10_diff1
value: 45.39189096228184
- type: nauc_map_at_10_max
value: 34.780111261901
- type: nauc_map_at_10_std
value: -1.8169859294150819
- type: nauc_map_at_1_diff1
value: 47.72764937952259
- type: nauc_map_at_1_max
value: 24.83306559709341
- type: nauc_map_at_1_std
value: -4.714128457297418
- type: nauc_map_at_20_diff1
value: 45.17073365898278
- type: nauc_map_at_20_max
value: 35.0938403469058
- type: nauc_map_at_20_std
value: -1.373412631183604
- type: nauc_map_at_3_diff1
value: 46.525724305731295
- type: nauc_map_at_3_max
value: 31.042538866512597
- type: nauc_map_at_3_std
value: -4.119355935975354
- type: nauc_map_at_5_diff1
value: 45.79569633383187
- type: nauc_map_at_5_max
value: 32.88779656647293
- type: nauc_map_at_5_std
value: -3.2518474739335312
- type: nauc_mrr_at_1000_diff1
value: 52.83619185487903
- type: nauc_mrr_at_1000_max
value: 42.30310720405186
- type: nauc_mrr_at_1000_std
value: -1.1487703348518024
- type: nauc_mrr_at_100_diff1
value: 52.82248853996664
- type: nauc_mrr_at_100_max
value: 42.30549701564678
- type: nauc_mrr_at_100_std
value: -1.1240113031894834
- type: nauc_mrr_at_10_diff1
value: 52.74644276642243
- type: nauc_mrr_at_10_max
value: 42.39103029476398
- type: nauc_mrr_at_10_std
value: -1.1043413237848576
- type: nauc_mrr_at_1_diff1
value: 54.810335521617326
- type: nauc_mrr_at_1_max
value: 40.733260207843394
- type: nauc_mrr_at_1_std
value: -4.452554921565855
- type: nauc_mrr_at_20_diff1
value: 52.788257862499954
- type: nauc_mrr_at_20_max
value: 42.32658875363406
- type: nauc_mrr_at_20_std
value: -1.2209728080684497
- type: nauc_mrr_at_3_diff1
value: 53.43281175319808
- type: nauc_mrr_at_3_max
value: 41.735942650867926
- type: nauc_mrr_at_3_std
value: -2.462688102468019
- type: nauc_mrr_at_5_diff1
value: 52.874037126566606
- type: nauc_mrr_at_5_max
value: 41.93740449458822
- type: nauc_mrr_at_5_std
value: -1.2928874908441947
- type: nauc_ndcg_at_1000_diff1
value: 46.5532425476402
- type: nauc_ndcg_at_1000_max
value: 40.369611603370515
- type: nauc_ndcg_at_1000_std
value: 3.472567588386994
- type: nauc_ndcg_at_100_diff1
value: 45.75244404695404
- type: nauc_ndcg_at_100_max
value: 39.36470550675439
- type: nauc_ndcg_at_100_std
value: 4.356189041115731
- type: nauc_ndcg_at_10_diff1
value: 46.005135323539704
- type: nauc_ndcg_at_10_max
value: 37.89018165334218
- type: nauc_ndcg_at_10_std
value: 0.7129618297768014
- type: nauc_ndcg_at_1_diff1
value: 54.810335521617326
- type: nauc_ndcg_at_1_max
value: 40.733260207843394
- type: nauc_ndcg_at_1_std
value: -4.452554921565855
- type: nauc_ndcg_at_20_diff1
value: 45.841552790490034
- type: nauc_ndcg_at_20_max
value: 38.04992825472661
- type: nauc_ndcg_at_20_std
value: 1.2748305707955212
- type: nauc_ndcg_at_3_diff1
value: 46.683033449357744
- type: nauc_ndcg_at_3_max
value: 37.46397870760607
- type: nauc_ndcg_at_3_std
value: -2.3421854966319824
- type: nauc_ndcg_at_5_diff1
value: 45.82409645378457
- type: nauc_ndcg_at_5_max
value: 36.27588234096716
- type: nauc_ndcg_at_5_std
value: -1.5141197170944254
- type: nauc_precision_at_1000_diff1
value: -3.137944321071885
- type: nauc_precision_at_1000_max
value: 24.12803166253776
- type: nauc_precision_at_1000_std
value: 11.076454789944101
- type: nauc_precision_at_100_diff1
value: 3.9896283891401048
- type: nauc_precision_at_100_max
value: 31.00198316788829
- type: nauc_precision_at_100_std
value: 15.725887643803063
- type: nauc_precision_at_10_diff1
value: 20.493420889888394
- type: nauc_precision_at_10_max
value: 41.689699671507405
- type: nauc_precision_at_10_std
value: 9.374983385669914
- type: nauc_precision_at_1_diff1
value: 54.810335521617326
- type: nauc_precision_at_1_max
value: 40.733260207843394
- type: nauc_precision_at_1_std
value: -4.452554921565855
- type: nauc_precision_at_20_diff1
value: 15.02911800246446
- type: nauc_precision_at_20_max
value: 39.227068888505
- type: nauc_precision_at_20_std
value: 11.755558515319404
- type: nauc_precision_at_3_diff1
value: 34.044986535461746
- type: nauc_precision_at_3_max
value: 40.96605829831656
- type: nauc_precision_at_3_std
value: 1.1903535705688038
- type: nauc_precision_at_5_diff1
value: 26.617002443432707
- type: nauc_precision_at_5_max
value: 40.60413785916794
- type: nauc_precision_at_5_std
value: 3.6984531670502814
- type: nauc_recall_at_1000_diff1
value: 26.96489389440101
- type: nauc_recall_at_1000_max
value: 41.811583968523955
- type: nauc_recall_at_1000_std
value: 41.5719519496712
- type: nauc_recall_at_100_diff1
value: 28.50851434908223
- type: nauc_recall_at_100_max
value: 32.19528060706322
- type: nauc_recall_at_100_std
value: 25.56935294258179
- type: nauc_recall_at_10_diff1
value: 35.139582891180964
- type: nauc_recall_at_10_max
value: 32.15221840434225
- type: nauc_recall_at_10_std
value: 5.550434611582702
- type: nauc_recall_at_1_diff1
value: 47.72764937952259
- type: nauc_recall_at_1_max
value: 24.83306559709341
- type: nauc_recall_at_1_std
value: -4.714128457297418
- type: nauc_recall_at_20_diff1
value: 32.78604811055205
- type: nauc_recall_at_20_max
value: 29.62940720700254
- type: nauc_recall_at_20_std
value: 6.769941491859872
- type: nauc_recall_at_3_diff1
value: 40.76090616138699
- type: nauc_recall_at_3_max
value: 27.506425490226867
- type: nauc_recall_at_3_std
value: -2.608872693119243
- type: nauc_recall_at_5_diff1
value: 37.06532485024711
- type: nauc_recall_at_5_max
value: 27.704150556658448
- type: nauc_recall_at_5_std
value: 0.4718707152343872
- type: ndcg_at_1
value: 41.821000000000005
- type: ndcg_at_10
value: 42.396
- type: ndcg_at_100
value: 49.370000000000005
- type: ndcg_at_1000
value: 52.251000000000005
- type: ndcg_at_20
value: 45.097
- type: ndcg_at_3
value: 39.028
- type: ndcg_at_5
value: 40.222
- type: precision_at_1
value: 41.821000000000005
- type: precision_at_10
value: 11.451
- type: precision_at_100
value: 1.863
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_20
value: 6.798
- type: precision_at_3
value: 25.823
- type: precision_at_5
value: 18.735
- type: recall_at_1
value: 21.711
- type: recall_at_10
value: 48.862
- type: recall_at_100
value: 74.708
- type: recall_at_1000
value: 91.865
- type: recall_at_20
value: 57.50999999999999
- type: recall_at_3
value: 35.85
- type: recall_at_5
value: 41.976
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 72.21
- type: map_at_1
value: 39.487
- type: map_at_10
value: 63.949999999999996
- type: map_at_100
value: 64.873
- type: map_at_1000
value: 64.927
- type: map_at_20
value: 64.529
- type: map_at_3
value: 60.243
- type: map_at_5
value: 62.613
- type: mrr_at_1
value: 78.97366644159351
- type: mrr_at_10
value: 84.84600173627825
- type: mrr_at_100
value: 85.0172804866798
- type: mrr_at_1000
value: 85.02245651152857
- type: mrr_at_20
value: 84.9625577788225
- type: mrr_at_3
value: 83.90276839972962
- type: mrr_at_5
value: 84.48278190411845
- type: nauc_map_at_1000_diff1
value: 19.825004700775164
- type: nauc_map_at_1000_max
value: 19.943221724164182
- type: nauc_map_at_1000_std
value: 10.068951166560058
- type: nauc_map_at_100_diff1
value: 19.80139472181137
- type: nauc_map_at_100_max
value: 19.938006132804347
- type: nauc_map_at_100_std
value: 10.100008107666842
- type: nauc_map_at_10_diff1
value: 19.53604502514735
- type: nauc_map_at_10_max
value: 19.62768870331064
- type: nauc_map_at_10_std
value: 9.446859074725705
- type: nauc_map_at_1_diff1
value: 67.7764270505257
- type: nauc_map_at_1_max
value: 38.45166604737058
- type: nauc_map_at_1_std
value: 1.9919181988552352
- type: nauc_map_at_20_diff1
value: 19.635871913149913
- type: nauc_map_at_20_max
value: 19.812838965919155
- type: nauc_map_at_20_std
value: 9.905163140101845
- type: nauc_map_at_3_diff1
value: 18.965707122532212
- type: nauc_map_at_3_max
value: 17.878860313056517
- type: nauc_map_at_3_std
value: 6.189378752019195
- type: nauc_map_at_5_diff1
value: 19.493354049675954
- type: nauc_map_at_5_max
value: 19.24527088109141
- type: nauc_map_at_5_std
value: 8.283883139680066
- type: nauc_mrr_at_1000_diff1
value: 66.87150374356781
- type: nauc_mrr_at_1000_max
value: 41.413456443203984
- type: nauc_mrr_at_1000_std
value: 4.140387282484357
- type: nauc_mrr_at_100_diff1
value: 66.87178015619061
- type: nauc_mrr_at_100_max
value: 41.419754763150834
- type: nauc_mrr_at_100_std
value: 4.15222235416704
- type: nauc_mrr_at_10_diff1
value: 66.89720586892301
- type: nauc_mrr_at_10_max
value: 41.56353878125211
- type: nauc_mrr_at_10_std
value: 4.213376519922392
- type: nauc_mrr_at_1_diff1
value: 67.7764270505257
- type: nauc_mrr_at_1_max
value: 38.45166604737058
- type: nauc_mrr_at_1_std
value: 1.9919181988552352
- type: nauc_mrr_at_20_diff1
value: 66.8714688713149
- type: nauc_mrr_at_20_max
value: 41.46170778986735
- type: nauc_mrr_at_20_std
value: 4.165154741309859
- type: nauc_mrr_at_3_diff1
value: 66.31615462679144
- type: nauc_mrr_at_3_max
value: 41.419637693259936
- type: nauc_mrr_at_3_std
value: 3.814834551396097
- type: nauc_mrr_at_5_diff1
value: 66.7289413087213
- type: nauc_mrr_at_5_max
value: 41.668346356371586
- type: nauc_mrr_at_5_std
value: 4.116331539882484
- type: nauc_ndcg_at_1000_diff1
value: 26.37325375970598
- type: nauc_ndcg_at_1000_max
value: 24.850915174721735
- type: nauc_ndcg_at_1000_std
value: 13.37585683440429
- type: nauc_ndcg_at_100_diff1
value: 25.591771178059503
- type: nauc_ndcg_at_100_max
value: 24.562820829532473
- type: nauc_ndcg_at_100_std
value: 14.093690500501541
- type: nauc_ndcg_at_10_diff1
value: 24.64600598115805
- type: nauc_ndcg_at_10_max
value: 23.543499404760023
- type: nauc_ndcg_at_10_std
value: 11.55823632781553
- type: nauc_ndcg_at_1_diff1
value: 67.7764270505257
- type: nauc_ndcg_at_1_max
value: 38.45166604737058
- type: nauc_ndcg_at_1_std
value: 1.9919181988552352
- type: nauc_ndcg_at_20_diff1
value: 24.757843275306726
- type: nauc_ndcg_at_20_max
value: 23.951154200380827
- type: nauc_ndcg_at_20_std
value: 12.931320453044886
- type: nauc_ndcg_at_3_diff1
value: 24.37742630418847
- type: nauc_ndcg_at_3_max
value: 21.310512304883723
- type: nauc_ndcg_at_3_std
value: 6.503993200818077
- type: nauc_ndcg_at_5_diff1
value: 24.813706829269716
- type: nauc_ndcg_at_5_max
value: 22.993657212898
- type: nauc_ndcg_at_5_std
value: 9.34462052506809
- type: nauc_precision_at_1000_diff1
value: -0.6506415756958156
- type: nauc_precision_at_1000_max
value: 28.039755644694875
- type: nauc_precision_at_1000_std
value: 53.46474329623814
- type: nauc_precision_at_100_diff1
value: 3.78462668236152
- type: nauc_precision_at_100_max
value: 22.501700881673862
- type: nauc_precision_at_100_std
value: 40.56672716474142
- type: nauc_precision_at_10_diff1
value: 9.156113228907534
- type: nauc_precision_at_10_max
value: 19.734206254833254
- type: nauc_precision_at_10_std
value: 19.986282545779602
- type: nauc_precision_at_1_diff1
value: 67.7764270505257
- type: nauc_precision_at_1_max
value: 38.45166604737058
- type: nauc_precision_at_1_std
value: 1.9919181988552352
- type: nauc_precision_at_20_diff1
value: 6.6164335644470125
- type: nauc_precision_at_20_max
value: 20.29343459608317
- type: nauc_precision_at_20_std
value: 26.51115475333977
- type: nauc_precision_at_3_diff1
value: 12.476520554399546
- type: nauc_precision_at_3_max
value: 16.69401409858964
- type: nauc_precision_at_3_std
value: 8.165880294907444
- type: nauc_precision_at_5_diff1
value: 11.783242828320958
- type: nauc_precision_at_5_max
value: 19.0679467875759
- type: nauc_precision_at_5_std
value: 13.615358345509884
- type: nauc_recall_at_1000_diff1
value: -0.6506415756960168
- type: nauc_recall_at_1000_max
value: 28.039755644694786
- type: nauc_recall_at_1000_std
value: 53.46474329623801
- type: nauc_recall_at_100_diff1
value: 3.7846266823613877
- type: nauc_recall_at_100_max
value: 22.501700881674008
- type: nauc_recall_at_100_std
value: 40.566727164741366
- type: nauc_recall_at_10_diff1
value: 9.15611322890755
- type: nauc_recall_at_10_max
value: 19.73420625483318
- type: nauc_recall_at_10_std
value: 19.98628254577951
- type: nauc_recall_at_1_diff1
value: 67.7764270505257
- type: nauc_recall_at_1_max
value: 38.45166604737058
- type: nauc_recall_at_1_std
value: 1.9919181988552352
- type: nauc_recall_at_20_diff1
value: 6.616433564446929
- type: nauc_recall_at_20_max
value: 20.293434596083248
- type: nauc_recall_at_20_std
value: 26.5111547533396
- type: nauc_recall_at_3_diff1
value: 12.476520554399531
- type: nauc_recall_at_3_max
value: 16.69401409858966
- type: nauc_recall_at_3_std
value: 8.165880294907438
- type: nauc_recall_at_5_diff1
value: 11.783242828320999
- type: nauc_recall_at_5_max
value: 19.067946787575845
- type: nauc_recall_at_5_std
value: 13.61535834550991
- type: ndcg_at_1
value: 78.974
- type: ndcg_at_10
value: 72.21
- type: ndcg_at_100
value: 75.264
- type: ndcg_at_1000
value: 76.259
- type: ndcg_at_20
value: 73.628
- type: ndcg_at_3
value: 67.047
- type: ndcg_at_5
value: 69.974
- type: precision_at_1
value: 78.974
- type: precision_at_10
value: 15.267
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.189
- type: precision_at_20
value: 8.09
- type: precision_at_3
value: 43.309
- type: precision_at_5
value: 28.294000000000004
- type: recall_at_1
value: 39.487
- type: recall_at_10
value: 76.334
- type: recall_at_100
value: 88.076
- type: recall_at_1000
value: 94.59100000000001
- type: recall_at_20
value: 80.898
- type: recall_at_3
value: 64.96300000000001
- type: recall_at_5
value: 70.736
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 42.027
- type: map_at_1
value: 22.118
- type: map_at_10
value: 34.816
- type: map_at_100
value: 35.983
- type: map_at_1000
value: 36.028999999999996
- type: map_at_20
value: 35.545
- type: map_at_3
value: 30.752000000000002
- type: map_at_5
value: 33.114
- type: mrr_at_1
value: 22.793696275071635
- type: mrr_at_10
value: 35.47250079592483
- type: mrr_at_100
value: 36.576471512902856
- type: mrr_at_1000
value: 36.616205680509786
- type: mrr_at_20
value: 36.16557033864942
- type: mrr_at_3
value: 31.48758357211065
- type: mrr_at_5
value: 33.80563514804202
- type: nauc_map_at_1000_diff1
value: 32.89234100489284
- type: nauc_map_at_1000_max
value: 1.1802816553581001
- type: nauc_map_at_1000_std
value: -20.187692925732446
- type: nauc_map_at_100_diff1
value: 32.88694493681772
- type: nauc_map_at_100_max
value: 1.1732717578080365
- type: nauc_map_at_100_std
value: -20.164165529035245
- type: nauc_map_at_10_diff1
value: 32.826182211848796
- type: nauc_map_at_10_max
value: 1.1551262165737235
- type: nauc_map_at_10_std
value: -20.88326292319754
- type: nauc_map_at_1_diff1
value: 36.12732122790642
- type: nauc_map_at_1_max
value: 1.8197550109156913
- type: nauc_map_at_1_std
value: -17.205625720792167
- type: nauc_map_at_20_diff1
value: 32.83333177195551
- type: nauc_map_at_20_max
value: 1.0937431645506202
- type: nauc_map_at_20_std
value: -20.503956514646145
- type: nauc_map_at_3_diff1
value: 32.76264193805814
- type: nauc_map_at_3_max
value: 0.8560962042500389
- type: nauc_map_at_3_std
value: -20.608930717315577
- type: nauc_map_at_5_diff1
value: 32.78673238978775
- type: nauc_map_at_5_max
value: 1.0511863039329437
- type: nauc_map_at_5_std
value: -21.02164728626011
- type: nauc_mrr_at_1000_diff1
value: 32.610323934702286
- type: nauc_mrr_at_1000_max
value: 1.276669121901405
- type: nauc_mrr_at_1000_std
value: -19.908120615285043
- type: nauc_mrr_at_100_diff1
value: 32.601373758102795
- type: nauc_mrr_at_100_max
value: 1.2752735149992132
- type: nauc_mrr_at_100_std
value: -19.87937042610101
- type: nauc_mrr_at_10_diff1
value: 32.55795432078168
- type: nauc_mrr_at_10_max
value: 1.2881786969258637
- type: nauc_mrr_at_10_std
value: -20.54564519015977
- type: nauc_mrr_at_1_diff1
value: 35.596301376443726
- type: nauc_mrr_at_1_max
value: 1.7633238037306902
- type: nauc_mrr_at_1_std
value: -17.1999420019887
- type: nauc_mrr_at_20_diff1
value: 32.57185739111023
- type: nauc_mrr_at_20_max
value: 1.2212620853201877
- type: nauc_mrr_at_20_std
value: -20.179517281041264
- type: nauc_mrr_at_3_diff1
value: 32.42681377099514
- type: nauc_mrr_at_3_max
value: 0.8745921708861145
- type: nauc_mrr_at_3_std
value: -20.41017687790572
- type: nauc_mrr_at_5_diff1
value: 32.499107129648266
- type: nauc_mrr_at_5_max
value: 1.1159673851851573
- type: nauc_mrr_at_5_std
value: -20.695143502133824
- type: nauc_ndcg_at_1000_diff1
value: 32.16957965806702
- type: nauc_ndcg_at_1000_max
value: 1.6763998947980905
- type: nauc_ndcg_at_1000_std
value: -18.970592350332893
- type: nauc_ndcg_at_100_diff1
value: 31.977550102558872
- type: nauc_ndcg_at_100_max
value: 1.5625858650110014
- type: nauc_ndcg_at_100_std
value: -17.990456766123835
- type: nauc_ndcg_at_10_diff1
value: 31.82738932481356
- type: nauc_ndcg_at_10_max
value: 1.1661362042692103
- type: nauc_ndcg_at_10_std
value: -21.872680193994217
- type: nauc_ndcg_at_1_diff1
value: 35.596301376443726
- type: nauc_ndcg_at_1_max
value: 1.7633238037306902
- type: nauc_ndcg_at_1_std
value: -17.1999420019887
- type: nauc_ndcg_at_20_diff1
value: 31.749656399266264
- type: nauc_ndcg_at_20_max
value: 0.9629024493088691
- type: nauc_ndcg_at_20_std
value: -20.4379403899277
- type: nauc_ndcg_at_3_diff1
value: 31.731361436850836
- type: nauc_ndcg_at_3_max
value: 0.531749791578849
- type: nauc_ndcg_at_3_std
value: -21.551112910698674
- type: nauc_ndcg_at_5_diff1
value: 31.785373941157303
- type: nauc_ndcg_at_5_max
value: 0.86207769368333
- type: nauc_ndcg_at_5_std
value: -22.24923399160171
- type: nauc_precision_at_1000_diff1
value: -3.841288331986519
- type: nauc_precision_at_1000_max
value: 13.558041371634976
- type: nauc_precision_at_1000_std
value: 15.181510484512827
- type: nauc_precision_at_100_diff1
value: 12.441154582709053
- type: nauc_precision_at_100_max
value: 8.428136255841935
- type: nauc_precision_at_100_std
value: 14.710391839731656
- type: nauc_precision_at_10_diff1
value: 26.185854813986705
- type: nauc_precision_at_10_max
value: 1.6348387310504464
- type: nauc_precision_at_10_std
value: -23.448927004357298
- type: nauc_precision_at_1_diff1
value: 35.596301376443726
- type: nauc_precision_at_1_max
value: 1.7633238037306902
- type: nauc_precision_at_1_std
value: -17.1999420019887
- type: nauc_precision_at_20_diff1
value: 22.69194179544158
- type: nauc_precision_at_20_max
value: 1.2972015009169306
- type: nauc_precision_at_20_std
value: -15.751482380060269
- type: nauc_precision_at_3_diff1
value: 28.255531512125188
- type: nauc_precision_at_3_max
value: -0.3715575458464333
- type: nauc_precision_at_3_std
value: -24.227970454057697
- type: nauc_precision_at_5_diff1
value: 27.65497951098847
- type: nauc_precision_at_5_max
value: 0.449773375292472
- type: nauc_precision_at_5_std
value: -25.37445450938601
- type: nauc_recall_at_1000_diff1
value: 15.243948516763819
- type: nauc_recall_at_1000_max
value: 41.821227805251375
- type: nauc_recall_at_1000_std
value: 61.66297794838101
- type: nauc_recall_at_100_diff1
value: 24.516543685029994
- type: nauc_recall_at_100_max
value: 7.093972966253228
- type: nauc_recall_at_100_std
value: 17.244452321212282
- type: nauc_recall_at_10_diff1
value: 28.404243095182828
- type: nauc_recall_at_10_max
value: 1.0805210480930945
- type: nauc_recall_at_10_std
value: -24.885018657039527
- type: nauc_recall_at_1_diff1
value: 36.12732122790642
- type: nauc_recall_at_1_max
value: 1.8197550109156913
- type: nauc_recall_at_1_std
value: -17.205625720792167
- type: nauc_recall_at_20_diff1
value: 26.956250169438512
- type: nauc_recall_at_20_max
value: 0.023973408161285917
- type: nauc_recall_at_20_std
value: -18.32944444428131
- type: nauc_recall_at_3_diff1
value: 28.9894205130054
- type: nauc_recall_at_3_max
value: -0.36140658021466865
- type: nauc_recall_at_3_std
value: -24.022505107768364
- type: nauc_recall_at_5_diff1
value: 28.907023434955104
- type: nauc_recall_at_5_max
value: 0.2501037567297729
- type: nauc_recall_at_5_std
value: -25.719919602271496
- type: ndcg_at_1
value: 22.794
- type: ndcg_at_10
value: 42.027
- type: ndcg_at_100
value: 47.601
- type: ndcg_at_1000
value: 48.713
- type: ndcg_at_20
value: 44.623000000000005
- type: ndcg_at_3
value: 33.772999999999996
- type: ndcg_at_5
value: 37.991
- type: precision_at_1
value: 22.794
- type: precision_at_10
value: 6.711
- type: precision_at_100
value: 0.9490000000000001
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 3.8920000000000003
- type: precision_at_3
value: 14.46
- type: precision_at_5
value: 10.822
- type: recall_at_1
value: 22.118
- type: recall_at_10
value: 64.201
- type: recall_at_100
value: 89.878
- type: recall_at_1000
value: 98.259
- type: recall_at_20
value: 74.34100000000001
- type: recall_at_3
value: 41.8
- type: recall_at_5
value: 51.959
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 36.201
- type: map_at_1
value: 5.654
- type: map_at_10
value: 13.402
- type: map_at_100
value: 16.849
- type: map_at_1000
value: 18.264
- type: map_at_20
value: 14.832
- type: map_at_3
value: 9.619
- type: map_at_5
value: 11.483
- type: mrr_at_1
value: 47.6780185758514
- type: mrr_at_10
value: 56.47906531033466
- type: mrr_at_100
value: 57.04539749991402
- type: mrr_at_1000
value: 57.08810157607369
- type: mrr_at_20
value: 56.88003170105462
- type: mrr_at_3
value: 54.43756449948401
- type: mrr_at_5
value: 55.660474716202266
- type: nauc_map_at_1000_diff1
value: 31.134615238698192
- type: nauc_map_at_1000_max
value: 36.09522002487132
- type: nauc_map_at_1000_std
value: 14.72627666649002
- type: nauc_map_at_100_diff1
value: 32.777473351864444
- type: nauc_map_at_100_max
value: 35.25391471621035
- type: nauc_map_at_100_std
value: 12.024428973861083
- type: nauc_map_at_10_diff1
value: 36.46466466148528
- type: nauc_map_at_10_max
value: 29.707805406826722
- type: nauc_map_at_10_std
value: 2.0678757794226335
- type: nauc_map_at_1_diff1
value: 54.30208426149679
- type: nauc_map_at_1_max
value: 18.69125148481608
- type: nauc_map_at_1_std
value: -8.970955660291802
- type: nauc_map_at_20_diff1
value: 34.76513311600623
- type: nauc_map_at_20_max
value: 32.20666003570514
- type: nauc_map_at_20_std
value: 5.924889441518581
- type: nauc_map_at_3_diff1
value: 45.73465176835491
- type: nauc_map_at_3_max
value: 23.492291524989106
- type: nauc_map_at_3_std
value: -5.0123536561688855
- type: nauc_map_at_5_diff1
value: 39.7128319374107
- type: nauc_map_at_5_max
value: 25.84231729559691
- type: nauc_map_at_5_std
value: -2.0861428981140344
- type: nauc_mrr_at_1000_diff1
value: 33.0997881703397
- type: nauc_mrr_at_1000_max
value: 52.7089709923531
- type: nauc_mrr_at_1000_std
value: 28.8517952674151
- type: nauc_mrr_at_100_diff1
value: 33.1094984027438
- type: nauc_mrr_at_100_max
value: 52.74301398138847
- type: nauc_mrr_at_100_std
value: 28.897997840300892
- type: nauc_mrr_at_10_diff1
value: 33.300713655464925
- type: nauc_mrr_at_10_max
value: 52.572139698742184
- type: nauc_mrr_at_10_std
value: 28.66875615527188
- type: nauc_mrr_at_1_diff1
value: 32.57632582147155
- type: nauc_mrr_at_1_max
value: 46.020072246328816
- type: nauc_mrr_at_1_std
value: 20.99097889820076
- type: nauc_mrr_at_20_diff1
value: 33.04083904518949
- type: nauc_mrr_at_20_max
value: 52.597451362456994
- type: nauc_mrr_at_20_std
value: 28.681527293587898
- type: nauc_mrr_at_3_diff1
value: 33.64864656322754
- type: nauc_mrr_at_3_max
value: 51.82256412011279
- type: nauc_mrr_at_3_std
value: 27.241260746740686
- type: nauc_mrr_at_5_diff1
value: 33.53201325467246
- type: nauc_mrr_at_5_max
value: 52.79440885773516
- type: nauc_mrr_at_5_std
value: 28.663081392086028
- type: nauc_ndcg_at_1000_diff1
value: 28.632650542040714
- type: nauc_ndcg_at_1000_max
value: 51.24103069835822
- type: nauc_ndcg_at_1000_std
value: 35.05503784757999
- type: nauc_ndcg_at_100_diff1
value: 29.082177715298503
- type: nauc_ndcg_at_100_max
value: 45.24750203464315
- type: nauc_ndcg_at_100_std
value: 27.146548925680914
- type: nauc_ndcg_at_10_diff1
value: 25.123554466093594
- type: nauc_ndcg_at_10_max
value: 42.74355537806512
- type: nauc_ndcg_at_10_std
value: 22.234407997803935
- type: nauc_ndcg_at_1_diff1
value: 33.75083940012058
- type: nauc_ndcg_at_1_max
value: 44.44319402133161
- type: nauc_ndcg_at_1_std
value: 19.146499358406487
- type: nauc_ndcg_at_20_diff1
value: 24.954207968331872
- type: nauc_ndcg_at_20_max
value: 41.25991844405748
- type: nauc_ndcg_at_20_std
value: 22.169009285868864
- type: nauc_ndcg_at_3_diff1
value: 28.186539942033516
- type: nauc_ndcg_at_3_max
value: 44.40790009754965
- type: nauc_ndcg_at_3_std
value: 20.99226576085115
- type: nauc_ndcg_at_5_diff1
value: 25.498387899376706
- type: nauc_ndcg_at_5_max
value: 43.174709766261316
- type: nauc_ndcg_at_5_std
value: 21.88111962672031
- type: nauc_precision_at_1000_diff1
value: -16.22321012507648
- type: nauc_precision_at_1000_max
value: 5.808852256649677
- type: nauc_precision_at_1000_std
value: 19.875641776698824
- type: nauc_precision_at_100_diff1
value: -10.248089374355486
- type: nauc_precision_at_100_max
value: 19.29065415127588
- type: nauc_precision_at_100_std
value: 31.75019665627339
- type: nauc_precision_at_10_diff1
value: 3.6783257583955056
- type: nauc_precision_at_10_max
value: 39.22286010695767
- type: nauc_precision_at_10_std
value: 31.225485732801022
- type: nauc_precision_at_1_diff1
value: 32.57632582147155
- type: nauc_precision_at_1_max
value: 46.020072246328816
- type: nauc_precision_at_1_std
value: 20.99097889820076
- type: nauc_precision_at_20_diff1
value: -3.1632510833242784
- type: nauc_precision_at_20_max
value: 31.575496762405734
- type: nauc_precision_at_20_std
value: 31.576283324468115
- type: nauc_precision_at_3_diff1
value: 17.78864585545647
- type: nauc_precision_at_3_max
value: 44.201289661125585
- type: nauc_precision_at_3_std
value: 25.447840649726693
- type: nauc_precision_at_5_diff1
value: 9.986748662091358
- type: nauc_precision_at_5_max
value: 41.214164860776755
- type: nauc_precision_at_5_std
value: 28.22551704127726
- type: nauc_recall_at_1000_diff1
value: 10.984331766850506
- type: nauc_recall_at_1000_max
value: 24.641216018034104
- type: nauc_recall_at_1000_std
value: 26.91064221008446
- type: nauc_recall_at_100_diff1
value: 23.7009352078473
- type: nauc_recall_at_100_max
value: 30.176031609451297
- type: nauc_recall_at_100_std
value: 20.360365243211564
- type: nauc_recall_at_10_diff1
value: 28.11831737650638
- type: nauc_recall_at_10_max
value: 24.21539670487414
- type: nauc_recall_at_10_std
value: 2.245504974150148
- type: nauc_recall_at_1_diff1
value: 54.30208426149679
- type: nauc_recall_at_1_max
value: 18.69125148481608
- type: nauc_recall_at_1_std
value: -8.970955660291802
- type: nauc_recall_at_20_diff1
value: 26.199425305139908
- type: nauc_recall_at_20_max
value: 24.66704097503736
- type: nauc_recall_at_20_std
value: 5.86052107206246
- type: nauc_recall_at_3_diff1
value: 42.88348677575622
- type: nauc_recall_at_3_max
value: 21.189371077603308
- type: nauc_recall_at_3_std
value: -4.537510127238226
- type: nauc_recall_at_5_diff1
value: 30.7936756722569
- type: nauc_recall_at_5_max
value: 21.06136406164962
- type: nauc_recall_at_5_std
value: -1.4113804735229794
- type: ndcg_at_1
value: 45.975
- type: ndcg_at_10
value: 36.201
- type: ndcg_at_100
value: 32.736
- type: ndcg_at_1000
value: 41.099000000000004
- type: ndcg_at_20
value: 33.724
- type: ndcg_at_3
value: 42.242000000000004
- type: ndcg_at_5
value: 40.137
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 26.904
- type: precision_at_100
value: 8.368
- type: precision_at_1000
value: 2.078
- type: precision_at_20
value: 19.845
- type: precision_at_3
value: 40.351
- type: precision_at_5
value: 35.108
- type: recall_at_1
value: 5.654
- type: recall_at_10
value: 17.793
- type: recall_at_100
value: 32.483000000000004
- type: recall_at_1000
value: 63.294
- type: recall_at_20
value: 21.754
- type: recall_at_3
value: 10.771
- type: recall_at_5
value: 14.084
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 62.464
- type: map_at_1
value: 38.0
- type: map_at_10
value: 54.806
- type: map_at_100
value: 55.599
- type: map_at_1000
value: 55.617000000000004
- type: map_at_20
value: 55.336
- type: map_at_3
value: 50.58200000000001
- type: map_at_5
value: 53.181
- type: mrr_at_1
value: 42.46813441483198
- type: mrr_at_10
value: 57.060710147326446
- type: mrr_at_100
value: 57.60978373431328
- type: mrr_at_1000
value: 57.62192762809547
- type: mrr_at_20
value: 57.43431796174232
- type: mrr_at_3
value: 53.78041714947835
- type: mrr_at_5
value: 55.81257242178437
- type: nauc_map_at_1000_diff1
value: 38.337572188308194
- type: nauc_map_at_1000_max
value: 27.550035254787197
- type: nauc_map_at_1000_std
value: -7.5513729587308145
- type: nauc_map_at_100_diff1
value: 38.335337794455015
- type: nauc_map_at_100_max
value: 27.56919614414171
- type: nauc_map_at_100_std
value: -7.526017855405723
- type: nauc_map_at_10_diff1
value: 38.308131361353816
- type: nauc_map_at_10_max
value: 27.691849580929933
- type: nauc_map_at_10_std
value: -7.971461731555123
- type: nauc_map_at_1_diff1
value: 42.721072690634884
- type: nauc_map_at_1_max
value: 21.750451486885332
- type: nauc_map_at_1_std
value: -9.99540950522643
- type: nauc_map_at_20_diff1
value: 38.25792874982169
- type: nauc_map_at_20_max
value: 27.68877906159661
- type: nauc_map_at_20_std
value: -7.560753583212102
- type: nauc_map_at_3_diff1
value: 37.950570055936254
- type: nauc_map_at_3_max
value: 26.257969511794858
- type: nauc_map_at_3_std
value: -9.236868658300553
- type: nauc_map_at_5_diff1
value: 37.99893219450212
- type: nauc_map_at_5_max
value: 27.293454259158057
- type: nauc_map_at_5_std
value: -8.734089449603806
- type: nauc_mrr_at_1000_diff1
value: 37.777767467474774
- type: nauc_mrr_at_1000_max
value: 27.39507603748298
- type: nauc_mrr_at_1000_std
value: -5.554754076870114
- type: nauc_mrr_at_100_diff1
value: 37.77981674583538
- type: nauc_mrr_at_100_max
value: 27.411100989441557
- type: nauc_mrr_at_100_std
value: -5.539061231412731
- type: nauc_mrr_at_10_diff1
value: 37.72399003363479
- type: nauc_mrr_at_10_max
value: 27.618142546685416
- type: nauc_mrr_at_10_std
value: -5.6819843907448195
- type: nauc_mrr_at_1_diff1
value: 41.17596078958236
- type: nauc_mrr_at_1_max
value: 23.32588591818617
- type: nauc_mrr_at_1_std
value: -7.126628034623689
- type: nauc_mrr_at_20_diff1
value: 37.695136721588
- type: nauc_mrr_at_20_max
value: 27.52850676467322
- type: nauc_mrr_at_20_std
value: -5.50667995515647
- type: nauc_mrr_at_3_diff1
value: 37.23845700908964
- type: nauc_mrr_at_3_max
value: 26.69389772971012
- type: nauc_mrr_at_3_std
value: -6.31868405989011
- type: nauc_mrr_at_5_diff1
value: 37.33757394192838
- type: nauc_mrr_at_5_max
value: 27.42091593836207
- type: nauc_mrr_at_5_std
value: -5.993243330132065
- type: nauc_ndcg_at_1000_diff1
value: 37.74836061640332
- type: nauc_ndcg_at_1000_max
value: 29.03148916289089
- type: nauc_ndcg_at_1000_std
value: -5.543065770074502
- type: nauc_ndcg_at_100_diff1
value: 37.75593955089626
- type: nauc_ndcg_at_100_max
value: 29.67109480272493
- type: nauc_ndcg_at_100_std
value: -4.773697596687493
- type: nauc_ndcg_at_10_diff1
value: 37.41701174824348
- type: nauc_ndcg_at_10_max
value: 30.448703434043445
- type: nauc_ndcg_at_10_std
value: -6.306202666419071
- type: nauc_ndcg_at_1_diff1
value: 41.17596078958236
- type: nauc_ndcg_at_1_max
value: 23.32588591818617
- type: nauc_ndcg_at_1_std
value: -7.126628034623689
- type: nauc_ndcg_at_20_diff1
value: 37.17445197824622
- type: nauc_ndcg_at_20_max
value: 30.47378561555209
- type: nauc_ndcg_at_20_std
value: -4.921584853993488
- type: nauc_ndcg_at_3_diff1
value: 36.5261976812068
- type: nauc_ndcg_at_3_max
value: 27.560538820208926
- type: nauc_ndcg_at_3_std
value: -8.556686332882931
- type: nauc_ndcg_at_5_diff1
value: 36.571462759614526
- type: nauc_ndcg_at_5_max
value: 29.363401730752585
- type: nauc_ndcg_at_5_std
value: -7.825739170420347
- type: nauc_precision_at_1000_diff1
value: -12.588899483401223
- type: nauc_precision_at_1000_max
value: 2.641097890578701
- type: nauc_precision_at_1000_std
value: 17.643107625788748
- type: nauc_precision_at_100_diff1
value: -8.40579874206785
- type: nauc_precision_at_100_max
value: 9.725496771040037
- type: nauc_precision_at_100_std
value: 21.558582760191243
- type: nauc_precision_at_10_diff1
value: 6.619157191854486
- type: nauc_precision_at_10_max
value: 23.767406373688402
- type: nauc_precision_at_10_std
value: 10.428535003478808
- type: nauc_precision_at_1_diff1
value: 41.17596078958236
- type: nauc_precision_at_1_max
value: 23.32588591818617
- type: nauc_precision_at_1_std
value: -7.126628034623689
- type: nauc_precision_at_20_diff1
value: -0.6449974218292859
- type: nauc_precision_at_20_max
value: 20.211503851418783
- type: nauc_precision_at_20_std
value: 17.922745410142575
- type: nauc_precision_at_3_diff1
value: 19.710276097428657
- type: nauc_precision_at_3_max
value: 26.768918044758706
- type: nauc_precision_at_3_std
value: -1.0636448912049246
- type: nauc_precision_at_5_diff1
value: 13.073181337982613
- type: nauc_precision_at_5_max
value: 26.418340338971024
- type: nauc_precision_at_5_std
value: 2.9842078949528688
- type: nauc_recall_at_1000_diff1
value: 30.52411148739828
- type: nauc_recall_at_1000_max
value: 90.96409807536762
- type: nauc_recall_at_1000_std
value: 83.94857830921949
- type: nauc_recall_at_100_diff1
value: 36.936303690592155
- type: nauc_recall_at_100_max
value: 71.91515014325869
- type: nauc_recall_at_100_std
value: 48.93061263403371
- type: nauc_recall_at_10_diff1
value: 32.84292362076269
- type: nauc_recall_at_10_max
value: 44.27252783122478
- type: nauc_recall_at_10_std
value: -1.5981198975612385
- type: nauc_recall_at_1_diff1
value: 42.721072690634884
- type: nauc_recall_at_1_max
value: 21.750451486885332
- type: nauc_recall_at_1_std
value: -9.99540950522643
- type: nauc_recall_at_20_diff1
value: 29.36724417081702
- type: nauc_recall_at_20_max
value: 52.035846390214715
- type: nauc_recall_at_20_std
value: 11.967264191332818
- type: nauc_recall_at_3_diff1
value: 31.634923771936098
- type: nauc_recall_at_3_max
value: 30.225743369869473
- type: nauc_recall_at_3_std
value: -9.253665347118615
- type: nauc_recall_at_5_diff1
value: 30.66271853090737
- type: nauc_recall_at_5_max
value: 35.70815715994996
- type: nauc_recall_at_5_std
value: -7.836012956078996
- type: ndcg_at_1
value: 42.468
- type: ndcg_at_10
value: 62.464
- type: ndcg_at_100
value: 65.618
- type: ndcg_at_1000
value: 66.014
- type: ndcg_at_20
value: 64.12
- type: ndcg_at_3
value: 54.790000000000006
- type: ndcg_at_5
value: 58.992
- type: precision_at_1
value: 42.468
- type: precision_at_10
value: 9.959
- type: precision_at_100
value: 1.174
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.380999999999999
- type: precision_at_3
value: 24.73
- type: precision_at_5
value: 17.299999999999997
- type: recall_at_1
value: 38.0
- type: recall_at_10
value: 83.22699999999999
- type: recall_at_100
value: 96.584
- type: recall_at_1000
value: 99.512
- type: recall_at_20
value: 89.291
- type: recall_at_3
value: 63.666
- type: recall_at_5
value: 73.27900000000001
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 87.366
- type: map_at_1
value: 69.95700000000001
- type: map_at_10
value: 83.55
- type: map_at_100
value: 84.196
- type: map_at_1000
value: 84.21600000000001
- type: map_at_20
value: 83.982
- type: map_at_3
value: 80.647
- type: map_at_5
value: 82.443
- type: mrr_at_1
value: 80.39
- type: mrr_at_10
value: 86.65646031746004
- type: mrr_at_100
value: 86.7852113210373
- type: mrr_at_1000
value: 86.78651118354796
- type: mrr_at_20
value: 86.75772838878498
- type: mrr_at_3
value: 85.67499999999971
- type: mrr_at_5
value: 86.33749999999962
- type: nauc_map_at_1000_diff1
value: 76.68189702770007
- type: nauc_map_at_1000_max
value: 36.19988239025682
- type: nauc_map_at_1000_std
value: -26.231691135645736
- type: nauc_map_at_100_diff1
value: 76.68832712120171
- type: nauc_map_at_100_max
value: 36.18627717337547
- type: nauc_map_at_100_std
value: -26.28243886166
- type: nauc_map_at_10_diff1
value: 76.88888516032657
- type: nauc_map_at_10_max
value: 35.69809861085124
- type: nauc_map_at_10_std
value: -27.859425473864224
- type: nauc_map_at_1_diff1
value: 79.5243725217315
- type: nauc_map_at_1_max
value: 27.092773841207002
- type: nauc_map_at_1_std
value: -26.223200911204543
- type: nauc_map_at_20_diff1
value: 76.74938996155176
- type: nauc_map_at_20_max
value: 36.07373781351406
- type: nauc_map_at_20_std
value: -26.891400098628015
- type: nauc_map_at_3_diff1
value: 77.29604745045076
- type: nauc_map_at_3_max
value: 33.11431059356283
- type: nauc_map_at_3_std
value: -29.555237195931085
- type: nauc_map_at_5_diff1
value: 77.14069217901078
- type: nauc_map_at_5_max
value: 34.68656073526487
- type: nauc_map_at_5_std
value: -28.945053669861508
- type: nauc_mrr_at_1000_diff1
value: 76.66087451567746
- type: nauc_mrr_at_1000_max
value: 38.78133177265328
- type: nauc_mrr_at_1000_std
value: -23.75726541774991
- type: nauc_mrr_at_100_diff1
value: 76.66117078261013
- type: nauc_mrr_at_100_max
value: 38.782533036423885
- type: nauc_mrr_at_100_std
value: -23.752587601473568
- type: nauc_mrr_at_10_diff1
value: 76.65866401411019
- type: nauc_mrr_at_10_max
value: 38.87950311049704
- type: nauc_mrr_at_10_std
value: -23.873660706680578
- type: nauc_mrr_at_1_diff1
value: 77.42633506487041
- type: nauc_mrr_at_1_max
value: 37.93973722217786
- type: nauc_mrr_at_1_std
value: -23.3984130771317
- type: nauc_mrr_at_20_diff1
value: 76.66210684923414
- type: nauc_mrr_at_20_max
value: 38.81293033048911
- type: nauc_mrr_at_20_std
value: -23.736590746133736
- type: nauc_mrr_at_3_diff1
value: 76.33711764736019
- type: nauc_mrr_at_3_max
value: 38.5659231830368
- type: nauc_mrr_at_3_std
value: -23.99588149124865
- type: nauc_mrr_at_5_diff1
value: 76.57123830226054
- type: nauc_mrr_at_5_max
value: 38.97947097392977
- type: nauc_mrr_at_5_std
value: -23.943668957974246
- type: nauc_ndcg_at_1000_diff1
value: 76.38447339050585
- type: nauc_ndcg_at_1000_max
value: 37.756822792877934
- type: nauc_ndcg_at_1000_std
value: -24.046995734357164
- type: nauc_ndcg_at_100_diff1
value: 76.44058018066822
- type: nauc_ndcg_at_100_max
value: 37.72948294169218
- type: nauc_ndcg_at_100_std
value: -24.083432140741795
- type: nauc_ndcg_at_10_diff1
value: 76.56246287923074
- type: nauc_ndcg_at_10_max
value: 37.0329253490553
- type: nauc_ndcg_at_10_std
value: -26.6495163705961
- type: nauc_ndcg_at_1_diff1
value: 77.4085129990432
- type: nauc_ndcg_at_1_max
value: 38.06139172214421
- type: nauc_ndcg_at_1_std
value: -23.656477126977386
- type: nauc_ndcg_at_20_diff1
value: 76.50192496743098
- type: nauc_ndcg_at_20_max
value: 37.51759311013985
- type: nauc_ndcg_at_20_std
value: -25.45517058360004
- type: nauc_ndcg_at_3_diff1
value: 75.94398494081794
- type: nauc_ndcg_at_3_max
value: 35.7666711547279
- type: nauc_ndcg_at_3_std
value: -26.866022682361578
- type: nauc_ndcg_at_5_diff1
value: 76.47334274088344
- type: nauc_ndcg_at_5_max
value: 36.40830331490731
- type: nauc_ndcg_at_5_std
value: -27.170121189572765
- type: nauc_precision_at_1000_diff1
value: -43.33672630765437
- type: nauc_precision_at_1000_max
value: -5.089751329149161
- type: nauc_precision_at_1000_std
value: 30.6241447847051
- type: nauc_precision_at_100_diff1
value: -42.736833035629864
- type: nauc_precision_at_100_max
value: -4.060198408346224
- type: nauc_precision_at_100_std
value: 29.807050266205344
- type: nauc_precision_at_10_diff1
value: -35.90810562245906
- type: nauc_precision_at_10_max
value: 1.1633204529249133
- type: nauc_precision_at_10_std
value: 20.129691203276018
- type: nauc_precision_at_1_diff1
value: 77.4085129990432
- type: nauc_precision_at_1_max
value: 38.06139172214421
- type: nauc_precision_at_1_std
value: -23.656477126977386
- type: nauc_precision_at_20_diff1
value: -40.2132286912738
- type: nauc_precision_at_20_max
value: -1.3004735030734194
- type: nauc_precision_at_20_std
value: 25.15612293757488
- type: nauc_precision_at_3_diff1
value: -13.873825299883904
- type: nauc_precision_at_3_max
value: 11.038689278907233
- type: nauc_precision_at_3_std
value: 5.4276449621706
- type: nauc_precision_at_5_diff1
value: -27.151668633894737
- type: nauc_precision_at_5_max
value: 5.795130010163115
- type: nauc_precision_at_5_std
value: 13.220722167587375
- type: nauc_recall_at_1000_diff1
value: 83.903950427863
- type: nauc_recall_at_1000_max
value: 37.82919000897223
- type: nauc_recall_at_1000_std
value: 70.65670846771707
- type: nauc_recall_at_100_diff1
value: 75.23306095335836
- type: nauc_recall_at_100_max
value: 37.54281648247423
- type: nauc_recall_at_100_std
value: 8.434289114377373
- type: nauc_recall_at_10_diff1
value: 72.7872912723047
- type: nauc_recall_at_10_max
value: 34.261519652104184
- type: nauc_recall_at_10_std
value: -34.60101950810808
- type: nauc_recall_at_1_diff1
value: 79.5243725217315
- type: nauc_recall_at_1_max
value: 27.092773841207002
- type: nauc_recall_at_1_std
value: -26.223200911204543
- type: nauc_recall_at_20_diff1
value: 72.8297963091964
- type: nauc_recall_at_20_max
value: 36.070220569670916
- type: nauc_recall_at_20_std
value: -27.20897179168245
- type: nauc_recall_at_3_diff1
value: 73.47456374650459
- type: nauc_recall_at_3_max
value: 29.901663407294816
- type: nauc_recall_at_3_std
value: -32.83329537040381
- type: nauc_recall_at_5_diff1
value: 73.05025750827126
- type: nauc_recall_at_5_max
value: 32.35733470860963
- type: nauc_recall_at_5_std
value: -34.32357558493091
- type: ndcg_at_1
value: 80.4
- type: ndcg_at_10
value: 87.366
- type: ndcg_at_100
value: 88.7
- type: ndcg_at_1000
value: 88.842
- type: ndcg_at_20
value: 88.11
- type: ndcg_at_3
value: 84.52499999999999
- type: ndcg_at_5
value: 86.047
- type: precision_at_1
value: 80.4
- type: precision_at_10
value: 13.235
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.156
- type: precision_at_20
value: 7.037
- type: precision_at_3
value: 36.9
- type: precision_at_5
value: 24.236
- type: recall_at_1
value: 69.95700000000001
- type: recall_at_10
value: 94.535
- type: recall_at_100
value: 99.164
- type: recall_at_1000
value: 99.855
- type: recall_at_20
value: 96.974
- type: recall_at_3
value: 86.33800000000001
- type: recall_at_5
value: 90.69
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 21.492
- type: map_at_1
value: 5.192
- type: map_at_10
value: 12.959000000000001
- type: map_at_100
value: 14.963999999999999
- type: map_at_1000
value: 15.261
- type: map_at_20
value: 13.988999999999999
- type: map_at_3
value: 9.235
- type: map_at_5
value: 11.042
- type: mrr_at_1
value: 25.5
- type: mrr_at_10
value: 36.37313492063491
- type: mrr_at_100
value: 37.36517957347626
- type: mrr_at_1000
value: 37.42538601073437
- type: mrr_at_20
value: 36.987896404421136
- type: mrr_at_3
value: 32.966666666666654
- type: mrr_at_5
value: 34.95166666666664
- type: nauc_map_at_1000_diff1
value: 13.635120934154395
- type: nauc_map_at_1000_max
value: 28.03542983005195
- type: nauc_map_at_1000_std
value: 17.07156940311778
- type: nauc_map_at_100_diff1
value: 13.59237295184475
- type: nauc_map_at_100_max
value: 27.992291365051237
- type: nauc_map_at_100_std
value: 16.926533467400464
- type: nauc_map_at_10_diff1
value: 14.149193235999993
- type: nauc_map_at_10_max
value: 26.520643811139305
- type: nauc_map_at_10_std
value: 13.168673602548925
- type: nauc_map_at_1_diff1
value: 20.096094508148465
- type: nauc_map_at_1_max
value: 17.41582245576302
- type: nauc_map_at_1_std
value: 5.771729007558897
- type: nauc_map_at_20_diff1
value: 13.977726400526427
- type: nauc_map_at_20_max
value: 27.2322235491895
- type: nauc_map_at_20_std
value: 14.972781677750435
- type: nauc_map_at_3_diff1
value: 17.371153027460355
- type: nauc_map_at_3_max
value: 24.457758503208254
- type: nauc_map_at_3_std
value: 7.719726821179824
- type: nauc_map_at_5_diff1
value: 14.600442843442574
- type: nauc_map_at_5_max
value: 25.899736370856296
- type: nauc_map_at_5_std
value: 10.125349354853359
- type: nauc_mrr_at_1000_diff1
value: 18.70342821390236
- type: nauc_mrr_at_1000_max
value: 23.365194520549114
- type: nauc_mrr_at_1000_std
value: 12.185114294903236
- type: nauc_mrr_at_100_diff1
value: 18.677858738015907
- type: nauc_mrr_at_100_max
value: 23.372641996726742
- type: nauc_mrr_at_100_std
value: 12.216130561991909
- type: nauc_mrr_at_10_diff1
value: 18.79094453090232
- type: nauc_mrr_at_10_max
value: 23.511686337006466
- type: nauc_mrr_at_10_std
value: 11.879716687008134
- type: nauc_mrr_at_1_diff1
value: 20.10455171810408
- type: nauc_mrr_at_1_max
value: 17.741566234315428
- type: nauc_mrr_at_1_std
value: 6.1676764583652215
- type: nauc_mrr_at_20_diff1
value: 18.70143648544655
- type: nauc_mrr_at_20_max
value: 23.45603239095019
- type: nauc_mrr_at_20_std
value: 12.244613576686202
- type: nauc_mrr_at_3_diff1
value: 18.894662528857374
- type: nauc_mrr_at_3_max
value: 23.3739038101588
- type: nauc_mrr_at_3_std
value: 10.4709044796543
- type: nauc_mrr_at_5_diff1
value: 18.877786065095563
- type: nauc_mrr_at_5_max
value: 23.78061081203872
- type: nauc_mrr_at_5_std
value: 11.847882917869622
- type: nauc_ndcg_at_1000_diff1
value: 13.99159027398115
- type: nauc_ndcg_at_1000_max
value: 29.44766808611483
- type: nauc_ndcg_at_1000_std
value: 24.289749574699915
- type: nauc_ndcg_at_100_diff1
value: 13.164020363258746
- type: nauc_ndcg_at_100_max
value: 29.642442997167723
- type: nauc_ndcg_at_100_std
value: 23.761764515453866
- type: nauc_ndcg_at_10_diff1
value: 14.839883268638546
- type: nauc_ndcg_at_10_max
value: 27.21043708455449
- type: nauc_ndcg_at_10_std
value: 15.56110419291775
- type: nauc_ndcg_at_1_diff1
value: 20.10455171810408
- type: nauc_ndcg_at_1_max
value: 17.741566234315428
- type: nauc_ndcg_at_1_std
value: 6.1676764583652215
- type: nauc_ndcg_at_20_diff1
value: 14.27998110295395
- type: nauc_ndcg_at_20_max
value: 28.2492026337839
- type: nauc_ndcg_at_20_std
value: 18.822356982979105
- type: nauc_ndcg_at_3_diff1
value: 17.659263157535445
- type: nauc_ndcg_at_3_max
value: 25.416706421591396
- type: nauc_ndcg_at_3_std
value: 9.650689638152636
- type: nauc_ndcg_at_5_diff1
value: 15.38459833918123
- type: nauc_ndcg_at_5_max
value: 26.92495519416969
- type: nauc_ndcg_at_5_std
value: 12.71017696809276
- type: nauc_precision_at_1000_diff1
value: 6.128490135458364
- type: nauc_precision_at_1000_max
value: 23.52693893261883
- type: nauc_precision_at_1000_std
value: 36.280432732819925
- type: nauc_precision_at_100_diff1
value: 5.306163791220436
- type: nauc_precision_at_100_max
value: 27.67851033239246
- type: nauc_precision_at_100_std
value: 34.29821573752515
- type: nauc_precision_at_10_diff1
value: 10.829686435425472
- type: nauc_precision_at_10_max
value: 27.201648684015318
- type: nauc_precision_at_10_std
value: 19.376999508233254
- type: nauc_precision_at_1_diff1
value: 20.10455171810408
- type: nauc_precision_at_1_max
value: 17.741566234315428
- type: nauc_precision_at_1_std
value: 6.1676764583652215
- type: nauc_precision_at_20_diff1
value: 9.416169626702048
- type: nauc_precision_at_20_max
value: 27.65257998670333
- type: nauc_precision_at_20_std
value: 24.761868509805826
- type: nauc_precision_at_3_diff1
value: 16.666456902017348
- type: nauc_precision_at_3_max
value: 27.9969730961105
- type: nauc_precision_at_3_std
value: 10.991562741393231
- type: nauc_precision_at_5_diff1
value: 12.26205064462843
- type: nauc_precision_at_5_max
value: 29.083848730874095
- type: nauc_precision_at_5_std
value: 15.66630836555747
- type: nauc_recall_at_1000_diff1
value: 5.600277836894063
- type: nauc_recall_at_1000_max
value: 23.228705161815526
- type: nauc_recall_at_1000_std
value: 36.822431061799485
- type: nauc_recall_at_100_diff1
value: 4.991781244867178
- type: nauc_recall_at_100_max
value: 27.70095625483475
- type: nauc_recall_at_100_std
value: 34.67168431597854
- type: nauc_recall_at_10_diff1
value: 10.580860425931972
- type: nauc_recall_at_10_max
value: 27.145829414223666
- type: nauc_recall_at_10_std
value: 19.330630157067382
- type: nauc_recall_at_1_diff1
value: 20.096094508148465
- type: nauc_recall_at_1_max
value: 17.41582245576302
- type: nauc_recall_at_1_std
value: 5.771729007558897
- type: nauc_recall_at_20_diff1
value: 9.06945331260344
- type: nauc_recall_at_20_max
value: 27.56725251066482
- type: nauc_recall_at_20_std
value: 24.77644509886098
- type: nauc_recall_at_3_diff1
value: 16.660507676429322
- type: nauc_recall_at_3_max
value: 27.816546386536434
- type: nauc_recall_at_3_std
value: 10.687824478247007
- type: nauc_recall_at_5_diff1
value: 11.992514446369388
- type: nauc_recall_at_5_max
value: 28.789031176671948
- type: nauc_recall_at_5_std
value: 15.422118990090805
- type: ndcg_at_1
value: 25.5
- type: ndcg_at_10
value: 21.492
- type: ndcg_at_100
value: 29.022
- type: ndcg_at_1000
value: 34.298
- type: ndcg_at_20
value: 24.237000000000002
- type: ndcg_at_3
value: 20.392
- type: ndcg_at_5
value: 17.801000000000002
- type: precision_at_1
value: 25.5
- type: precision_at_10
value: 11.09
- type: precision_at_100
value: 2.1919999999999997
- type: precision_at_1000
value: 0.346
- type: precision_at_20
value: 7.135
- type: precision_at_3
value: 18.933
- type: precision_at_5
value: 15.52
- type: recall_at_1
value: 5.192
- type: recall_at_10
value: 22.512999999999998
- type: recall_at_100
value: 44.505
- type: recall_at_1000
value: 70.267
- type: recall_at_20
value: 28.965000000000003
- type: recall_at_3
value: 11.522
- type: recall_at_5
value: 15.751999999999999
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 71.586
- type: map_at_1
value: 56.760999999999996
- type: map_at_10
value: 66.893
- type: map_at_100
value: 67.42
- type: map_at_1000
value: 67.44200000000001
- type: map_at_20
value: 67.232
- type: map_at_3
value: 64.193
- type: map_at_5
value: 65.73400000000001
- type: mrr_at_1
value: 60.0
- type: mrr_at_10
value: 68.20383597883595
- type: mrr_at_100
value: 68.58867453733343
- type: mrr_at_1000
value: 68.61117469977329
- type: mrr_at_20
value: 68.43973740684265
- type: mrr_at_3
value: 66.11111111111111
- type: mrr_at_5
value: 67.44444444444446
- type: nauc_map_at_1000_diff1
value: 72.66688261123035
- type: nauc_map_at_1000_max
value: 61.02926282006283
- type: nauc_map_at_1000_std
value: 11.084549829740526
- type: nauc_map_at_100_diff1
value: 72.66226192320828
- type: nauc_map_at_100_max
value: 61.04393223108811
- type: nauc_map_at_100_std
value: 11.101529343291695
- type: nauc_map_at_10_diff1
value: 72.66732266693091
- type: nauc_map_at_10_max
value: 61.24124296311832
- type: nauc_map_at_10_std
value: 10.91179451961794
- type: nauc_map_at_1_diff1
value: 74.2356464256346
- type: nauc_map_at_1_max
value: 54.06962758957632
- type: nauc_map_at_1_std
value: 0.8037891907963532
- type: nauc_map_at_20_diff1
value: 72.65198594061253
- type: nauc_map_at_20_max
value: 61.130159351448185
- type: nauc_map_at_20_std
value: 11.2246899245522
- type: nauc_map_at_3_diff1
value: 72.78578673303954
- type: nauc_map_at_3_max
value: 59.19073262936321
- type: nauc_map_at_3_std
value: 8.460301560522968
- type: nauc_map_at_5_diff1
value: 72.55004168261968
- type: nauc_map_at_5_max
value: 59.75181935082357
- type: nauc_map_at_5_std
value: 9.440299527201889
- type: nauc_mrr_at_1000_diff1
value: 72.82720348470325
- type: nauc_mrr_at_1000_max
value: 62.344231223741446
- type: nauc_mrr_at_1000_std
value: 12.60196558488974
- type: nauc_mrr_at_100_diff1
value: 72.82236849255094
- type: nauc_mrr_at_100_max
value: 62.35799491393125
- type: nauc_mrr_at_100_std
value: 12.617900773655673
- type: nauc_mrr_at_10_diff1
value: 72.7722847495086
- type: nauc_mrr_at_10_max
value: 62.66642401155435
- type: nauc_mrr_at_10_std
value: 12.906381237738746
- type: nauc_mrr_at_1_diff1
value: 74.71208073612343
- type: nauc_mrr_at_1_max
value: 59.50430394775893
- type: nauc_mrr_at_1_std
value: 8.129514198080512
- type: nauc_mrr_at_20_diff1
value: 72.78312367361772
- type: nauc_mrr_at_20_max
value: 62.421122493761885
- type: nauc_mrr_at_20_std
value: 12.693437522498588
- type: nauc_mrr_at_3_diff1
value: 73.50670156385345
- type: nauc_mrr_at_3_max
value: 62.01717537699209
- type: nauc_mrr_at_3_std
value: 11.926548252191182
- type: nauc_mrr_at_5_diff1
value: 72.62204028549876
- type: nauc_mrr_at_5_max
value: 62.319358766312085
- type: nauc_mrr_at_5_std
value: 13.081257923284342
- type: nauc_ndcg_at_1000_diff1
value: 72.29960539074736
- type: nauc_ndcg_at_1000_max
value: 62.75096959221402
- type: nauc_ndcg_at_1000_std
value: 13.81528462505362
- type: nauc_ndcg_at_100_diff1
value: 72.19985782073529
- type: nauc_ndcg_at_100_max
value: 63.18837705326287
- type: nauc_ndcg_at_100_std
value: 14.506479655117138
- type: nauc_ndcg_at_10_diff1
value: 71.85759847832983
- type: nauc_ndcg_at_10_max
value: 64.150996056865
- type: nauc_ndcg_at_10_std
value: 14.580606901634278
- type: nauc_ndcg_at_1_diff1
value: 74.71208073612343
- type: nauc_ndcg_at_1_max
value: 59.50430394775893
- type: nauc_ndcg_at_1_std
value: 8.129514198080512
- type: nauc_ndcg_at_20_diff1
value: 71.80987178228351
- type: nauc_ndcg_at_20_max
value: 63.56269460865743
- type: nauc_ndcg_at_20_std
value: 15.024978004625922
- type: nauc_ndcg_at_3_diff1
value: 72.35095651602592
- type: nauc_ndcg_at_3_max
value: 61.60548011855679
- type: nauc_ndcg_at_3_std
value: 12.048248788835263
- type: nauc_ndcg_at_5_diff1
value: 71.48615621881864
- type: nauc_ndcg_at_5_max
value: 61.72870035979784
- type: nauc_ndcg_at_5_std
value: 12.83048357446691
- type: nauc_precision_at_1000_diff1
value: -14.743011420972
- type: nauc_precision_at_1000_max
value: 19.281995763080158
- type: nauc_precision_at_1000_std
value: 49.6140660398164
- type: nauc_precision_at_100_diff1
value: 0.11278174806205563
- type: nauc_precision_at_100_max
value: 29.704511820077332
- type: nauc_precision_at_100_std
value: 47.84916954122579
- type: nauc_precision_at_10_diff1
value: 20.498227967235728
- type: nauc_precision_at_10_max
value: 47.883119365891595
- type: nauc_precision_at_10_std
value: 45.182178693450595
- type: nauc_precision_at_1_diff1
value: 74.71208073612343
- type: nauc_precision_at_1_max
value: 59.50430394775893
- type: nauc_precision_at_1_std
value: 8.129514198080512
- type: nauc_precision_at_20_diff1
value: 12.551737222341455
- type: nauc_precision_at_20_max
value: 40.618899501225634
- type: nauc_precision_at_20_std
value: 48.5598454249067
- type: nauc_precision_at_3_diff1
value: 47.67720764601145
- type: nauc_precision_at_3_max
value: 56.50632017305064
- type: nauc_precision_at_3_std
value: 31.14175140162157
- type: nauc_precision_at_5_diff1
value: 35.10058622792819
- type: nauc_precision_at_5_max
value: 51.88948872657981
- type: nauc_precision_at_5_std
value: 37.62796957461928
- type: nauc_recall_at_1000_diff1
value: 79.57516339869238
- type: nauc_recall_at_1000_max
value: 86.11111111111035
- type: nauc_recall_at_1000_std
value: 79.57516339869238
- type: nauc_recall_at_100_diff1
value: 70.50859559510081
- type: nauc_recall_at_100_max
value: 79.17009941231396
- type: nauc_recall_at_100_std
value: 44.32910419069595
- type: nauc_recall_at_10_diff1
value: 66.16118569361245
- type: nauc_recall_at_10_max
value: 74.73542948302286
- type: nauc_recall_at_10_std
value: 27.680330939810037
- type: nauc_recall_at_1_diff1
value: 74.2356464256346
- type: nauc_recall_at_1_max
value: 54.06962758957632
- type: nauc_recall_at_1_std
value: 0.8037891907963532
- type: nauc_recall_at_20_diff1
value: 65.4748436545527
- type: nauc_recall_at_20_max
value: 73.81532199081235
- type: nauc_recall_at_20_std
value: 33.59324708196253
- type: nauc_recall_at_3_diff1
value: 68.83194804473622
- type: nauc_recall_at_3_max
value: 61.77722610439669
- type: nauc_recall_at_3_std
value: 13.984923756556714
- type: nauc_recall_at_5_diff1
value: 65.51467417209523
- type: nauc_recall_at_5_max
value: 64.08276291427661
- type: nauc_recall_at_5_std
value: 19.976472037847167
- type: ndcg_at_1
value: 60.0
- type: ndcg_at_10
value: 71.586
- type: ndcg_at_100
value: 73.76899999999999
- type: ndcg_at_1000
value: 74.386
- type: ndcg_at_20
value: 72.612
- type: ndcg_at_3
value: 66.944
- type: ndcg_at_5
value: 69.333
- type: precision_at_1
value: 60.0
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 5.033
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 17.4
- type: recall_at_1
value: 56.760999999999996
- type: recall_at_10
value: 84.589
- type: recall_at_100
value: 94.333
- type: recall_at_1000
value: 99.333
- type: recall_at_20
value: 88.43299999999999
- type: recall_at_3
value: 72.10600000000001
- type: recall_at_5
value: 78.194
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 84.60600000000001
- type: map_at_1
value: 0.257
- type: map_at_10
value: 2.196
- type: map_at_100
value: 13.252
- type: map_at_1000
value: 31.473000000000003
- type: map_at_20
value: 4.023000000000001
- type: map_at_3
value: 0.722
- type: map_at_5
value: 1.146
- type: mrr_at_1
value: 94.0
- type: mrr_at_10
value: 97.0
- type: mrr_at_100
value: 97.0
- type: mrr_at_1000
value: 97.0
- type: mrr_at_20
value: 97.0
- type: mrr_at_3
value: 97.0
- type: mrr_at_5
value: 97.0
- type: nauc_map_at_1000_diff1
value: -30.674816554207062
- type: nauc_map_at_1000_max
value: 53.18598689657068
- type: nauc_map_at_1000_std
value: 78.88325309469121
- type: nauc_map_at_100_diff1
value: -17.6877824653978
- type: nauc_map_at_100_max
value: 19.584159765315658
- type: nauc_map_at_100_std
value: 48.051154190992726
- type: nauc_map_at_10_diff1
value: 20.076631089898626
- type: nauc_map_at_10_max
value: -8.642556160185636
- type: nauc_map_at_10_std
value: -5.768698617334298
- type: nauc_map_at_1_diff1
value: 27.342260509653798
- type: nauc_map_at_1_max
value: -23.400451210297994
- type: nauc_map_at_1_std
value: -21.152006353733853
- type: nauc_map_at_20_diff1
value: 8.019321726240506
- type: nauc_map_at_20_max
value: -1.4826378210544222
- type: nauc_map_at_20_std
value: 5.698208117745366
- type: nauc_map_at_3_diff1
value: 32.073377946749446
- type: nauc_map_at_3_max
value: -13.099353983204654
- type: nauc_map_at_3_std
value: -15.36319127398037
- type: nauc_map_at_5_diff1
value: 22.500045815797876
- type: nauc_map_at_5_max
value: -8.548135411428023
- type: nauc_map_at_5_std
value: -8.547850460331334
- type: nauc_mrr_at_1000_diff1
value: -6.022408963585526
- type: nauc_mrr_at_1000_max
value: 4.481792717087155
- type: nauc_mrr_at_1000_std
value: 51.6962340491753
- type: nauc_mrr_at_100_diff1
value: -6.022408963585526
- type: nauc_mrr_at_100_max
value: 4.481792717087155
- type: nauc_mrr_at_100_std
value: 51.6962340491753
- type: nauc_mrr_at_10_diff1
value: -6.022408963585526
- type: nauc_mrr_at_10_max
value: 4.481792717087155
- type: nauc_mrr_at_10_std
value: 51.6962340491753
- type: nauc_mrr_at_1_diff1
value: -6.022408963585076
- type: nauc_mrr_at_1_max
value: 4.481792717087146
- type: nauc_mrr_at_1_std
value: 51.69623404917518
- type: nauc_mrr_at_20_diff1
value: -6.022408963585526
- type: nauc_mrr_at_20_max
value: 4.481792717087155
- type: nauc_mrr_at_20_std
value: 51.6962340491753
- type: nauc_mrr_at_3_diff1
value: -6.022408963585526
- type: nauc_mrr_at_3_max
value: 4.481792717087155
- type: nauc_mrr_at_3_std
value: 51.6962340491753
- type: nauc_mrr_at_5_diff1
value: -6.022408963585526
- type: nauc_mrr_at_5_max
value: 4.481792717087155
- type: nauc_mrr_at_5_std
value: 51.6962340491753
- type: nauc_ndcg_at_1000_diff1
value: -20.79697283984295
- type: nauc_ndcg_at_1000_max
value: 52.97671908009218
- type: nauc_ndcg_at_1000_std
value: 75.43907707019758
- type: nauc_ndcg_at_100_diff1
value: -38.620752706946455
- type: nauc_ndcg_at_100_max
value: 49.41307462381511
- type: nauc_ndcg_at_100_std
value: 81.33299379244252
- type: nauc_ndcg_at_10_diff1
value: -18.611906363037356
- type: nauc_ndcg_at_10_max
value: 44.20544651664479
- type: nauc_ndcg_at_10_std
value: 61.322552829935816
- type: nauc_ndcg_at_1_diff1
value: 18.625935567849073
- type: nauc_ndcg_at_1_max
value: -10.104132769280879
- type: nauc_ndcg_at_1_std
value: 22.449560689879743
- type: nauc_ndcg_at_20_diff1
value: -30.61130208138771
- type: nauc_ndcg_at_20_max
value: 52.68851710375231
- type: nauc_ndcg_at_20_std
value: 69.72357683382992
- type: nauc_ndcg_at_3_diff1
value: 5.695394821691213
- type: nauc_ndcg_at_3_max
value: 37.909122367102135
- type: nauc_ndcg_at_3_std
value: 46.2366603255159
- type: nauc_ndcg_at_5_diff1
value: -15.273067832464731
- type: nauc_ndcg_at_5_max
value: 49.7054639475091
- type: nauc_ndcg_at_5_std
value: 58.83754007826166
- type: nauc_precision_at_1000_diff1
value: -31.565302588492035
- type: nauc_precision_at_1000_max
value: 52.56214379514724
- type: nauc_precision_at_1000_std
value: 53.40618234326055
- type: nauc_precision_at_100_diff1
value: -44.67273120709088
- type: nauc_precision_at_100_max
value: 48.30381155522576
- type: nauc_precision_at_100_std
value: 82.1984661602578
- type: nauc_precision_at_10_diff1
value: -24.737383556860145
- type: nauc_precision_at_10_max
value: 52.816815002878556
- type: nauc_precision_at_10_std
value: 67.99052410030845
- type: nauc_precision_at_1_diff1
value: -6.022408963585076
- type: nauc_precision_at_1_max
value: 4.481792717087146
- type: nauc_precision_at_1_std
value: 51.69623404917518
- type: nauc_precision_at_20_diff1
value: -40.23628054967093
- type: nauc_precision_at_20_max
value: 56.980056980057014
- type: nauc_precision_at_20_std
value: 76.60976777785895
- type: nauc_precision_at_3_diff1
value: -4.661784068466279
- type: nauc_precision_at_3_max
value: 59.052007899934125
- type: nauc_precision_at_3_std
value: 58.187952600394986
- type: nauc_precision_at_5_diff1
value: -38.11848143512736
- type: nauc_precision_at_5_max
value: 68.6149353358365
- type: nauc_precision_at_5_std
value: 73.55652899457661
- type: nauc_recall_at_1000_diff1
value: -14.886527444436345
- type: nauc_recall_at_1000_max
value: 48.07492302795808
- type: nauc_recall_at_1000_std
value: 65.05623212485906
- type: nauc_recall_at_100_diff1
value: -8.148385729388195
- type: nauc_recall_at_100_max
value: 8.041615364614533
- type: nauc_recall_at_100_std
value: 33.77187914574611
- type: nauc_recall_at_10_diff1
value: 24.333628413035942
- type: nauc_recall_at_10_max
value: -14.577877145192078
- type: nauc_recall_at_10_std
value: -12.131819145098557
- type: nauc_recall_at_1_diff1
value: 27.342260509653798
- type: nauc_recall_at_1_max
value: -23.400451210297994
- type: nauc_recall_at_1_std
value: -21.152006353733853
- type: nauc_recall_at_20_diff1
value: 13.695556376785564
- type: nauc_recall_at_20_max
value: -8.872009346408264
- type: nauc_recall_at_20_std
value: -3.163199444247112
- type: nauc_recall_at_3_diff1
value: 32.00442538217753
- type: nauc_recall_at_3_max
value: -15.159737942664552
- type: nauc_recall_at_3_std
value: -17.530833132440645
- type: nauc_recall_at_5_diff1
value: 22.64740552912405
- type: nauc_recall_at_5_max
value: -12.947090597010414
- type: nauc_recall_at_5_std
value: -12.914478822476807
- type: ndcg_at_1
value: 88.0
- type: ndcg_at_10
value: 84.60600000000001
- type: ndcg_at_100
value: 64.31700000000001
- type: ndcg_at_1000
value: 56.40500000000001
- type: ndcg_at_20
value: 80.561
- type: ndcg_at_3
value: 87.87700000000001
- type: ndcg_at_5
value: 86.641
- type: precision_at_1
value: 94.0
- type: precision_at_10
value: 88.2
- type: precision_at_100
value: 65.9
- type: precision_at_1000
value: 25.019999999999996
- type: precision_at_20
value: 84.7
- type: precision_at_3
value: 92.0
- type: precision_at_5
value: 90.0
- type: recall_at_1
value: 0.257
- type: recall_at_10
value: 2.338
- type: recall_at_100
value: 15.831999999999999
- type: recall_at_1000
value: 52.519000000000005
- type: recall_at_20
value: 4.367
- type: recall_at_3
value: 0.74
- type: recall_at_5
value: 1.196
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 31.426
- type: map_at_1
value: 3.4709999999999996
- type: map_at_10
value: 13.236999999999998
- type: map_at_100
value: 19.521
- type: map_at_1000
value: 21.224
- type: map_at_20
value: 15.626000000000001
- type: map_at_3
value: 7.152
- type: map_at_5
value: 9.914000000000001
- type: mrr_at_1
value: 44.89795918367347
- type: mrr_at_10
value: 57.54373177842565
- type: mrr_at_100
value: 57.855267710139536
- type: mrr_at_1000
value: 57.855267710139536
- type: mrr_at_20
value: 57.70071764969724
- type: mrr_at_3
value: 52.72108843537414
- type: mrr_at_5
value: 55.06802721088435
- type: nauc_map_at_1000_diff1
value: 21.148857552115558
- type: nauc_map_at_1000_max
value: 2.0837572569021323
- type: nauc_map_at_1000_std
value: 3.203419709665347
- type: nauc_map_at_100_diff1
value: 21.383778167597878
- type: nauc_map_at_100_max
value: 0.965767943155967
- type: nauc_map_at_100_std
value: 0.3949924961020957
- type: nauc_map_at_10_diff1
value: 27.178555638086394
- type: nauc_map_at_10_max
value: 4.480675175857958
- type: nauc_map_at_10_std
value: -13.69553539513878
- type: nauc_map_at_1_diff1
value: 27.63901823865334
- type: nauc_map_at_1_max
value: -18.6387233237763
- type: nauc_map_at_1_std
value: -27.02164241863646
- type: nauc_map_at_20_diff1
value: 23.892104752374888
- type: nauc_map_at_20_max
value: 3.5343136621362348
- type: nauc_map_at_20_std
value: -8.765101188860816
- type: nauc_map_at_3_diff1
value: 22.065793929837493
- type: nauc_map_at_3_max
value: 0.8063396680860568
- type: nauc_map_at_3_std
value: -20.404849396621824
- type: nauc_map_at_5_diff1
value: 22.66626080580714
- type: nauc_map_at_5_max
value: 5.423340658352383
- type: nauc_map_at_5_std
value: -18.31523779843455
- type: nauc_mrr_at_1000_diff1
value: 30.520722269282665
- type: nauc_mrr_at_1000_max
value: -16.644959497742267
- type: nauc_mrr_at_1000_std
value: -16.3824126273053
- type: nauc_mrr_at_100_diff1
value: 30.520722269282665
- type: nauc_mrr_at_100_max
value: -16.644959497742267
- type: nauc_mrr_at_100_std
value: -16.3824126273053
- type: nauc_mrr_at_10_diff1
value: 30.428248939332974
- type: nauc_mrr_at_10_max
value: -16.300183919261585
- type: nauc_mrr_at_10_std
value: -15.404823235836309
- type: nauc_mrr_at_1_diff1
value: 27.041346572613474
- type: nauc_mrr_at_1_max
value: -23.181309312755804
- type: nauc_mrr_at_1_std
value: -24.33076726484014
- type: nauc_mrr_at_20_diff1
value: 30.676558567379303
- type: nauc_mrr_at_20_max
value: -16.914268763031416
- type: nauc_mrr_at_20_std
value: -15.77742854976336
- type: nauc_mrr_at_3_diff1
value: 31.718457109787096
- type: nauc_mrr_at_3_max
value: -15.508391132202235
- type: nauc_mrr_at_3_std
value: -20.33229438349494
- type: nauc_mrr_at_5_diff1
value: 28.73798376227693
- type: nauc_mrr_at_5_max
value: -16.086295031060196
- type: nauc_mrr_at_5_std
value: -15.644604635769321
- type: nauc_ndcg_at_1000_diff1
value: 22.158724660189606
- type: nauc_ndcg_at_1000_max
value: -3.1755686809941475
- type: nauc_ndcg_at_1000_std
value: 19.258386224159075
- type: nauc_ndcg_at_100_diff1
value: 21.83846748649288
- type: nauc_ndcg_at_100_max
value: -10.939957598756036
- type: nauc_ndcg_at_100_std
value: 14.729678880436623
- type: nauc_ndcg_at_10_diff1
value: 26.944882726098424
- type: nauc_ndcg_at_10_max
value: -3.5176483833346617
- type: nauc_ndcg_at_10_std
value: -5.400606773697211
- type: nauc_ndcg_at_1_diff1
value: 26.649410985172985
- type: nauc_ndcg_at_1_max
value: -18.806716526067493
- type: nauc_ndcg_at_1_std
value: -25.100244999343506
- type: nauc_ndcg_at_20_diff1
value: 24.860266153648315
- type: nauc_ndcg_at_20_max
value: -7.521401821712892
- type: nauc_ndcg_at_20_std
value: -3.3696577425983003
- type: nauc_ndcg_at_3_diff1
value: 23.9933326962406
- type: nauc_ndcg_at_3_max
value: -0.4609479344284664
- type: nauc_ndcg_at_3_std
value: -15.176459166869897
- type: nauc_ndcg_at_5_diff1
value: 22.50595978713142
- type: nauc_ndcg_at_5_max
value: -2.1093870656000857
- type: nauc_ndcg_at_5_std
value: -12.732197425528257
- type: nauc_precision_at_1000_diff1
value: -20.335120385950024
- type: nauc_precision_at_1000_max
value: 26.95109729939765
- type: nauc_precision_at_1000_std
value: 29.981685890622117
- type: nauc_precision_at_100_diff1
value: -2.782114329320704
- type: nauc_precision_at_100_max
value: 2.9489322002048604
- type: nauc_precision_at_100_std
value: 67.3074073674319
- type: nauc_precision_at_10_diff1
value: 21.385177180383383
- type: nauc_precision_at_10_max
value: -2.4696365259422817
- type: nauc_precision_at_10_std
value: 14.469784299536673
- type: nauc_precision_at_1_diff1
value: 27.041346572613474
- type: nauc_precision_at_1_max
value: -23.181309312755804
- type: nauc_precision_at_1_std
value: -24.33076726484014
- type: nauc_precision_at_20_diff1
value: 11.993846579997673
- type: nauc_precision_at_20_max
value: -2.4792189693296227
- type: nauc_precision_at_20_std
value: 28.581394687807745
- type: nauc_precision_at_3_diff1
value: 20.70568446328836
- type: nauc_precision_at_3_max
value: 0.37326398699875984
- type: nauc_precision_at_3_std
value: -12.983918676694389
- type: nauc_precision_at_5_diff1
value: 19.47466335828124
- type: nauc_precision_at_5_max
value: -1.8921617684385994
- type: nauc_precision_at_5_std
value: -6.533875294402164
- type: nauc_recall_at_1000_diff1
value: 7.611201305723156
- type: nauc_recall_at_1000_max
value: 5.6416194035820055
- type: nauc_recall_at_1000_std
value: 61.695208644278
- type: nauc_recall_at_100_diff1
value: 10.0183258158735
- type: nauc_recall_at_100_max
value: -10.950612455698973
- type: nauc_recall_at_100_std
value: 33.06069987640471
- type: nauc_recall_at_10_diff1
value: 24.738210305731535
- type: nauc_recall_at_10_max
value: -2.6592454032071546
- type: nauc_recall_at_10_std
value: -4.83987517793115
- type: nauc_recall_at_1_diff1
value: 27.63901823865334
- type: nauc_recall_at_1_max
value: -18.6387233237763
- type: nauc_recall_at_1_std
value: -27.02164241863646
- type: nauc_recall_at_20_diff1
value: 17.79601177409034
- type: nauc_recall_at_20_max
value: -6.681637093148051
- type: nauc_recall_at_20_std
value: 3.369193919932238
- type: nauc_recall_at_3_diff1
value: 24.9589431081204
- type: nauc_recall_at_3_max
value: 2.4783640980500232
- type: nauc_recall_at_3_std
value: -19.567415651090702
- type: nauc_recall_at_5_diff1
value: 23.71803410135437
- type: nauc_recall_at_5_max
value: 1.6294309357641652
- type: nauc_recall_at_5_std
value: -15.365511906408983
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 31.426
- type: ndcg_at_100
value: 41.558
- type: ndcg_at_1000
value: 53.042
- type: ndcg_at_20
value: 31.108999999999998
- type: ndcg_at_3
value: 35.518
- type: ndcg_at_5
value: 33.235
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 27.551
- type: precision_at_100
value: 8.204
- type: precision_at_1000
value: 1.582
- type: precision_at_20
value: 19.796
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.4709999999999996
- type: recall_at_10
value: 19.563
- type: recall_at_100
value: 50.3
- type: recall_at_1000
value: 85.13199999999999
- type: recall_at_20
value: 26.738
- type: recall_at_3
value: 7.8420000000000005
- type: recall_at_5
value: 11.994
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.29850746268657
- type: ap
value: 30.109785890841966
- type: ap_weighted
value: 30.109785890841966
- type: f1
value: 61.76875915202924
- type: f1_weighted
value: 71.32073190458556
- type: main_score
value: 68.29850746268657
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.3068
- type: ap
value: 86.17914339624038
- type: ap_weighted
value: 86.17914339624038
- type: f1
value: 90.29716826358077
- type: f1_weighted
value: 90.29716826358077
- type: main_score
value: 90.3068
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.272000000000006
- type: f1
value: 45.57042543386915
- type: f1_weighted
value: 45.57042543386915
- type: main_score
value: 46.272000000000006
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 44.9469238081379
- type: v_measure
value: 44.9469238081379
- type: v_measure_std
value: 13.26811262671461
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 34.12071448053325
- type: v_measure
value: 34.12071448053325
- type: v_measure_std
value: 13.7019879046405
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 61.597667288657846
- type: map
value: 61.597667288657846
- type: mrr
value: 75.57940904893813
- type: nAUC_map_diff1
value: 8.745172077340095
- type: nAUC_map_max
value: 20.114863024035493
- type: nAUC_map_std
value: 15.991351189572192
- type: nAUC_mrr_diff1
value: 20.781369244159983
- type: nAUC_mrr_max
value: 30.78542570228559
- type: nAUC_mrr_std
value: 19.861484857303676
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 88.55587996301419
- type: cosine_spearman
value: 86.40317357420093
- type: euclidean_pearson
value: 86.93771958250231
- type: euclidean_spearman
value: 86.40317357420093
- type: main_score
value: 86.40317357420093
- type: manhattan_pearson
value: 86.92196577117366
- type: manhattan_spearman
value: 85.79834051556095
- type: pearson
value: 88.55587996301419
- type: spearman
value: 86.40317357420093
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.0064935064935
- type: f1
value: 79.29524254086299
- type: f1_weighted
value: 79.295242540863
- type: main_score
value: 80.0064935064935
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 35.27186813341181
- type: v_measure
value: 35.27186813341181
- type: v_measure_std
value: 0.8621482145872432
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 28.411805064852295
- type: v_measure
value: 28.411805064852295
- type: v_measure_std
value: 0.7194290078011281
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 43.675
- type: f1
value: 40.15061931375577
- type: f1_weighted
value: 45.714186572727066
- type: main_score
value: 43.675
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 84.35640000000001
- type: ap
value: 79.07507736685174
- type: ap_weighted
value: 79.07507736685174
- type: f1
value: 84.32288494833531
- type: f1_weighted
value: 84.32288494833531
- type: main_score
value: 84.35640000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.35658914728684
- type: f1
value: 90.86877537911086
- type: f1_weighted
value: 91.3282092774443
- type: main_score
value: 91.35658914728684
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 60.63611491108071
- type: f1
value: 42.78886482112741
- type: f1_weighted
value: 63.44208631840539
- type: main_score
value: 60.63611491108071
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 66.68796234028245
- type: f1
value: 64.44940791000278
- type: f1_weighted
value: 65.77554417406792
- type: main_score
value: 66.68796234028245
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 73.0598520511096
- type: f1
value: 72.14267273884774
- type: f1_weighted
value: 72.93345180137516
- type: main_score
value: 73.0598520511096
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 31.143081341699606
- type: v_measure
value: 31.143081341699606
- type: v_measure_std
value: 1.5578716347076906
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 27.010818869829556
- type: v_measure
value: 27.010818869829556
- type: v_measure_std
value: 1.1771554540819378
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 30.20503776754942
- type: map
value: 30.20503776754942
- type: mrr
value: 31.076636002733437
- type: nAUC_map_diff1
value: 7.290568655287842
- type: nAUC_map_max
value: -21.381599355932945
- type: nAUC_map_std
value: -7.709920607543168
- type: nAUC_mrr_diff1
value: 7.558397329284913
- type: nAUC_mrr_max
value: -15.981397186427607
- type: nAUC_mrr_std
value: -4.870495243168834
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 51.85893476633338
- type: v_measure
value: 51.85893476633338
- type: v_measure_std
value: 4.704770139385852
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 61.8124222918822
- type: v_measure
value: 61.8124222918822
- type: v_measure_std
value: 11.994472578100165
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 77.63310776935984
- type: cosine_spearman
value: 69.86468291111039
- type: euclidean_pearson
value: 73.91537077798837
- type: euclidean_spearman
value: 69.86468376650203
- type: main_score
value: 69.86468291111039
- type: manhattan_pearson
value: 73.68616048370464
- type: manhattan_spearman
value: 69.76232036206659
- type: pearson
value: 77.63310776935984
- type: spearman
value: 69.86468291111039
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 57.71716838245049
- type: cosine_spearman
value: 61.797855543446424
- type: euclidean_pearson
value: 58.22958675325848
- type: euclidean_spearman
value: 61.797855543446424
- type: main_score
value: 61.797855543446424
- type: manhattan_pearson
value: 57.63117544997929
- type: manhattan_spearman
value: 61.3629404350085
- type: pearson
value: 57.71716838245049
- type: spearman
value: 61.797855543446424
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 82.30260026790903
- type: cosine_spearman
value: 82.66959813070869
- type: euclidean_pearson
value: 82.08383017580783
- type: euclidean_spearman
value: 82.66959813070869
- type: main_score
value: 82.66959813070869
- type: manhattan_pearson
value: 81.77991451392153
- type: manhattan_spearman
value: 82.3652534745606
- type: pearson
value: 82.30260026790903
- type: spearman
value: 82.66959813070869
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 71.50608384084478
- type: cosine_spearman
value: 68.94968064977785
- type: euclidean_pearson
value: 70.73381299949564
- type: euclidean_spearman
value: 68.94968064977785
- type: main_score
value: 68.94968064977785
- type: manhattan_pearson
value: 70.5385486953787
- type: manhattan_spearman
value: 68.82132770672365
- type: pearson
value: 71.50608384084478
- type: spearman
value: 68.94968064977785
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 73.66969825874907
- type: cosine_spearman
value: 75.55374982088381
- type: euclidean_pearson
value: 75.9339313749594
- type: euclidean_spearman
value: 75.55374982088381
- type: main_score
value: 75.55374982088381
- type: manhattan_pearson
value: 75.88287553383817
- type: manhattan_spearman
value: 75.50729812977688
- type: pearson
value: 73.66969825874907
- type: spearman
value: 75.55374982088381
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 74.5954724414016
- type: cosine_spearman
value: 77.2688820850505
- type: euclidean_pearson
value: 77.19866353971555
- type: euclidean_spearman
value: 77.2688820850505
- type: main_score
value: 77.2688820850505
- type: manhattan_pearson
value: 77.27072603680978
- type: manhattan_spearman
value: 77.29408453673607
- type: pearson
value: 74.5954724414016
- type: spearman
value: 77.2688820850505
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 71.52588722654055
- type: cosine_spearman
value: 74.97235736456061
- type: euclidean_pearson
value: 74.51952528854038
- type: euclidean_spearman
value: 74.97235736456061
- type: main_score
value: 74.97235736456061
- type: manhattan_pearson
value: 74.48272300884209
- type: manhattan_spearman
value: 74.80633649415176
- type: pearson
value: 71.52588722654055
- type: spearman
value: 74.97235736456061
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 68.80031120401976
- type: cosine_spearman
value: 69.07945196478491
- type: euclidean_pearson
value: 68.99674496430792
- type: euclidean_spearman
value: 69.07945196478491
- type: main_score
value: 69.07945196478491
- type: manhattan_pearson
value: 69.00236107775687
- type: manhattan_spearman
value: 68.98064879049272
- type: pearson
value: 68.80031120401976
- type: spearman
value: 69.07945196478491
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 65.6898007230089
- type: cosine_spearman
value: 69.72386211803668
- type: euclidean_pearson
value: 69.04523003701475
- type: euclidean_spearman
value: 69.72386211803668
- type: main_score
value: 69.72386211803668
- type: manhattan_pearson
value: 68.80479743770702
- type: manhattan_spearman
value: 69.43264575177459
- type: pearson
value: 65.6898007230089
- type: spearman
value: 69.72386211803668
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 79.74088066874383
- type: map
value: 79.74088066874383
- type: mrr
value: 94.47697455050397
- type: nAUC_map_diff1
value: 8.036086256905502
- type: nAUC_map_max
value: 54.88199803816819
- type: nAUC_map_std
value: 69.16267942176574
- type: nAUC_mrr_diff1
value: 50.020738477678115
- type: nAUC_mrr_max
value: 83.28922770326483
- type: nAUC_mrr_std
value: 83.63973501802224
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.83861386138614
- type: cosine_accuracy_threshold
value: 74.75666999816895
- type: cosine_ap
value: 96.15132792066652
- type: cosine_f1
value: 91.84890656063618
- type: cosine_f1_threshold
value: 71.70594930648804
- type: cosine_precision
value: 91.30434782608695
- type: cosine_recall
value: 92.4
- type: dot_accuracy
value: 99.83861386138614
- type: dot_accuracy_threshold
value: 74.75666999816895
- type: dot_ap
value: 96.15132792066653
- type: dot_f1
value: 91.84890656063618
- type: dot_f1_threshold
value: 71.70596122741699
- type: dot_precision
value: 91.30434782608695
- type: dot_recall
value: 92.4
- type: euclidean_accuracy
value: 99.83861386138614
- type: euclidean_accuracy_threshold
value: 71.05395793914795
- type: euclidean_ap
value: 96.15132792066652
- type: euclidean_f1
value: 91.84890656063618
- type: euclidean_f1_threshold
value: 75.22505521774292
- type: euclidean_precision
value: 91.30434782608695
- type: euclidean_recall
value: 92.4
- type: main_score
value: 96.15132792066653
- type: manhattan_accuracy
value: 99.83564356435643
- type: manhattan_accuracy_threshold
value: 1547.6950645446777
- type: manhattan_ap
value: 96.06151211452136
- type: manhattan_f1
value: 91.61676646706587
- type: manhattan_f1_threshold
value: 1626.3608932495117
- type: manhattan_precision
value: 91.43426294820716
- type: manhattan_recall
value: 91.8
- type: max_ap
value: 96.15132792066653
- type: max_f1
value: 91.84890656063618
- type: max_precision
value: 91.43426294820716
- type: max_recall
value: 92.4
- type: similarity_accuracy
value: 99.83861386138614
- type: similarity_accuracy_threshold
value: 74.75666999816895
- type: similarity_ap
value: 96.15132792066652
- type: similarity_f1
value: 91.84890656063618
- type: similarity_f1_threshold
value: 71.70594930648804
- type: similarity_precision
value: 91.30434782608695
- type: similarity_recall
value: 92.4
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 61.24120328328453
- type: v_measure
value: 61.24120328328453
- type: v_measure_std
value: 3.9946560691100372
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 33.808268374864745
- type: v_measure
value: 33.808268374864745
- type: v_measure_std
value: 1.2212188701887239
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 52.19806018468037
- type: map
value: 52.19806018468037
- type: mrr
value: 52.98921462524404
- type: nAUC_map_diff1
value: 37.41443156995912
- type: nAUC_map_max
value: 9.410262727675603
- type: nAUC_map_std
value: 8.7094185014992
- type: nAUC_mrr_diff1
value: 37.78202772392581
- type: nAUC_mrr_max
value: 10.517635536565816
- type: nAUC_mrr_std
value: 8.509423813772491
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 30.48413700430812
- type: cosine_spearman
value: 30.357162200875816
- type: dot_pearson
value: 30.484140144824938
- type: dot_spearman
value: 30.357162200875816
- type: main_score
value: 30.357162200875816
- type: pearson
value: 30.48413700430812
- type: spearman
value: 30.357162200875816
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 66.8359375
- type: ap
value: 12.482653786025985
- type: ap_weighted
value: 12.482653786025985
- type: f1
value: 51.328608527332385
- type: f1_weighted
value: 74.07974463955398
- type: main_score
value: 66.8359375
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 53.907753254103
- type: f1
value: 54.22707647269581
- type: f1_weighted
value: 53.611822984407695
- type: main_score
value: 53.907753254103
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 38.1364789307295
- type: v_measure
value: 38.1364789307295
- type: v_measure_std
value: 2.0731634966352077
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 82.66674614054956
- type: cosine_accuracy_threshold
value: 79.80123162269592
- type: cosine_ap
value: 63.28209719072804
- type: cosine_f1
value: 60.16389710903711
- type: cosine_f1_threshold
value: 72.22893834114075
- type: cosine_precision
value: 52.90232185748599
- type: cosine_recall
value: 69.73614775725594
- type: dot_accuracy
value: 82.66674614054956
- type: dot_accuracy_threshold
value: 79.8012375831604
- type: dot_ap
value: 63.282103870645166
- type: dot_f1
value: 60.16389710903711
- type: dot_f1_threshold
value: 72.22894430160522
- type: dot_precision
value: 52.90232185748599
- type: dot_recall
value: 69.73614775725594
- type: euclidean_accuracy
value: 82.66674614054956
- type: euclidean_accuracy_threshold
value: 63.55905532836914
- type: euclidean_ap
value: 63.282095399953164
- type: euclidean_f1
value: 60.16389710903711
- type: euclidean_f1_threshold
value: 74.5265781879425
- type: euclidean_precision
value: 52.90232185748599
- type: euclidean_recall
value: 69.73614775725594
- type: main_score
value: 63.282103870645166
- type: manhattan_accuracy
value: 82.74423317637242
- type: manhattan_accuracy_threshold
value: 1415.380859375
- type: manhattan_ap
value: 63.26931757839598
- type: manhattan_f1
value: 60.11014948859166
- type: manhattan_f1_threshold
value: 1632.522201538086
- type: manhattan_precision
value: 52.359506559624045
- type: manhattan_recall
value: 70.55408970976254
- type: max_ap
value: 63.282103870645166
- type: max_f1
value: 60.16389710903711
- type: max_precision
value: 52.90232185748599
- type: max_recall
value: 70.55408970976254
- type: similarity_accuracy
value: 82.66674614054956
- type: similarity_accuracy_threshold
value: 79.80123162269592
- type: similarity_ap
value: 63.28209719072804
- type: similarity_f1
value: 60.16389710903711
- type: similarity_f1_threshold
value: 72.22893834114075
- type: similarity_precision
value: 52.90232185748599
- type: similarity_recall
value: 69.73614775725594
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 88.10105949470253
- type: cosine_accuracy_threshold
value: 68.95147562026978
- type: cosine_ap
value: 84.65516103854583
- type: cosine_f1
value: 76.54581123301605
- type: cosine_f1_threshold
value: 63.92929553985596
- type: cosine_precision
value: 72.46526344751685
- type: cosine_recall
value: 81.11333538651063
- type: dot_accuracy
value: 88.10105949470253
- type: dot_accuracy_threshold
value: 68.95147562026978
- type: dot_ap
value: 84.65516301437592
- type: dot_f1
value: 76.54581123301605
- type: dot_f1_threshold
value: 63.92928957939148
- type: dot_precision
value: 72.46526344751685
- type: dot_recall
value: 81.11333538651063
- type: euclidean_accuracy
value: 88.10105949470253
- type: euclidean_accuracy_threshold
value: 78.80169153213501
- type: euclidean_ap
value: 84.65517268264233
- type: euclidean_f1
value: 76.54581123301605
- type: euclidean_f1_threshold
value: 84.93610620498657
- type: euclidean_precision
value: 72.46526344751685
- type: euclidean_recall
value: 81.11333538651063
- type: main_score
value: 84.65517268264233
- type: manhattan_accuracy
value: 88.08941669577366
- type: manhattan_accuracy_threshold
value: 1739.3169403076172
- type: manhattan_ap
value: 84.64592398855694
- type: manhattan_f1
value: 76.62890540443034
- type: manhattan_f1_threshold
value: 1861.344337463379
- type: manhattan_precision
value: 72.09775967413442
- type: manhattan_recall
value: 81.76778564829073
- type: max_ap
value: 84.65517268264233
- type: max_f1
value: 76.62890540443034
- type: max_precision
value: 72.46526344751685
- type: max_recall
value: 81.76778564829073
- type: similarity_accuracy
value: 88.10105949470253
- type: similarity_accuracy_threshold
value: 68.95147562026978
- type: similarity_ap
value: 84.65516103854583
- type: similarity_f1
value: 76.54581123301605
- type: similarity_f1_threshold
value: 63.92929553985596
- type: similarity_precision
value: 72.46526344751685
- type: similarity_recall
value: 81.11333538651063
---
# aimlresearch2023/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF
This model was converted to GGUF format from [`Snowflake/snowflake-arctic-embed-m-v1.5`](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aimlresearch2023/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aimlresearch2023/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aimlresearch2023/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aimlresearch2023/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048
```
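Since this is an embedding model, you may also want to compute embeddings programmatically rather than running the text-generation CLI. The following is a minimal sketch, assuming the `llama-cpp-python` bindings (not a requirement of this repo) and a locally downloaded copy of the GGUF file; the context length and example sentences are illustrative.
```python
# Minimal sketch: compute embeddings from the Q8_0 GGUF with llama-cpp-python.
# The local file path, context length, and example sentences are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="snowflake-arctic-embed-m-v1.5-q8_0.gguf",  # downloaded from this repo
    embedding=True,  # run the model in embedding mode
    n_ctx=512,
)

sentences = ["What is Arctic Embed?", "A family of text embedding models."]
embeddings = [llm.embed(s) for s in sentences]  # each entry is a list of floats
print(len(embeddings), len(embeddings[0]))
```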
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
sultan/BioM-ELECTRA-Base-Discriminator | sultan | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-11-04T23:06:42 | 179 | 3 | ---
{}
---
# BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA
# Abstract
The impact of design choices on the performance
of biomedical language models recently
has been a subject for investigation. In
this paper, we empirically study biomedical
domain adaptation with large transformer models
using different design choices. We evaluate
the performance of our pretrained models
against other existing biomedical language
models in the literature. Our results show that
we achieve state-of-the-art results on several
biomedical domain tasks despite using similar
or less computational cost compared to other
models in the literature. Our findings highlight
the significant effect of design choices on
improving the performance of biomedical language
models.
# Model Description
This model was pre-trained only on PubMed abstracts, using a biomedical-domain vocabulary, for 500K steps with a batch size of 1024 on a TPUv3-32 unit.
Check our GitHub repo at https://github.com/salrowili/BioM-Transformers for TensorFlow and GluonNLP checkpoints.
# Colab Notebook Examples
BioM-ELECTRA-LARGE on NER and ChemProt Task [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_NER_and_ChemProt_Task_on_TPU.ipynb)
BioM-ELECTRA-Large on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ELECTRA_Large_on_TPU.ipynb)
BioM-ALBERT-xxlarge on SQuAD2.0 and BioASQ7B Factoid tasks [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb)
Text Classification Task With HuggingFace Transformers and PyTorchXLA on Free TPU [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/Fine_Tuning_Biomedical_Models_on_Text_Classification_Task_With_HuggingFace_Transformers_and_PyTorch_XLA.ipynb)
Reproducing our BLURB results with JAX [![Open In Colab][COLAB]](https://colab.research.google.com/github/salrowili/BioM-Transformers/blob/main/examples/BLURB_LeaderBoard_with_TPU_VM.ipynb)
Fine-tuning BioM-Transformers with Jax/Flax on TPUv3-8 with free Kaggle resources [![Open In Colab][COLAB]](https://www.kaggle.com/code/sultanalrowili/biom-transoformers-with-flax-on-tpu-with-kaggle)
[COLAB]: https://colab.research.google.com/assets/colab-badge.svg
# Acknowledgment
We would like to acknowledge the support of the TensorFlow Research Cloud (TFRC) team in granting us access to TPUv3 units.
# Citation
```bibtex
@inproceedings{alrowili-shanker-2021-biom,
title = "{B}io{M}-Transformers: Building Large Biomedical Language Models with {BERT}, {ALBERT} and {ELECTRA}",
author = "Alrowili, Sultan and
Shanker, Vijay",
booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bionlp-1.24",
pages = "221--227",
abstract = "The impact of design choices on the performance of biomedical language models recently has been a subject for investigation. In this paper, we empirically study biomedical domain adaptation with large transformer models using different design choices. We evaluate the performance of our pretrained models against other existing biomedical language models in the literature. Our results show that we achieve state-of-the-art results on several biomedical domain tasks despite using similar or less computational cost compared to other models in the literature. Our findings highlight the significant effect of design choices on improving the performance of biomedical language models.",
}
``` | [
"BLURB",
"CHEMPROT"
] | BioNLP |
Daemontatox/AetherDrake-SFT | Daemontatox | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"Llama3",
"trl",
"COT",
"Reasoning",
"conversational",
"en",
"dataset:Daemontatox/LongCOT-Reason",
"base_model:prithivMLmods/Llama-3.1-8B-Open-SFT",
"base_model:finetune:prithivMLmods/Llama-3.1-8B-Open-SFT",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,735,052,111,000 | 2024-12-25T21:37:39 | 41 | 1 | ---
base_model:
- prithivMLmods/Llama-3.1-8B-Open-SFT
datasets:
- Daemontatox/LongCOT-Reason
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
- character
- competition_math
- code_eval
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- unsloth
- Llama3
- trl
- COT
- Reasoning
model-index:
- name: AetherDrake-SFT
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 48.13
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Daemontatox/AetherDrake-SFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 27.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Daemontatox/AetherDrake-SFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 14.65
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Daemontatox/AetherDrake-SFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.4
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Daemontatox/AetherDrake-SFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.97
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Daemontatox/AetherDrake-SFT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 27.77
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Daemontatox/AetherDrake-SFT
name: Open LLM Leaderboard
---

# AetherDrake-SFT
- **Developed by:** Daemontatox
- **License:** Apache 2.0
- **Finetuned Using:** [Unsloth](https://github.com/unslothai/unsloth), Hugging Face Transformers, and TRL Library
## Model Overview
The **AetherDrake-SFT Model** is an advanced AI system optimized for logical reasoning, multi-step problem-solving, and decision-making tasks. Designed with efficiency and accuracy in mind, it employs a structured system prompt to ensure high-quality answers through a transparent and iterative thought process.
### System Prompt and Workflow
This model operates using an innovative reasoning framework structured around the following steps:
1. **Initial Thought:**
The model uses `<Thinking>` tags to reason step-by-step and craft its best possible response.
Example: `<Thinking> Step-by-step reasoning toward a first draft of the answer. </Thinking>`
2. **Self-Critique:**
It evaluates its initial response within `<Critique>` tags, focusing on:
- **Accuracy:** Is it factually correct and verifiable?
- **Clarity:** Is it clear and free of ambiguity?
- **Completeness:** Does it fully address the request?
- **Improvement:** What can be enhanced?
Example: `<Critique> The draft is accurate but omits a key assumption and could be clearer. </Critique>`
3. **Revision:**
Based on the critique, the model refines its response within `<Revising>` tags.
Example: `<Revising> Adding the missing assumption and tightening the wording. </Revising>`
4. **Final Response:**
The revised response is presented clearly within `<Final>` tags.
Example: `<Final> The polished answer presented to the user. </Final>`
5. **Tag Innovation:**
When needed, the model creates and defines new tags for better structuring or clarity, ensuring consistent usage.
Example: defining and then consistently using a new tag such as `<Assumption>` when the task calls for it.
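In practice, this workflow is supplied to the model as a system prompt. Below is a minimal sketch using the Hugging Face `transformers` chat template; the condensed system-prompt wording, the example question, and the generation settings are illustrative assumptions rather than the exact prompt used during training.
```python
# Minimal sketch: prompting AetherDrake-SFT with the tag-based reasoning workflow.
# The system prompt wording and generation parameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/AetherDrake-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system_prompt = (
    "Reason step-by-step inside <Thinking> tags, critique the draft inside <Critique> tags, "
    "refine it inside <Revising> tags, and give the final answer inside <Final> tags."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Explain why the sky is blue in two sentences."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```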
### Key Features
- **Structured Reasoning:** Transparent, multi-step approach for generating and refining answers.
- **Self-Improvement:** Built-in critique and revision ensure continuous response enhancement.
- **Clarity and Adaptability:** Tagging system provides organized, adaptable responses tailored to user needs.
- **Creative Flexibility:** Supports dynamic problem-solving with the ability to introduce new tags and concepts.
---
## Use Cases
The model is designed for various domains, including:
1. **Research and Analysis:** Extracting insights and providing structured explanations.
2. **Education:** Assisting with tutoring by breaking down complex problems step-by-step.
3. **Problem-Solving:** Offering logical and actionable solutions for multi-step challenges.
4. **Content Generation:** Producing clear, well-organized creative or professional content.
---
## Training Details
- **Frameworks:**
- [Unsloth](https://github.com/unslothai/unsloth) for accelerated training.
- Hugging Face Transformers and the TRL library for reinforcement learning with human feedback (RLHF).
- **Dataset:** Finetuned on diverse reasoning-focused tasks, including logical puzzles, mathematical problems, and commonsense reasoning scenarios.
- **Hardware Efficiency:**
- Trained with bnb-4bit precision for reduced memory usage.
- Optimized training pipeline achieving 2x faster development cycles.
---
## Limitations
- **Arithmetic equations:** The model might hallucinate mid-reasoning when working through arithmetic equations, as it was not trained on LaTeX equations.
- **Very complex problems:** The model tends to get sidetracked on long, complex problems and may answer with uncertainty.
---
## Ethical Considerations
- **Transparency:** Responses are structured for verifiability through tagging.
- **Bias Mitigation:** Includes self-critique to minimize biases and ensure fairness.
- **Safe Deployment:** Users are encouraged to evaluate outputs to prevent harm or misinformation.
---
## License
This model is distributed under the Apache 2.0 license, allowing users to use, modify, and share it in compliance with the license terms.
---
## Acknowledgments
Special thanks to:
- [Unsloth](https://github.com/unslothai/unsloth) for accelerated training workflows.
- Hugging Face for their powerful tools and libraries.
---
Experience **AetherDrake-SFT** and leverage its structured reasoning and self-improvement capabilities for any task that requires advanced AI reasoning.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Daemontatox__AetherDrake-SFT-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Daemontatox/AetherDrake-SFT)!
| Metric |% Value|
|-------------------|------:|
|Avg. | 22.84|
|IFEval (0-Shot) | 48.13|
|BBH (3-Shot) | 27.14|
|MATH Lvl 5 (4-Shot)| 14.65|
|GPQA (0-shot) | 9.40|
|MuSR (0-shot) | 9.97|
|MMLU-PRO (5-shot) | 27.77|
| [
"CRAFT"
] | Non_BioNLP |
mradermacher/1.5-Pints-16K-v0.1-GGUF | mradermacher | null | [
"transformers",
"gguf",
"en",
"dataset:pints-ai/Expository-Prose-V1",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:meta-math/MetaMathQA",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:togethercomputer/llama-instruct",
"dataset:LDJnr/Capybara",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:pints-ai/1.5-Pints-16K-v0.1",
"base_model:quantized:pints-ai/1.5-Pints-16K-v0.1",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,741,431,552,000 | 2025-03-08T13:37:41 | 302 | 0 | ---
base_model: pints-ai/1.5-Pints-16K-v0.1
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
library_name: transformers
license: mit
extra_gated_fields:
Company: text
Country: country
I agree to use this model for in accordance to the afore-mentioned Terms of Use: checkbox
I want to use this model for:
options:
- Research
- Education
- label: Other
value: other
type: select
Specific date: date_picker
extra_gated_prompt: Though best efforts has been made to ensure, as much as possible,
that all texts in the training corpora are royalty free, this does not constitute
a legal guarantee that such is the case. **By using any of the models, corpora or
part thereof, the user agrees to bear full responsibility to do the necessary due
diligence to ensure that he / she is in compliance with their local copyright laws.
Additionally, the user agrees to bear any damages arising as a direct cause (or
otherwise) of using any artifacts released by the pints research team, as well as
full responsibility for the consequences of his / her usage (or implementation)
of any such released artifacts. The user also indemnifies Pints Research Team (and
any of its members or agents) of any damage, related or unrelated, to the release
or subsequent usage of any findings, artifacts or code by the team. For the avoidance
of doubt, any artifacts released by the Pints Research team are done so in accordance
with the 'fair use' clause of Copyright Law, in hopes that this will aid the research
community in bringing LLMs to the next frontier.
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/pints-ai/1.5-Pints-16K-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
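For a quick start from Python, a minimal sketch is shown below; it assumes the `huggingface_hub` and `llama-cpp-python` packages (neither is a requirement of this repo), and the chosen quant, context size, and prompt are illustrative.
```python
# Minimal sketch: fetch one quant from this repo and run it with llama-cpp-python.
# The chosen quant file, context size, and prompt are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/1.5-Pints-16K-v0.1-GGUF",
    filename="1.5-Pints-16K-v0.1.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=16384)  # base model targets a 16K context
out = llm("Q: What is a language model?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```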
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.Q8_0.gguf) | Q8_0 | 1.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/1.5-Pints-16K-v0.1-GGUF/resolve/main/1.5-Pints-16K-v0.1.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
| [
"BEAR"
] | Non_BioNLP |
pyf98/librispeech_conformer | pyf98 | automatic-speech-recognition | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | 1,646,676,965,000 | 2022-03-07T18:33:17 | 1 | 0 | ---
datasets:
- librispeech
language: en
license: cc-by-4.0
tags:
- espnet
- audio
- automatic-speech-recognition
---
## ESPnet2 ASR model
### `pyf98/librispeech_conformer`
This model was trained by Yifan Peng using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout c3569453a408fd4ff4173d9c1d2062c88d1fc060
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/librispeech_conformer
```
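Alternatively, you can run inference directly from Python. The sketch below assumes `espnet_model_zoo` is installed alongside ESPnet; the decoding options mirror the `beam60_ctc0.3` setting reported below, and the audio file name is an illustrative assumption.
```python
# Minimal sketch: decode a single 16 kHz utterance with this model from Python.
# Requires espnet and espnet_model_zoo; the audio file name is an illustrative assumption.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "pyf98/librispeech_conformer",
    ctc_weight=0.3,  # matches the beam60_ctc0.3 results below
    beam_size=60,
)

speech, rate = soundfile.read("sample.wav")
nbests = speech2text(speech)
text, tokens, token_ints, hypothesis = nbests[0]
print(text)
```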
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Mar 7 12:26:10 EST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `c3569453a408fd4ff4173d9c1d2062c88d1fc060`
- Commit date: `Sun Mar 6 23:58:36 2022 -0500`
## asr_train_asr_conformer8_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam60_ctc0.2/dev_clean|2703|54402|98.0|1.8|0.2|0.2|2.2|27.2|
|beam60_ctc0.2/dev_other|2864|50948|95.1|4.4|0.5|0.5|5.4|43.3|
|beam60_ctc0.2/test_clean|2620|52576|97.9|1.9|0.2|0.3|2.4|28.8|
|beam60_ctc0.2/test_other|2939|52343|95.2|4.3|0.5|0.6|5.4|45.5|
|beam60_ctc0.2_lm0.6/dev_clean|2703|54402|98.3|1.4|0.3|0.2|1.9|23.7|
|beam60_ctc0.2_lm0.6/dev_other|2864|50948|96.2|3.3|0.4|0.4|4.2|37.2|
|beam60_ctc0.2_lm0.6/test_clean|2620|52576|98.2|1.5|0.3|0.2|2.0|24.3|
|beam60_ctc0.2_lm0.6/test_other|2939|52343|96.1|3.3|0.6|0.4|4.4|39.9|
|beam60_ctc0.3/dev_clean|2703|54402|98.1|1.8|0.2|0.2|2.1|27.3|
|beam60_ctc0.3/dev_other|2864|50948|95.2|4.4|0.4|0.5|5.4|43.7|
|beam60_ctc0.3/test_clean|2620|52576|97.9|1.9|0.2|0.3|2.3|29.0|
|beam60_ctc0.3/test_other|2939|52343|95.2|4.3|0.4|0.6|5.4|45.7|
|beam60_ctc0.3_lm0.6/dev_clean|2703|54402|98.4|1.4|0.2|0.2|1.8|23.5|
|beam60_ctc0.3_lm0.6/dev_other|2864|50948|96.2|3.4|0.4|0.4|4.1|37.4|
|beam60_ctc0.3_lm0.6/test_clean|2620|52576|98.3|1.5|0.2|0.2|1.9|24.1|
|beam60_ctc0.3_lm0.6/test_other|2939|52343|96.2|3.3|0.5|0.5|4.3|39.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam60_ctc0.2/dev_clean|2703|288456|99.4|0.3|0.3|0.2|0.8|27.2|
|beam60_ctc0.2/dev_other|2864|265951|98.1|1.1|0.8|0.6|2.5|43.3|
|beam60_ctc0.2/test_clean|2620|281530|99.4|0.3|0.3|0.2|0.8|28.8|
|beam60_ctc0.2/test_other|2939|272758|98.3|1.0|0.7|0.6|2.3|45.5|
|beam60_ctc0.2_lm0.6/dev_clean|2703|288456|99.4|0.3|0.3|0.2|0.8|23.7|
|beam60_ctc0.2_lm0.6/dev_other|2864|265951|98.4|0.9|0.7|0.5|2.1|37.2|
|beam60_ctc0.2_lm0.6/test_clean|2620|281530|99.4|0.2|0.4|0.2|0.8|24.3|
|beam60_ctc0.2_lm0.6/test_other|2939|272758|98.5|0.8|0.8|0.5|2.0|39.9|
|beam60_ctc0.3/dev_clean|2703|288456|99.5|0.3|0.2|0.2|0.7|27.3|
|beam60_ctc0.3/dev_other|2864|265951|98.2|1.1|0.7|0.6|2.4|43.7|
|beam60_ctc0.3/test_clean|2620|281530|99.4|0.3|0.3|0.2|0.8|29.0|
|beam60_ctc0.3/test_other|2939|272758|98.4|0.9|0.7|0.6|2.2|45.7|
|beam60_ctc0.3_lm0.6/dev_clean|2703|288456|99.5|0.2|0.2|0.2|0.7|23.5|
|beam60_ctc0.3_lm0.6/dev_other|2864|265951|98.5|0.9|0.7|0.5|2.0|37.4|
|beam60_ctc0.3_lm0.6/test_clean|2620|281530|99.5|0.2|0.3|0.2|0.7|24.1|
|beam60_ctc0.3_lm0.6/test_other|2939|272758|98.6|0.7|0.7|0.5|1.9|39.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam60_ctc0.2/dev_clean|2703|68010|97.5|1.8|0.7|0.3|2.9|27.2|
|beam60_ctc0.2/dev_other|2864|63110|94.1|4.4|1.6|0.9|6.8|43.3|
|beam60_ctc0.2/test_clean|2620|65818|97.4|1.8|0.8|0.3|2.9|28.8|
|beam60_ctc0.2/test_other|2939|65101|94.1|4.1|1.8|0.8|6.7|45.5|
|beam60_ctc0.2_lm0.6/dev_clean|2703|68010|97.8|1.4|0.8|0.3|2.5|23.7|
|beam60_ctc0.2_lm0.6/dev_other|2864|63110|95.1|3.5|1.5|0.7|5.6|37.2|
|beam60_ctc0.2_lm0.6/test_clean|2620|65818|97.6|1.5|0.9|0.3|2.7|24.3|
|beam60_ctc0.2_lm0.6/test_other|2939|65101|95.0|3.2|1.8|0.6|5.6|39.9|
|beam60_ctc0.3/dev_clean|2703|68010|97.6|1.8|0.7|0.3|2.8|27.3|
|beam60_ctc0.3/dev_other|2864|63110|94.1|4.4|1.5|0.9|6.8|43.7|
|beam60_ctc0.3/test_clean|2620|65818|97.4|1.8|0.7|0.3|2.9|29.0|
|beam60_ctc0.3/test_other|2939|65101|94.2|4.1|1.7|0.8|6.6|45.7|
|beam60_ctc0.3_lm0.6/dev_clean|2703|68010|97.9|1.5|0.7|0.3|2.4|23.5|
|beam60_ctc0.3_lm0.6/dev_other|2864|63110|95.1|3.5|1.4|0.6|5.6|37.4|
|beam60_ctc0.3_lm0.6/test_clean|2620|65818|97.7|1.5|0.8|0.3|2.5|24.1|
|beam60_ctc0.3_lm0.6/test_other|2939|65101|95.1|3.2|1.7|0.6|5.5|39.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer8.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer8_raw_en_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 3
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 59673
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 35000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960_sp/wav.scp
- speech
- sound
- - dump/raw/train_960_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0025
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 40000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ▁I
- ▁HE
- ▁THAT
- ▁WAS
- ED
- ▁IT
- ''''
- ▁HIS
- ING
- ▁YOU
- ▁WITH
- ▁FOR
- ▁HAD
- T
- ▁AS
- ▁HER
- ▁IS
- ▁BE
- ▁BUT
- ▁NOT
- ▁SHE
- D
- ▁AT
- ▁ON
- LY
- ▁HIM
- ▁THEY
- ▁ALL
- ▁HAVE
- ▁BY
- ▁SO
- ▁THIS
- ▁MY
- ▁WHICH
- ▁ME
- ▁SAID
- ▁FROM
- ▁ONE
- Y
- E
- ▁WERE
- ▁WE
- ▁NO
- N
- ▁THERE
- ▁OR
- ER
- ▁AN
- ▁WHEN
- ▁ARE
- ▁THEIR
- ▁WOULD
- ▁IF
- ▁WHAT
- ▁THEM
- ▁WHO
- ▁OUT
- M
- ▁DO
- ▁WILL
- ▁UP
- ▁BEEN
- P
- R
- ▁MAN
- ▁THEN
- ▁COULD
- ▁MORE
- C
- ▁INTO
- ▁NOW
- ▁VERY
- ▁YOUR
- ▁SOME
- ▁LITTLE
- ES
- ▁TIME
- RE
- ▁CAN
- ▁LIKE
- LL
- ▁ABOUT
- ▁HAS
- ▁THAN
- ▁DID
- ▁UPON
- ▁OVER
- IN
- ▁ANY
- ▁WELL
- ▁ONLY
- B
- ▁SEE
- ▁GOOD
- ▁OTHER
- ▁TWO
- L
- ▁KNOW
- ▁GO
- ▁DOWN
- ▁BEFORE
- A
- AL
- ▁OUR
- ▁OLD
- ▁SHOULD
- ▁MADE
- ▁AFTER
- ▁GREAT
- ▁DAY
- ▁MUST
- ▁COME
- ▁HOW
- ▁SUCH
- ▁CAME
- LE
- ▁WHERE
- ▁US
- ▁NEVER
- ▁THESE
- ▁MUCH
- ▁DE
- ▁MISTER
- ▁WAY
- G
- ▁S
- ▁MAY
- ATION
- ▁LONG
- OR
- ▁AM
- ▁FIRST
- ▁BACK
- ▁OWN
- ▁RE
- ▁AGAIN
- ▁SAY
- ▁MEN
- ▁WENT
- ▁HIMSELF
- ▁HERE
- NESS
- ▁THINK
- V
- IC
- ▁EVEN
- ▁THOUGHT
- ▁HAND
- ▁JUST
- ▁O
- ▁UN
- VE
- ION
- ▁ITS
- 'ON'
- ▁MAKE
- ▁MIGHT
- ▁TOO
- K
- ▁AWAY
- ▁LIFE
- TH
- ▁WITHOUT
- ST
- ▁THROUGH
- ▁MOST
- ▁TAKE
- ▁DON
- ▁EVERY
- F
- O
- ▁SHALL
- ▁THOSE
- ▁EYES
- AR
- ▁STILL
- ▁LAST
- ▁HOUSE
- ▁HEAD
- ABLE
- ▁NOTHING
- ▁NIGHT
- ITY
- ▁LET
- ▁MANY
- ▁OFF
- ▁BEING
- ▁FOUND
- ▁WHILE
- EN
- ▁SAW
- ▁GET
- ▁PEOPLE
- ▁FACE
- ▁YOUNG
- CH
- ▁UNDER
- ▁ONCE
- ▁TELL
- AN
- ▁THREE
- ▁PLACE
- ▁ROOM
- ▁YET
- ▁SAME
- IL
- US
- U
- ▁FATHER
- ▁RIGHT
- EL
- ▁THOUGH
- ▁ANOTHER
- LI
- RI
- ▁HEART
- IT
- ▁PUT
- ▁TOOK
- ▁GIVE
- ▁EVER
- ▁E
- ▁PART
- ▁WORK
- ERS
- ▁LOOK
- ▁NEW
- ▁KING
- ▁MISSUS
- ▁SIR
- ▁LOVE
- ▁MIND
- ▁LOOKED
- W
- RY
- ▁ASKED
- ▁LEFT
- ET
- ▁LIGHT
- CK
- ▁DOOR
- ▁MOMENT
- RO
- ▁WORLD
- ▁THINGS
- ▁HOME
- UL
- ▁THING
- LA
- ▁WHY
- ▁MOTHER
- ▁ALWAYS
- ▁FAR
- FUL
- ▁WATER
- CE
- IVE
- UR
- ▁HEARD
- ▁SOMETHING
- ▁SEEMED
- I
- LO
- ▁BECAUSE
- OL
- ▁END
- ▁TOLD
- ▁CON
- ▁YES
- ▁GOING
- ▁GOT
- RA
- IR
- ▁WOMAN
- ▁GOD
- EST
- TED
- ▁FIND
- ▁KNEW
- ▁SOON
- ▁EACH
- ▁SIDE
- H
- TON
- MENT
- ▁OH
- NE
- Z
- LING
- ▁AGAINST
- TER
- ▁NAME
- ▁MISS
- ▁QUITE
- ▁WANT
- ▁YEARS
- ▁FEW
- ▁BETTER
- ENT
- ▁HALF
- ▁DONE
- ▁ALSO
- ▁BEGAN
- ▁HAVING
- ▁ENOUGH
- IS
- ▁LADY
- ▁WHOLE
- LESS
- ▁BOTH
- ▁SEEN
- ▁SET
- ▁WHITE
- ▁COURSE
- IES
- ▁VOICE
- ▁CALLED
- ▁D
- ▁EX
- ATE
- ▁TURNED
- ▁GAVE
- ▁C
- ▁POOR
- MAN
- UT
- NA
- ▁DEAR
- ISH
- ▁GIRL
- ▁MORNING
- ▁BETWEEN
- LED
- ▁NOR
- IA
- ▁AMONG
- MA
- ▁
- ▁SMALL
- ▁REST
- ▁WHOM
- ▁FELT
- ▁HANDS
- ▁MYSELF
- ▁HIGH
- ▁M
- ▁HOWEVER
- ▁HERSELF
- ▁P
- CO
- ▁STOOD
- ID
- ▁KIND
- ▁HUNDRED
- AS
- ▁ROUND
- ▁ALMOST
- TY
- ▁SINCE
- ▁G
- AM
- ▁LA
- SE
- ▁BOY
- ▁MA
- ▁PERHAPS
- ▁WORDS
- ATED
- ▁HO
- X
- ▁MO
- ▁SAT
- ▁REPLIED
- ▁FOUR
- ▁ANYTHING
- ▁TILL
- ▁UNTIL
- ▁BLACK
- TION
- ▁CRIED
- RU
- TE
- ▁FACT
- ▁HELP
- ▁NEXT
- ▁LOOKING
- ▁DOES
- ▁FRIEND
- ▁LAY
- ANCE
- ▁POWER
- ▁BROUGHT
- VER
- ▁FIRE
- ▁KEEP
- PO
- FF
- ▁COUNTRY
- ▁SEA
- ▁WORD
- ▁CAR
- ▁DAYS
- ▁TOGETHER
- ▁IMP
- ▁REASON
- KE
- ▁INDEED
- TING
- ▁MATTER
- ▁FULL
- ▁TEN
- TIC
- ▁LAND
- ▁RATHER
- ▁AIR
- ▁HOPE
- ▁DA
- ▁OPEN
- ▁FEET
- ▁EN
- ▁FIVE
- ▁POINT
- ▁CO
- OM
- ▁LARGE
- ▁B
- ▁CL
- ME
- ▁GONE
- ▁CHILD
- INE
- GG
- ▁BEST
- ▁DIS
- UM
- ▁HARD
- ▁LORD
- OUS
- ▁WIFE
- ▁SURE
- ▁FORM
- DE
- ▁DEATH
- ANT
- ▁NATURE
- ▁BA
- ▁CARE
- ▁BELIEVE
- PP
- ▁NEAR
- ▁RO
- ▁RED
- ▁WAR
- IE
- ▁SPEAK
- ▁FEAR
- ▁CASE
- ▁TAKEN
- ▁ALONG
- ▁CANNOT
- ▁HEAR
- ▁THEMSELVES
- CI
- ▁PRESENT
- AD
- ▁MASTER
- ▁SON
- ▁THUS
- ▁LI
- ▁LESS
- ▁SUN
- ▁TRUE
- IM
- IOUS
- ▁THOUSAND
- ▁MONEY
- ▁W
- ▁BEHIND
- ▁CHILDREN
- ▁DOCTOR
- AC
- ▁TWENTY
- ▁WISH
- ▁SOUND
- ▁WHOSE
- ▁LEAVE
- ▁ANSWERED
- ▁THOU
- ▁DUR
- ▁HA
- ▁CERTAIN
- ▁PO
- ▁PASSED
- GE
- TO
- ▁ARM
- ▁LO
- ▁STATE
- ▁ALONE
- TA
- ▁SHOW
- ▁NEED
- ▁LIVE
- ND
- ▁DEAD
- ENCE
- ▁STRONG
- ▁PRE
- ▁TI
- ▁GROUND
- SH
- TI
- ▁SHORT
- IAN
- UN
- ▁PRO
- ▁HORSE
- MI
- ▁PRINCE
- ARD
- ▁FELL
- ▁ORDER
- ▁CALL
- AT
- ▁GIVEN
- ▁DARK
- ▁THEREFORE
- ▁CLOSE
- ▁BODY
- ▁OTHERS
- ▁SENT
- ▁SECOND
- ▁OFTEN
- ▁CA
- ▁MANNER
- MO
- NI
- ▁BRING
- ▁QUESTION
- ▁HOUR
- ▁BO
- AGE
- ▁ST
- ▁TURN
- ▁TABLE
- ▁GENERAL
- ▁EARTH
- ▁BED
- ▁REALLY
- ▁SIX
- 'NO'
- IST
- ▁BECOME
- ▁USE
- ▁READ
- ▁SE
- ▁VI
- ▁COMING
- ▁EVERYTHING
- ▁EM
- ▁ABOVE
- ▁EVENING
- ▁BEAUTIFUL
- ▁FEEL
- ▁RAN
- ▁LEAST
- ▁LAW
- ▁ALREADY
- ▁MEAN
- ▁ROSE
- WARD
- ▁ITSELF
- ▁SOUL
- ▁SUDDENLY
- ▁AROUND
- RED
- ▁ANSWER
- ICAL
- ▁RA
- ▁WIND
- ▁FINE
- ▁WON
- ▁WHETHER
- ▁KNOWN
- BER
- NG
- ▁TA
- ▁CAPTAIN
- ▁EYE
- ▁PERSON
- ▁WOMEN
- ▁SORT
- ▁ASK
- ▁BROTHER
- ▁USED
- ▁HELD
- ▁BIG
- ▁RETURNED
- ▁STRANGE
- ▁BU
- ▁PER
- ▁FREE
- ▁EITHER
- ▁WITHIN
- ▁DOUBT
- ▁YEAR
- ▁CLEAR
- ▁SIGHT
- ▁GRA
- ▁LOST
- ▁KEPT
- ▁F
- PE
- ▁BAR
- ▁TOWN
- ▁SLEEP
- ARY
- ▁HAIR
- ▁FRIENDS
- ▁DREAM
- ▁FELLOW
- PER
- ▁DEEP
- QUE
- ▁BECAME
- ▁REAL
- ▁PAST
- ▁MAKING
- RING
- ▁COMP
- ▁ACT
- ▁BAD
- HO
- STER
- ▁YE
- ▁MEANS
- ▁RUN
- MEN
- ▁DAUGHTER
- ▁SENSE
- ▁CITY
- ▁SOMETIMES
- ▁TOWARDS
- ▁ROAD
- ▁SP
- ▁LU
- ▁READY
- ▁FOOT
- ▁COLD
- ▁SA
- ▁LETTER
- ▁ELSE
- ▁MAR
- ▁STA
- BE
- ▁TRUTH
- ▁LE
- BO
- ▁BUSINESS
- CHE
- ▁JOHN
- ▁SUBJECT
- ▁COURT
- ▁IDEA
- ILY
- ▁RIVER
- ATING
- ▁FAMILY
- HE
- ▁DIDN
- ▁GLAD
- ▁SEVERAL
- IAL
- ▁UNDERSTAND
- ▁SC
- ▁POSSIBLE
- ▁DIFFERENT
- ▁RETURN
- ▁ARMS
- ▁LOW
- ▁HOLD
- ▁TALK
- ▁RU
- ▁WINDOW
- ▁INTEREST
- ▁SISTER
- SON
- ▁SH
- ▁BLOOD
- ▁SAYS
- ▁CAP
- ▁DI
- ▁HUMAN
- ▁CAUSE
- NCE
- ▁THANK
- ▁LATE
- GO
- ▁CUT
- ▁ACROSS
- ▁STORY
- NT
- ▁COUNT
- ▁ABLE
- DY
- LEY
- ▁NUMBER
- ▁STAND
- ▁CHURCH
- ▁THY
- ▁SUPPOSE
- LES
- BLE
- OP
- ▁EFFECT
- BY
- ▁K
- ▁NA
- ▁SPOKE
- ▁MET
- ▁GREEN
- ▁HUSBAND
- ▁RESPECT
- ▁PA
- ▁FOLLOWED
- ▁REMEMBER
- ▁LONGER
- ▁AGE
- ▁TAKING
- ▁LINE
- ▁SEEM
- ▁HAPPY
- LAND
- EM
- ▁STAY
- ▁PLAY
- ▁COMMON
- ▁GA
- ▁BOOK
- ▁TIMES
- ▁OBJECT
- ▁SEVEN
- QUI
- DO
- UND
- ▁FL
- ▁PRETTY
- ▁FAIR
- WAY
- ▁WOOD
- ▁REACHED
- ▁APPEARED
- ▁SWEET
- ▁FALL
- BA
- ▁PASS
- ▁SIGN
- ▁TREE
- IONS
- ▁GARDEN
- ▁ILL
- ▁ART
- ▁REMAIN
- ▁OPENED
- ▁BRIGHT
- ▁STREET
- ▁TROUBLE
- ▁PAIN
- ▁CONTINUED
- ▁SCHOOL
- OUR
- ▁CARRIED
- ▁SAYING
- HA
- ▁CHANGE
- ▁FOLLOW
- ▁GOLD
- ▁SW
- ▁FEELING
- ▁COMMAND
- ▁BEAR
- ▁CERTAINLY
- ▁BLUE
- ▁NE
- CA
- ▁WILD
- ▁ACCOUNT
- ▁OUGHT
- UD
- ▁T
- ▁BREATH
- ▁WANTED
- ▁RI
- ▁HEAVEN
- ▁PURPOSE
- ▁CHARACTER
- ▁RICH
- ▁PE
- ▁DRESS
- OS
- FA
- ▁TH
- ▁ENGLISH
- ▁CHANCE
- ▁SHIP
- ▁VIEW
- ▁TOWARD
- AK
- ▁JOY
- ▁JA
- ▁HAR
- ▁NEITHER
- ▁FORCE
- ▁UNCLE
- DER
- ▁PLAN
- ▁PRINCESS
- DI
- ▁CHIEF
- ▁HAT
- ▁LIVED
- ▁AB
- ▁VISIT
- ▁MOR
- TEN
- ▁WALL
- UC
- ▁MINE
- ▁PLEASURE
- ▁SMILE
- ▁FRONT
- ▁HU
- ▁DEAL
- OW
- ▁FURTHER
- GED
- ▁TRIED
- DA
- VA
- ▁NONE
- ▁ENTERED
- ▁QUEEN
- ▁PAY
- ▁EL
- ▁EXCEPT
- ▁SHA
- ▁FORWARD
- ▁EIGHT
- ▁ADDED
- ▁PUBLIC
- ▁EIGHTEEN
- ▁STAR
- ▁HAPPENED
- ▁LED
- ▁WALKED
- ▁ALTHOUGH
- ▁LATER
- ▁SPIRIT
- ▁WALK
- ▁BIT
- ▁MEET
- LIN
- ▁FI
- LT
- ▁MOUTH
- ▁WAIT
- ▁HOURS
- ▁LIVING
- ▁YOURSELF
- ▁FAST
- ▁CHA
- ▁HALL
- ▁BEYOND
- ▁BOAT
- ▁SECRET
- ENS
- ▁CHAIR
- RN
- ▁RECEIVED
- ▁CAT
- RESS
- ▁DESIRE
- ▁GENTLEMAN
- UGH
- ▁LAID
- EVER
- ▁OCCASION
- ▁WONDER
- ▁GU
- ▁PARTY
- DEN
- ▁FISH
- ▁SEND
- ▁NEARLY
- ▁TRY
- CON
- ▁SEEMS
- RS
- ▁BELL
- ▁BRA
- ▁SILENCE
- IG
- ▁GUARD
- ▁DIE
- ▁DOING
- ▁TU
- ▁COR
- ▁EARLY
- ▁BANK
- ▁FIGURE
- IF
- ▁ENGLAND
- ▁MARY
- ▁AFRAID
- LER
- ▁FO
- ▁WATCH
- ▁FA
- ▁VA
- ▁GRE
- ▁AUNT
- PED
- ▁SERVICE
- ▁JE
- ▁PEN
- ▁MINUTES
- ▁PAN
- ▁TREES
- NED
- ▁GLASS
- ▁TONE
- ▁PLEASE
- ▁FORTH
- ▁CROSS
- ▁EXCLAIMED
- ▁DREW
- ▁EAT
- ▁AH
- ▁GRAVE
- ▁CUR
- PA
- URE
- CENT
- ▁MILES
- ▁SOFT
- ▁AGO
- ▁POSITION
- ▁WARM
- ▁LENGTH
- ▁NECESSARY
- ▁THINKING
- ▁PICTURE
- ▁PI
- SHIP
- IBLE
- ▁HEAVY
- ▁ATTENTION
- ▁DOG
- ABLY
- ▁STANDING
- ▁NATURAL
- ▁APPEAR
- OV
- ▁CAUGHT
- VO
- ISM
- ▁SPRING
- ▁EXPERIENCE
- ▁PAT
- OT
- ▁STOPPED
- ▁REGARD
- ▁HARDLY
- ▁SELF
- ▁STRENGTH
- ▁GREW
- ▁KNIGHT
- ▁OPINION
- ▁WIDE
- ▁INSTEAD
- ▁SOUTH
- ▁TRANS
- ▁CORNER
- ▁LEARN
- ▁ISLAND
- ▁MI
- ▁THIRD
- ▁STE
- ▁STRAIGHT
- ▁TEA
- ▁BOUND
- ▁SEEING
- ▁JU
- ▁DINNER
- ▁BEAUTY
- ▁PEACE
- AH
- ▁REP
- ▁SILENT
- ▁CRE
- ALLY
- RIC
- ▁STEP
- ▁VER
- ▁JO
- GER
- ▁SITTING
- ▁THIRTY
- ▁SAVE
- ENED
- ▁GLANCE
- ▁REACH
- ▁ACTION
- ▁SAL
- ▁SAD
- ▁STONE
- ITIES
- ▁FRENCH
- ▁STRUCK
- ▁PAPER
- ▁WHATEVER
- ▁SUB
- ▁DISTANCE
- ▁WRONG
- ▁KNOWLEDGE
- ▁SAFE
- ▁SNOW
- ▁MUSIC
- ▁FIFTY
- RON
- ▁ATTEMPT
- ▁GOVERNMENT
- TU
- ▁CROWD
- ▁BESIDES
- ▁LOVED
- ▁BOX
- ▁DIRECTION
- ▁TRAIN
- ▁NORTH
- ▁THICK
- ▁GETTING
- AV
- ▁FLOOR
- ▁COMPANY
- ▁BLOW
- ▁PLAIN
- TRO
- ▁BESIDE
- ▁ROCK
- ▁IMMEDIATELY
- FI
- ▁SHADOW
- ▁SIT
- ORS
- ILE
- ▁DRINK
- ▁SPOT
- ▁DANGER
- ▁AL
- ▁SAINT
- ▁SLOWLY
- ▁PALACE
- IER
- ▁RESULT
- ▁PETER
- ▁FOREST
- ▁BELONG
- ▁SU
- ▁PAR
- RIS
- ▁TEARS
- ▁APPEARANCE
- ▁GATE
- BU
- ITION
- ▁QUICKLY
- ▁QUIET
- ▁LONDON
- ▁START
- ▁BROWN
- TRA
- KIN
- ▁CONSIDER
- ▁BATTLE
- ▁ANNE
- ▁PIECE
- ▁DIED
- ▁SUCCESS
- ▁LIPS
- ▁FILLED
- ▁FORGET
- ▁POST
- IFIED
- ▁MARGARET
- ▁FOOD
- HAM
- ▁PLEASANT
- ▁FE
- ▁EXPRESSION
- ▁POCKET
- ▁FRESH
- ▁WEAR
- TRI
- ▁BROKEN
- ▁LAUGHED
- GING
- ▁FOLLOWING
- WN
- IP
- ▁TOUCH
- ▁YOUTH
- ATIVE
- ▁LEG
- ▁WEEK
- ▁REMAINED
- ▁EASY
- NER
- RK
- ▁ENTER
- ▁FIGHT
- ▁PLACED
- ▁TRAVEL
- ▁SIMPLE
- ▁GIRLS
- ▁WAITING
- ▁STOP
- ▁WAVE
- AU
- ▁WISE
- ▁CAMP
- TURE
- UB
- ▁VE
- ▁OFFICE
- ▁GRAND
- ▁FIT
- ▁JUDGE
- UP
- MENTS
- ▁QUICK
- HI
- ▁FLO
- RIES
- VAL
- ▁COMFORT
- ▁PARTICULAR
- ▁STARTED
- ▁SUIT
- ▁NI
- ▁PALE
- ▁IMPOSSIBLE
- ▁HOT
- ▁CONVERSATION
- ▁SCENE
- ▁BOYS
- ▁WIN
- ▁BRE
- ▁SOCIETY
- ▁OUTSIDE
- ▁WRITE
- ▁EFFORT
- ▁TALKING
- ▁FORTUNE
- ▁NINE
- ▁WA
- ▁SINGLE
- ▁RULE
- ▁PORT
- ▁WINTER
- ▁CAST
- ▁CRA
- ▁HAPPEN
- ▁CRO
- ▁SHUT
- NING
- ▁GUN
- ▁NOBLE
- ▁BEGIN
- ▁PATH
- ▁SKY
- ▁WONDERFUL
- ▁SUDDEN
- ▁ARMY
- ▁CHE
- ▁WORTH
- ▁MOUNTAIN
- ▁MIN
- AG
- ▁FLU
- ▁GRACE
- ▁CHAPTER
- ▁BELOW
- ▁RING
- ▁TURNING
- ▁IRON
- ▁TOP
- ▁AFTERNOON
- ORY
- ▁EVIL
- ▁TRUST
- ▁BOW
- ▁TRI
- ▁SAIL
- ▁CONTENT
- ▁HORSES
- ITE
- ▁SILVER
- AP
- ▁LAD
- ▁RUNNING
- ▁HILL
- ▁BEGINNING
- ▁MAD
- ▁HABIT
- GRA
- ▁CLOTHES
- ▁MORROW
- ▁CRY
- ▁FASHION
- ▁PRESENCE
- ▁Z
- FE
- ▁ARRIVED
- ▁QUARTER
- ▁PERFECT
- ▁WO
- ▁TRA
- ▁USUAL
- ▁NECK
- ▁MARRIED
- ▁SEAT
- ▁WI
- ▁GAR
- ▁SAND
- ▁SHORE
- ▁GIVING
- NY
- ▁PROBABLY
- ▁MINUTE
- ▁EXPECT
- ▁DU
- ▁SHOT
- ▁INSTANT
- ▁DEGREE
- ▁COLOR
- ▁WEST
- RT
- ▁MARCH
- ▁BIRD
- ▁SHOWED
- ▁GREATER
- ▁SERIOUS
- ▁CARRY
- ▁COVERED
- ▁FORMER
- ▁LOUD
- ▁MOVED
- ▁MASS
- ▁SEEK
- ▁CHO
- GEN
- ▁ROMAN
- IB
- ▁MOON
- ▁BOARD
- ▁STREAM
- ▁EASILY
- ▁WISHED
- ▁SEARCH
- ▁COULDN
- ▁MONTHS
- ▁SICK
- LIE
- ▁DUTY
- ▁TWELVE
- ▁FAINT
- ▁STRANGER
- ▁SURPRISE
- ▁KILL
- ▁LEAVING
- ▁JOURNEY
- ▁SCARCELY
- ▁RAISED
- ▁SPEAKING
- ▁TERRIBLE
- ▁TOM
- ▁FIELD
- ▁GAME
- ▁QUA
- ▁PROMISE
- ▁LIE
- ▁CONDITION
- ▁TRO
- ▁PERSONAL
- ▁TALL
- ▁STICK
- ▁THREW
- ▁MARRY
- ▁VAN
- ▁BURN
- ▁ACCORDING
- ▁RISE
- ▁ATTACK
- ▁SWORD
- ▁GUESS
- ▁THOUGHTS
- ▁THIN
- ▁THROW
- ▁CALM
- SIDE
- ▁VILLAGE
- ▁DEN
- ▁ANXIOUS
- ▁MER
- GI
- ▁EXPECTED
- ▁BALL
- ▁ESPECIALLY
- ▁CHARGE
- ▁MEASURE
- ISE
- ▁NICE
- ▁TRYING
- ▁ALLOW
- ▁SHARP
- ▁BREAD
- ▁HONOUR
- ▁HONOR
- ▁ENTIRELY
- ▁BILL
- ▁BRI
- ▁WRITTEN
- ▁AR
- ▁BROKE
- ▁KILLED
- ▁MARK
- ▁VEN
- ▁LADIES
- ▁LEARNED
- ▁FLOWERS
- PLE
- ▁FORTY
- ▁OFFER
- ▁HAPPINESS
- ▁PRAY
- ▁CLASS
- ▁FER
- ▁PRINCIPLE
- GU
- ▁BOOKS
- ▁SHAPE
- ▁SUMMER
- ▁JACK
- ▁DRAW
- ▁GOLDEN
- ▁DECIDED
- ▁LEAD
- ▁UNLESS
- ▁HARM
- ▁LISTEN
- HER
- ▁SHOOK
- ▁INFLUENCE
- ▁PERFECTLY
- ▁MARRIAGE
- ▁BROAD
- ▁ESCAPE
- ▁STATES
- ▁MIDDLE
- ▁PLANT
- ▁MIL
- ▁MOVEMENT
- ▁NOISE
- ▁ENEMY
- ▁HISTORY
- ▁BREAK
- ROUS
- ▁UNDERSTOOD
- ▁LATTER
- FER
- ▁COMES
- ▁MERELY
- ▁SIMPLY
- WI
- ▁IMAGINE
- ▁LOWER
- ▁CONDUCT
- ▁BORN
- WA
- ▁YARD
- ▁KA
- ▁CLOSED
- ▁NOTE
- GA
- ▁STRA
- RAN
- ▁EXIST
- EV
- ▁SPEECH
- ▁BITTER
- JO
- ▁MAKES
- ▁GRASS
- ▁REPLY
- ▁CHANGED
- ▁MON
- ▁LYING
- ▁DANCE
- ▁FINALLY
- ▁AMERICAN
- ▁ENJOY
- ▁CONTAIN
- ▁MEANT
- USE
- ▁OBSERVED
- THER
- ▁LAUGH
- ▁AFTERWARDS
- ▁BEAT
- ▁RACE
- ▁EQUAL
- ▁RAIN
- PS
- ▁STEPS
- ▁BENEATH
- ▁TAIL
- ▁TASTE
- IO
- EY
- ▁CHAR
- ▁GE
- GN
- TIN
- ▁GROW
- ▁TE
- IANS
- ▁MOVE
- ▁REPEATED
- ▁DRIVE
- TUR
- ▁SI
- CLOCK
- ▁BRAVE
- ▁MADAME
- ▁LOT
- ▁CASTLE
- ▁HI
- AND
- ▁FUTURE
- ▁RELATION
- ▁SORRY
- ▁HEALTH
- ▁DICK
- ▁R
- ▁BUILDING
- ▁EDGE
- ▁BLESS
- ▁SPITE
- WE
- ▁MIS
- ▁PRISONER
- ▁ALLOWED
- ▁PH
- ▁CATCH
- MER
- ETH
- ▁COAT
- ▁COMPLETE
- ▁WOULDN
- ▁CREATURE
- ▁YELLOW
- ▁IMPORTANT
- ▁ADD
- ▁PASSING
- ▁DARKNESS
- ▁CARRIAGE
- ▁MILL
- ▁FIFTEEN
- NCY
- ▁HUNG
- ▁OB
- ▁PLEASED
- ▁SPREAD
- ▁CURIOUS
- ▁WORSE
- ▁CIRCUMSTANCES
- ▁GI
- LAR
- ▁CAL
- ▁HY
- ▁MERE
- ▁JANE
- ▁EAST
- BI
- ▁CUP
- ▁BLIND
- ▁PASSION
- ▁DISCOVERED
- ▁NOTICE
- ▁REPORT
- ▁SPACE
- ▁PRESENTLY
- ▁SORROW
- ▁PACK
- ▁DIN
- CY
- ▁DRY
- ▁ANCIENT
- ▁DRESSED
- ▁COVER
- ▁VO
- ▁EXISTENCE
- ▁EXACTLY
- ▁BEAST
- ▁PROPER
- ▁DROPPED
- ▁CLEAN
- ▁COLOUR
- ▁HOST
- ▁CHAMBER
- ▁FAITH
- LET
- ▁DETERMINED
- ▁PRIEST
- ▁STORM
- ▁SKIN
- ▁DARE
- ▁PERSONS
- ▁PICK
- ▁NARROW
- ▁SUPPORT
- ▁PRIVATE
- ▁SMILED
- ▁COUSIN
- ▁DRAWING
- ▁ATTEND
- ▁COOK
- ▁PREVENT
- ▁VARIOUS
- ▁BLA
- ▁FIXED
- ▁WEAK
- THE
- ▁HOLE
- ▁BOTTOM
- ▁NOBODY
- ADE
- ▁LEGS
- ITCH
- ▁INDIVIDUAL
- ▁EARS
- LIKE
- ▁ADVANTAGE
- ▁FRANCE
- ▁BON
- ▁WINE
- ▁LIVES
- OD
- ▁WALLS
- ▁TIRED
- ▁SHOP
- ▁ANIMAL
- ▁CRU
- ▁WROTE
- ▁ROYAL
- ▁CONSIDERED
- ▁MORAL
- ▁COMPANION
- ▁LOSE
- ▁ISN
- ▁BAG
- ▁LAKE
- ▁INTER
- ▁COM
- ▁LETTERS
- ▁LUCK
- ▁EAR
- ▁GERMAN
- ▁PET
- ▁SAKE
- ▁DROP
- ▁PAID
- ▁BREAKFAST
- ▁LABOR
- ▁DESERT
- ▁DECLARED
- ▁HUM
- ▁STUDY
- ▁INSTANCE
- ONE
- ▁SOMEWHAT
- ▁CLOTH
- ▁SPECIAL
- ▁COLONEL
- ▁SONG
- ▁MAIN
- ▁VALUE
- ▁PROUD
- ▁EXPRESS
- ▁NATION
- ▁HANDSOME
- ▁CONFESS
- ▁PU
- ▁PASSAGE
- ▁PERIOD
- ▁CUSTOM
- ▁HURT
- ▁SHOULDER
- ▁CHRIST
- ZA
- ▁RECEIVE
- ▁DIFFICULT
- ▁DEPEND
- ▁MEETING
- ▁CHI
- ▁GEN
- LIGHT
- ▁BELIEVED
- ▁SOCIAL
- ▁DIFFICULTY
- ▁GREATEST
- ▁DRAWN
- ▁GRANT
- ▁BIRDS
- ▁ANGRY
- ▁HEAT
- UFF
- ▁DUE
- ▁PLACES
- ▁SIN
- ▁COURAGE
- ▁EVIDENTLY
- ▁GENTLE
- ▁CRUEL
- ▁GEORGE
- ▁GRI
- ▁SERVANT
- ▁U
- ▁PURE
- OOK
- ▁KNOWS
- ▁KNOWING
- LF
- ▁WRITING
- ▁REMEMBERED
- ▁CU
- ▁HOLDING
- ▁TENDER
- ▁QUI
- ▁BURST
- ▁SURELY
- IGN
- ▁VALLEY
- ▁FU
- ▁BUTTER
- ▁SPOKEN
- ▁STORE
- ▁DISC
- ▁CHRISTIAN
- ▁PARIS
- ▁HENRY
- ▁FINISHED
- ▁PROVE
- ▁FOOL
- ▁SOLDIERS
- ▁LANGUAGE
- ▁INSIDE
- ▁BAN
- ▁FALLEN
- ROW
- ▁MAL
- ▁BABY
- ▁SITUATION
- ▁WATCHED
- ANS
- ▁RUIN
- ▁GENTLEMEN
- ▁FRO
- ▁FANCY
- ▁ACCEPT
- ▁SEASON
- ▁OURSELVES
- ▁SAN
- ▁SPEED
- IZED
- ▁COOL
- ▁SERVE
- ▁VESSEL
- ▁WILLIAM
- ▁OBLIGED
- ▁GROUP
- FORM
- ▁GOES
- UOUS
- ▁LEAVES
- ▁PECULIAR
- ▁NEWS
- ▁VAIN
- ▁EVERYBODY
- ▁PIN
- UG
- ▁FORGOTTEN
- ▁FRA
- GAN
- ▁CAREFULLY
- ▁FLASH
- UCH
- ▁FUR
- ▁MURDER
- ▁DELIGHT
- ▁WAITED
- ▁RENDER
- ▁PROPERTY
- ▁NOTICED
- ▁ROLL
- ▁KNOCK
- ▁EARNEST
- KI
- ▁HONEST
- ▁PROMISED
- ▁BAL
- AW
- ▁WALKING
- ANG
- ▁SQUARE
- ▁QUIETLY
- ▁CLOUD
- WOOD
- ▁FORMED
- ▁HIGHER
- ▁BUILT
- ▁FATE
- ▁TEACH
- MY
- ▁FALSE
- ▁YORK
- ▁DUST
- ▁CLIMB
- ▁FOND
- ▁GROWN
- ▁DESCEND
- ▁RAG
- ▁FRUIT
- ▁GENERALLY
- ▁OFFERED
- ▁ER
- ▁NURSE
- POSE
- ▁SPENT
- ▁JOIN
- ▁STATION
- ▁MEANING
- ▁SMOKE
- HOOD
- ▁ROUGH
- JU
- ▁LIKELY
- ▁SURFACE
- ▁KE
- ▁MONTH
- ▁POSSESSION
- ▁TONGUE
- ▁DUKE
- ▁NOSE
- ▁LAUGHING
- ▁WEATHER
- ▁WHISPERED
- ▁SYSTEM
- ▁LAWS
- DDLE
- ▁TOUCHED
- ▁TRADE
- LD
- ▁SURPRISED
- RIN
- ▁ARCH
- ▁WEALTH
- FOR
- ▁TEMPER
- ▁FRANK
- ▁GAL
- ▁BARE
- ▁OPPORTUNITY
- ▁CLAIM
- ▁ANIMALS
- ▁REV
- ▁COST
- ▁WASH
- ZE
- ▁CORN
- ▁OPPOSITE
- ▁POLICE
- ▁IDEAS
- LON
- ▁KEY
- ▁READING
- ▁COLLECT
- CHED
- ▁H
- ▁CROWN
- ▁TAR
- ▁SWIFT
- ▁SHOULDERS
- ▁ICE
- ▁GRAY
- ▁SHARE
- ▁PREPARED
- ▁GRO
- ▁UND
- ▁TER
- ▁EMPTY
- CING
- ▁SMILING
- ▁AVOID
- ▁DIFFERENCE
- ▁EXPLAIN
- ▁POUR
- ▁ATTRACT
- ▁OPENING
- ▁WHEEL
- ▁MATERIAL
- ▁BREAST
- ▁SUFFERING
- ▁DISTINCT
- ▁BOOT
- ▁ROW
- ▁FINGERS
- HAN
- ▁ALTOGETHER
- ▁FAT
- ▁PAPA
- ▁BRAIN
- ▁ASLEEP
- ▁GREY
- ▁SUM
- ▁GAS
- ▁WINDOWS
- ▁ALIVE
- ▁PROCEED
- ▁FLOWER
- ▁LEAP
- ▁PUR
- ▁PIECES
- ▁ALTER
- ▁MEMORY
- IENT
- ▁FILL
- ▁CLO
- ▁THROWN
- ▁KINGDOM
- ▁RODE
- IUS
- ▁MAID
- ▁DIM
- ▁BAND
- ▁VIRTUE
- ▁DISH
- ▁GUEST
- ▁LOSS
- ▁CAUSED
- ▁MOTION
- ▁POT
- ▁MILLION
- ▁FAULT
- ▁LOVELY
- ▁HERO
- PPING
- ▁UNITED
- ▁SPI
- SOME
- BRA
- ▁MOUNTAINS
- ▁NU
- ▁SATISFIED
- ▁DOLLARS
- ▁LOVER
- ▁CONCEAL
- ▁VAST
- ▁PULL
- ▁HATH
- ▁RUSH
- ▁J
- ▁DESPAIR
- EX
- ▁HEIGHT
- ▁CE
- ▁BENT
- ▁PITY
- ▁RISING
- ATH
- ▁PRIDE
- ▁HURRY
- KA
- ▁SETTLED
- ▁JUSTICE
- ▁LIFTED
- PEN
- ▁SOLDIER
- ▁FINDING
- ▁REMARK
- ▁REGULAR
- ▁STRUGGLE
- ▁MACHINE
- ▁SING
- ▁HURRIED
- ▁SUFFICIENT
- ▁REPRESENT
- ▁DOUBLE
- ▁ALARM
- ▁SUPPER
- ▁DREADFUL
- ▁FORE
- ATOR
- ▁STOCK
- ▁TIN
- ▁EXAMPLE
- ▁ROOF
- ▁FLOW
- ▁SUPPOSED
- ▁PRESERV
- ▁L
- ▁LISTENED
- OC
- ▁STO
- ▁SECURE
- ▁FRIGHTENED
- ▁DISTURB
- ▁EMOTION
- ▁SERVANTS
- ▁YO
- ▁BUY
- ▁FORCED
- ▁KITCHEN
- ▁TERROR
- ▁STAIRS
- ▁SIXTY
- KER
- ▁ORDINARY
- ▁DIRECTLY
- ▁HEADS
- ▁METHOD
- ▁FORGIVE
- ▁AWFUL
- ▁REFLECT
- ▁GREATLY
- ▁TALKED
- ▁RIDE
- STONE
- ▁FAVOUR
- ▁WELCOME
- ▁SEIZED
- OU
- ▁CONTROL
- ▁ORDERED
- ▁ANGEL
- ▁USUALLY
- ▁POET
- ▁BOLD
- LINE
- ▁ADVENTURE
- ▁WATCHING
- ▁FOLK
- ▁MISTRESS
- IZE
- ▁GROWING
- ▁CAVE
- ▁EVIDENCE
- ▁FINGER
- ▁SEVENTEEN
- ▁MOVING
- EOUS
- ▁DOESN
- ▁COW
- ▁TYPE
- ▁BOIL
- ▁TALE
- ▁DELIVER
- ▁FARM
- ▁MONSIEUR
- ▁GATHERED
- ▁FEELINGS
- ▁RATE
- ▁REMARKED
- ▁PUTTING
- ▁MAT
- ▁CONTRARY
- ▁CRIME
- ▁PLA
- ▁COL
- ▁NEARER
- TES
- ▁CIVIL
- ▁SHAME
- ▁LOOSE
- ▁DISCOVER
- ▁FLAT
- ▁TWICE
- ▁FAIL
- VIS
- ▁UNC
- EA
- ▁EUROPE
- ▁PATIENT
- ▁UNTO
- ▁SUFFER
- ▁PAIR
- ▁TREASURE
- OSE
- ▁EAGER
- ▁FLY
- ▁N
- ▁VAL
- ▁DAN
- ▁SALT
- ▁BORE
- BBE
- ▁ARTHUR
- ▁AFFAIRS
- ▁SLOW
- ▁CONSIST
- ▁DEVIL
- LAN
- ▁AFFECTION
- ▁ENGAGED
- ▁KISS
- ▁YA
- ▁OFFICER
- IFICATION
- ▁LAMP
- ▁PARTS
- HEN
- ▁MILK
- ▁PROCESS
- ▁GIFT
- ▁PULLED
- ▁HID
- ▁RAY
- ▁EXCELLENT
- ▁IMPRESSION
- ▁AUTHORITY
- ▁PROVED
- ▁TELLING
- TTE
- ▁TOWER
- ▁CONSEQUENCE
- ▁FAVOR
- ▁FLEW
- ▁CHARLES
- ISTS
- ▁ADDRESS
- ▁FAMILIAR
- ▁LIMIT
- ▁CONFIDENCE
- ▁RARE
- ▁WEEKS
- ▁WOODS
- ▁INTENTION
- ▁DIRECT
- ▁PERFORM
- ▁SOLEMN
- ▁DISTANT
- ▁IMAGE
- ▁PRESIDENT
- ▁FIRM
- ▁INDIAN
- ▁RANK
- ▁LIKED
- ▁AGREE
- ▁HOUSES
- ▁WIL
- ▁MATTERS
- ▁PRISON
- ▁MODE
- ▁MAJOR
- ▁WORKING
- ▁SLIP
- ▁WEIGHT
- ▁AWARE
- ▁BUSY
- ▁LOOKS
- ▁WOUND
- ▁THOR
- ▁BATH
- ▁EXERCISE
- ▁SIMILAR
- ▁WORE
- ▁AMOUNT
- ▁QUESTIONS
- ▁VIOLENT
- ▁EXCUSE
- ▁ASIDE
- ▁TUR
- ▁DULL
- OF
- ▁EMPEROR
- ▁NEVERTHELESS
- ▁SHOUT
- ▁EXPLAINED
- ▁SIZE
- ▁ACCOMPLISH
- FORD
- CAN
- ▁MISTAKE
- ▁INSTANTLY
- ▁SMOOTH
- ▁STRIKE
- ▁BOB
- ISED
- ▁HORROR
- ▁SCIENCE
- ▁PROTEST
- ▁MANAGE
- ▁OBEY
- ▁NECESSITY
- ▁SPLENDID
- ▁PRESS
- ▁INTERESTING
- ▁RELIGION
- ▁UNKNOWN
- ▁FIERCE
- ▁DISAPPEARED
- ▁HOLY
- ▁HATE
- ▁PLAYED
- ▁LIN
- ▁NATURALLY
- ▁DROVE
- ▁LOUIS
- TIES
- ▁BRAND
- INESS
- RIE
- ▁SHOOT
- ▁CONSENT
- ▁SEATED
- ▁LINES
- GUE
- ▁AGREED
- ▁CIRCLE
- ▁STIR
- ▁STREETS
- ▁TASK
- ▁RID
- ▁PRODUCED
- ▁ACCIDENT
- ▁WITNESS
- ▁LIBERTY
- ▁DETAIL
- ▁MINISTER
- ▁POWERFUL
- ▁SAVAGE
- ▁SIXTEEN
- ▁PRETEND
- ▁COAST
- ▁SQU
- ▁UTTER
- ▁NAMED
- ▁CLEVER
- ▁ADMIT
- ▁COUPLE
- ▁WICKED
- ▁MESSAGE
- ▁TEMPLE
- ▁STONES
- ▁YESTERDAY
- ▁HILLS
- DAY
- ▁SLIGHT
- ▁DIAMOND
- ▁POSSIBLY
- ▁AFFAIR
- ▁ORIGINAL
- ▁HEARING
- ▁WORTHY
- ▁SELL
- NEY
- ICK
- ▁COTTAGE
- ▁SACRIFICE
- ▁PROGRESS
- ▁SHOCK
- ▁DESIGN
- ▁SOUGHT
- ▁PIT
- ▁SUNDAY
- ▁OTHERWISE
- ▁CABIN
- ▁PRAYER
- ▁DWELL
- ▁GAIN
- ▁BRIDGE
- ▁PARTICULARLY
- ▁YIELD
- ▁TREAT
- RIGHT
- ▁OAK
- ▁ROPE
- WIN
- ▁ORDERS
- ▁SUSPECT
- ▁EDWARD
- AB
- ▁ELEVEN
- ▁TEETH
- ▁OCCURRED
- DDING
- ▁AMERICA
- ▁FALLING
- ▁LION
- ▁DEPART
- ▁KEEPING
- ▁DEMAND
- ▁PAUSED
- ▁CEASED
- INA
- ▁FUN
- ▁CHEER
- ▁PARDON
- ▁NATIVE
- LUS
- LOW
- ▁DOGS
- ▁REQUIRED
- ILITY
- ▁ELECT
- ▁ENTERTAIN
- ITUDE
- ▁HUGE
- ▁CARRYING
- ▁BLU
- ▁INSIST
- ▁SATISFACTION
- ▁HUNT
- ▁COUNTENANCE
- ▁UPPER
- ▁MAIDEN
- ▁FAILED
- ▁JAMES
- ▁FOREIGN
- ▁GATHER
- ▁TEST
- BOARD
- ▁TERMS
- ▁SILK
- ▁BEG
- ▁BROTHERS
- ▁PAGE
- ▁KNEES
- ▁SHOWN
- ▁PROFESSOR
- ▁MIGHTY
- ▁DEFI
- ▁CHARM
- ▁REQUIRE
- ▁LOG
- MORE
- ▁PROOF
- ▁POSSESSED
- ▁SOFTLY
- ▁UNFORTUNATE
- ▁PRICE
- ▁SEVERE
- ▁SINGING
- ▁STAGE
- ▁FREEDOM
- ▁SHOUTED
- ▁FARTHER
- ▁MAJESTY
- ▁PREVIOUS
- ▁GUIDE
- ▁MATCH
- ▁CHEST
- ▁INTENDED
- ▁BI
- ▁EXCITEMENT
- ▁OFFICERS
- ▁SUR
- ▁SHAKE
- ▁SENTIMENT
- ▁GENTLY
- ▁SUCCEEDED
- ▁MENTION
- ▁LOCK
- ▁ACQUAINTANCE
- ▁IMAGINATION
- ▁PHYSICAL
- ▁LEADING
- ▁SLAVE
- ▁CART
- ▁POINTED
- ▁STEAM
- ▁SHADE
- ▁PIPE
- ▁BASE
- ▁INVENT
- ▁ALAS
- ▁WORKED
- ▁REGRET
- ▁BUR
- ▁FAITHFUL
- ▁MENTIONED
- ▁RECORD
- ▁COMPLAIN
- ▁SUPERIOR
- ▁BAY
- ▁PAL
- EMENT
- UE
- ▁SEVENTY
- ▁HOTEL
- ▁SHEEP
- ▁MEAL
- ▁ADVICE
- ▁HIDDEN
- ▁DEMANDED
- ▁CONSCIOUS
- ▁BROW
- ▁POSSESS
- ▁FOURTH
- ▁EVENTS
- ▁FRI
- ▁PRAISE
- ▁ADVANCED
- ▁RESOLVED
- ▁STUFF
- ▁CHEERFUL
- ▁BIRTH
- ▁GRIEF
- ▁AFFORD
- ▁FAIRY
- ▁WAKE
- ▁SIDES
- ▁SUBSTANCE
- ▁ARTICLE
- ▁LEVEL
- ▁MIST
- ▁JOINED
- ▁PRACTICAL
- ▁CLEARLY
- ▁TRACE
- ▁AWAKE
- ▁OBSERVE
- ▁BASKET
- ▁LACK
- VILLE
- ▁SPIRITS
- ▁EXCITED
- ▁ABANDON
- ▁SHINING
- ▁FULLY
- ▁CALLING
- ▁CONSIDERABLE
- ▁SPRANG
- ▁MILE
- ▁DOZEN
- ▁PEA
- ▁DANGEROUS
- ▁WIT
- ▁JEW
- ▁POUNDS
- ▁FOX
- ▁INFORMATION
- ▁LIES
- ▁DECK
- NNY
- ▁PAUL
- ▁STARS
- ▁ANGER
- ▁SETTLE
- ▁WILLING
- ▁ADAM
- ▁FACES
- ▁SMITH
- ▁IMPORTANCE
- ▁STRAIN
- WAR
- ▁SAM
- ▁FEATHER
- ▁SERVED
- ▁AUTHOR
- ▁PERCEIVED
- ▁FLAME
- ▁DIVINE
- ▁TRAIL
- ▁ANYBODY
- ▁SIGH
- ▁DELICATE
- KY
- ▁FOLD
- ▁HAVEN
- ▁DESIRED
- ▁CURIOSITY
- ▁PRACTICE
- ▁CONSIDERATION
- ▁ABSOLUTELY
- ▁CITIZEN
- ▁BOTTLE
- ▁INTERESTED
- ▁MEAT
- ▁OCCUPIED
- ▁CHOOSE
- ▁THROAT
- ETTE
- ▁CANDLE
- ▁DAWN
- ▁PROTECT
- ▁SENTENCE
- IED
- ▁ROCKS
- ▁PORTION
- ▁APPARENTLY
- ▁PRESENTED
- ▁TIGHT
- ▁ACTUALLY
- ▁DYING
- ▁HAM
- ▁DAILY
- ▁SUFFERED
- ▁POLITICAL
- ▁BODIES
- ▁MODERN
- ▁COMPLETELY
- ▁SOONER
- TAN
- ▁PROP
- ▁ADVANCE
- ▁REFUSED
- ▁FARMER
- ▁POLITE
- ▁THUNDER
- ▁BRIEF
- ▁ELSIE
- ▁SAILOR
- ▁SUGGESTED
- ▁PLATE
- ▁AID
- ▁FLESH
- ▁WEEP
- ▁BUCK
- ▁ANTI
- ▁OCEAN
- ▁SPEND
- WELL
- ▁ODD
- ▁GOVERNOR
- ▁ENTRANCE
- ▁SUSPICION
- ▁STEPPED
- ▁RAPIDLY
- ▁CHECK
- ▁HIDE
- ▁FLIGHT
- ▁CLUB
- ▁ENTIRE
- ▁INDIANS
- ASH
- ▁CAPITAL
- ▁MAMMA
- HAR
- ▁CORRECT
- ▁CRACK
- ▁SENSATION
- ▁WORST
- ▁PACE
- ▁MIDST
- ▁AUGUST
- ▁PROPORTION
- ▁INNOCENT
- LINESS
- ▁REGARDED
- ▁DRIVEN
- ORD
- ▁HASTE
- ▁EDUCATION
- ▁EMPLOY
- ▁TRULY
- ▁INSTRUMENT
- ▁MAG
- ▁FRAME
- ▁FOOLISH
- ▁TAUGHT
- ▁HANG
- ▁ARGUMENT
- ▁NINETEEN
- ▁ELDER
- ▁NAY
- ▁NEEDED
- ▁NEIGHBOR
- ▁INSTRUCT
- ▁PAPERS
- ▁REWARD
- ▁EQUALLY
- ▁FIELDS
- ▁DIG
- HIN
- ▁CONDITIONS
- JA
- ▁SPAR
- ▁REQUEST
- ▁WORN
- ▁REMARKABLE
- ▁LOAD
- ▁WORSHIP
- ▁PARK
- ▁KI
- ▁INTERRUPTED
- ▁SKILL
- ▁TERM
- LAC
- ▁CRITIC
- ▁DISTRESS
- ▁BELIEF
- ▁STERN
- IGHT
- ▁TRACK
- ▁HUNTING
- ▁JEWEL
- ▁GRADUALLY
- ▁GLOW
- ▁RUSHED
- ▁MENTAL
- ▁VISITOR
- ▁PICKED
- ▁BEHOLD
- ▁EXPRESSED
- ▁RUB
- ▁SKI
- ARTAGNAN
- ▁MOREOVER
- ▁OPERATION
- ▁CAREFUL
- ▁KEEN
- ▁ASSERT
- ▁WANDER
- ▁ENEMIES
- ▁MYSTERIOUS
- ▁DEPTH
- ▁PREFER
- ▁CROSSED
- ▁CHARMING
- ▁DREAD
- ▁FLOUR
- ▁ROBIN
- ▁TRE
- ▁RELIEF
- ▁INQUIRED
- ▁APPLE
- ▁HENCE
- ▁WINGS
- ▁CHOICE
- ▁JUD
- OO
- ▁SPECIES
- ▁DELIGHTED
- IUM
- ▁RAPID
- ▁APPEAL
- ▁FAMOUS
- ▁USEFUL
- ▁HELEN
- ▁NEWSPAPER
- ▁PLENTY
- ▁BEARING
- ▁NERVOUS
- ▁PARA
- ▁URGE
- ▁ROAR
- ▁WOUNDED
- ▁CHAIN
- ▁PRODUCE
- ▁REFLECTION
- ▁MERCHANT
- ▁QUARREL
- ▁GLORY
- ▁BEGUN
- ▁BARON
- CUS
- ▁QUEER
- ▁MIX
- ▁GAZE
- ▁WHISPER
- ▁BURIED
- ▁DIV
- ▁CARD
- ▁FREQUENTLY
- ▁TIP
- ▁KNEE
- ▁REGION
- ▁ROOT
- ▁LEST
- ▁JEALOUS
- CTOR
- ▁SAVED
- ▁ASKING
- ▁TRIP
- QUA
- ▁UNION
- HY
- ▁COMPANIONS
- ▁SHIPS
- ▁HALE
- ▁APPROACHED
- ▁HARRY
- ▁DRUNK
- ▁ARRIVAL
- ▁SLEPT
- ▁FURNISH
- HEAD
- ▁PIG
- ▁ABSENCE
- ▁PHIL
- ▁HEAP
- ▁SHOES
- ▁CONSCIOUSNESS
- ▁KINDLY
- ▁EVIDENT
- ▁SCAR
- ▁DETERMIN
- ▁GRASP
- ▁STEAL
- ▁OWE
- ▁KNIFE
- ▁PRECIOUS
- ▁ELEMENT
- ▁PROCEEDED
- ▁FEVER
- ▁LEADER
- ▁RISK
- ▁EASE
- ▁GRIM
- ▁MOUNT
- ▁MEANWHILE
- ▁CENTURY
- OON
- ▁JUDGMENT
- ▁AROSE
- ▁VISION
- ▁SPARE
- ▁EXTREME
- ▁CONSTANT
- ▁OBSERVATION
- ▁THRUST
- ▁DELAY
- ▁CENT
- ▁INCLUD
- ▁LIFT
- ▁ADMIRE
- ▁ISSUE
- ▁FRIENDSHIP
- ▁LESSON
- ▁PRINCIPAL
- ▁MOURN
- ▁ACCEPTED
- ▁BURNING
- ▁CAPABLE
- ▁EXTRAORDINARY
- ▁SANG
- ▁REMOVED
- ▁HOPED
- ▁HORN
- ▁ALICE
- ▁MUD
- ▁APARTMENT
- ▁FIGHTING
- ▁BLAME
- ▁TREMBLING
- ▁SOMEBODY
- ▁ANYONE
- ▁BRIDE
- ▁READER
- ▁ROB
- ▁EVERYWHERE
- ▁LABOUR
- ▁RECALL
- ▁BULL
- ▁HIT
- ▁COUNCIL
- ▁POPULAR
- ▁CHAP
- ▁TRIAL
- ▁DUN
- ▁WISHES
- ▁BRILLIANT
- ▁ASSURED
- ▁FORGOT
- ▁CONTINUE
- ▁ACKNOWLEDG
- ▁RETREAT
- ▁INCREASED
- ▁CONTEMPT
- ▁GRANDFATHER
- ▁SYMPATHY
- ▁GHOST
- ▁STRETCHED
- ▁CREATURES
- ▁CAB
- ▁HIND
- ▁PLAYING
- ▁MISERABLE
- ▁MEMBERS
- ▁KINDNESS
- ▁HIGHEST
- ▁PRIM
- ▁KISSED
- ▁DESERVE
- ▁HUT
- ▁BEGGED
- ▁EIGHTY
- ▁CLOSELY
- ▁WONDERED
- ▁MILITARY
- ▁REMIND
- ▁ACCORDINGLY
- ▁LARGER
- ▁MAINTAIN
- ▁ENGINE
- ▁MOTIVE
- ▁DESTROY
- ▁STRIP
- ▁HANS
- ▁AHEAD
- ▁INFINITE
- ▁PROMPT
- ▁INFORMED
- TTLE
- ▁PEER
- ▁PRESSED
- ▁TRAP
- ▁SOMEWHERE
- ▁BOUGHT
- ▁VISIBLE
- ▁ASHAMED
- ▁TEAR
- ▁NEIGHBOUR
- ▁CONSTITUTION
- ▁INTELLIGENCE
- ▁PROFESSION
- ▁HUNGRY
- RIDGE
- ▁SMELL
- ▁STORIES
- ▁LISTENING
- ▁APPROACH
- ▁STRING
- ▁EXPLANATION
- ▁IMMENSE
- ▁RELIGIOUS
- ▁THROUGHOUT
- ▁HOLLOW
- ▁AWAIT
- ▁FLYING
- ▁SCREAM
- ▁ACTIVE
- ▁RUM
- ▁PRODUCT
- ▁UNHAPPY
- ▁VAGUE
- ARIES
- ▁ELIZABETH
- ▁STUPID
- ▁DIGNITY
- ▁ISABEL
- GAR
- ▁BRO
- ▁PITCH
- ▁COMRADE
- ▁STIFF
- ▁RECKON
- ▁SOLD
- ▁SPARK
- ▁STRO
- ▁CRYING
- ▁MAGIC
- ▁REPEAT
- PORT
- ▁MARKED
- ▁COMFORTABLE
- ▁PROJECT
- ▁BECOMING
- ▁PARENTS
- ▁SHELTER
- ▁STOLE
- ▁HINT
- ▁NEST
- ▁TRICK
- ▁THOROUGHLY
- ▁HOSPITAL
- ▁WEAPON
- ▁ROME
- ▁STYLE
- ▁ADMITTED
- ▁SAFETY
- FIELD
- ▁UNDERSTANDING
- ▁TREMBLE
- ▁PRINT
- ▁SLAVES
- ▁WEARY
- ▁ARTIST
- ▁CREDIT
- BURG
- ▁CONCLUSION
- ▁SELDOM
- ▁UNUSUAL
- ▁CLOUDS
- ▁UNABLE
- ▁GAY
- ▁HANGING
- ▁SCR
- ▁BOWED
- ▁DAVID
- ▁VOL
- ▁PUSHED
- ▁ESCAPED
- MOND
- ▁WARN
- ▁BETRAY
- ▁EGGS
- ▁PLAINLY
- ▁EXHIBIT
- ▁DISPLAY
- ▁MEMBER
- ▁GRIN
- ▁PROSPECT
- ▁BRUSH
- ▁BID
- ▁SUCCESSFUL
- ▁EXTENT
- ▁PERSUADE
- ▁MID
- ▁MOOD
- ▁ARRANGED
- ▁UNIVERSAL
- ▁JIM
- ▁SIGNAL
- ▁WHILST
- ▁PHILIP
- ▁WOLF
- RATE
- ▁EAGERLY
- ▁BILLY
- ▁RETURNING
- ▁CONSCIENCE
- ▁FORTUNATE
- ▁FEMALE
- ▁GLEAM
- ▁HASTILY
- ▁PROVIDED
- ▁OBTAIN
- ▁INSTINCT
- ▁CONCERNED
- ▁CONCERNING
- ▁SOMEHOW
- ▁PINK
- ▁RAGE
- ▁ACCUSTOMED
- ▁UNCONSCIOUS
- ▁ADVISE
- ▁BRANCHES
- ▁TINY
- ▁REFUSE
- ▁BISHOP
- ▁SUPPLY
- ▁PEASANT
- ▁LAWYER
- ▁WASTE
- ▁CONNECTION
- ▁DEVELOP
- ▁CORRESPOND
- ▁PLUM
- ▁NODDED
- ▁SLIPPED
- ▁EU
- ▁CONSTANTLY
- CUM
- MMED
- ▁FAIRLY
- HOUSE
- ▁KIT
- ▁RANG
- ▁FEATURES
- ▁PAUSE
- ▁PAINFUL
- ▁JOE
- ▁WHENCE
- ▁LAUGHTER
- ▁COACH
- ▁CHRISTMAS
- ▁EATING
- ▁WHOLLY
- ▁APART
- ▁SUPER
- ▁REVOLUTION
- ▁LONELY
- ▁CHEEKS
- ▁THRONE
- ▁CREW
- ▁ATTAIN
- ▁ESTABLISHED
- TIME
- ▁DASH
- ▁FRIENDLY
- ▁OPERA
- ▁EARL
- ▁EXHAUST
- ▁CLIFF
- ▁REVEAL
- ▁ADOPT
- ▁CENTRE
- ▁MERRY
- ▁SYLVIA
- ▁IDEAL
- ▁MISFORTUNE
- ▁FEAST
- ▁ARAB
- ▁NUT
- ▁FETCH
- ▁FOUGHT
- ▁PILE
- ▁SETTING
- ▁SOURCE
- ▁PERSIST
- ▁MERCY
- ▁BARK
- ▁LUC
- ▁DEEPLY
- ▁COMPARE
- ▁ATTITUDE
- ▁ENDURE
- ▁DELIGHTFUL
- ▁BEARD
- ▁PATIENCE
- ▁LOCAL
- ▁UTTERED
- ▁VICTORY
- ▁TREATED
- ▁SEPARATE
- ▁WAG
- ▁DRAGG
- ▁TITLE
- ▁TROOPS
- ▁TRIUMPH
- ▁REAR
- ▁GAINED
- ▁SINK
- ▁DEFEND
- ▁TIED
- ▁FLED
- ▁DARED
- ▁INCREASE
- ▁POND
- ▁CONQUER
- ▁FOREHEAD
- ▁FAN
- ▁ANXIETY
- ▁ENCOUNTER
- ▁SEX
- ▁HALT
- ▁SANK
- ▁CHEEK
- ▁HUMBLE
- ▁WRITER
- ▁EMPLOYED
- ▁DISTINGUISHED
- ▁RAISE
- ▁WHIP
- ▁GIANT
- ▁RANGE
- ▁OBTAINED
- ▁FLAG
- ▁MAC
- ▁JUMPED
- ▁DISCOVERY
- ▁NATIONAL
- ▁COMMISSION
- ▁POSITIVE
- ▁LOVING
- ▁EXACT
- ▁MURMURED
- ▁GAZED
- ▁REFER
- ▁COLLEGE
- ▁ENCOURAGE
- ▁NOVEL
- ▁CLOCK
- ▁MORTAL
- ▁ROLLED
- ▁RAT
- IZING
- ▁GUILTY
- ▁VICTOR
- WORTH
- ▁PRA
- ▁APPROACHING
- ▁RELATIVE
- ▁ESTATE
- ▁UGLY
- ▁METAL
- ▁ROBERT
- ▁TENT
- ▁ADMIRATION
- ▁FOURTEEN
- ▁BARBAR
- ▁WITCH
- ELLA
- ▁CAKE
- ▁SHONE
- ▁MANAGED
- ▁VOLUME
- ▁GREEK
- ▁DANCING
- ▁WRETCHED
- ▁CONDEMN
- ▁MAGNIFICENT
- ▁CONSULT
- J
- ▁ORGAN
- ▁FLEET
- ▁ARRANGEMENT
- ▁INCIDENT
- ▁MISERY
- ▁ARROW
- ▁STROKE
- ▁ASSIST
- ▁BUILD
- ▁SUCCEED
- ▁DESPERATE
- ▁WIDOW
- UDE
- ▁MARKET
- ▁WISDOM
- ▁PRECISE
- ▁CURRENT
- ▁SPOIL
- ▁BADE
- ▁WOODEN
- ▁RESIST
- ▁OBVIOUS
- ▁SENSIBLE
- FALL
- ▁ADDRESSED
- ▁GIL
- ▁COUNSEL
- ▁PURCHASE
- ▁SELECT
- ▁USELESS
- ▁STARED
- ▁ARREST
- ▁POISON
- ▁FIN
- ▁SWALLOW
- ▁BLOCK
- ▁SLID
- ▁NINETY
- ▁SPORT
- ▁PROVIDE
- ▁ANNA
- ▁LAMB
- ▁INTERVAL
- ▁JUMP
- ▁DESCRIBED
- ▁STRIKING
- ▁PROVISION
- ▁PROPOSED
- ▁MELANCHOLY
- ▁WARRIOR
- ▁SUGGEST
- ▁DEPARTURE
- ▁BURDEN
- ▁LIMB
- ▁TROUBLED
- ▁MEADOW
- ▁SACRED
- ▁SOLID
- ▁TRU
- ▁LUCY
- ▁RECOVER
- ▁ENERGY
- ▁POWDER
- ▁RESUMED
- ▁INTENSE
- ▁BRITISH
- ▁STRAW
- ▁AGREEABLE
- ▁EVERYONE
- ▁CONCERN
- ▁VOYAGE
- ▁SOUTHERN
- ▁BOSOM
- ▁UTTERLY
- ▁FEED
- ▁ESSENTIAL
- ▁CONFINE
- ▁HOUSEHOLD
- ▁EXTREMELY
- ▁WONDERING
- ▁LIST
- ▁PINE
- PHA
- ▁EXPERIMENT
- ▁JOSEPH
- ▁MYSTERY
- ▁RESTORE
- ▁BLUSH
- FOLD
- ▁CHOSEN
- ▁INTELLECT
- ▁CURTAIN
- OLOGY
- ▁MOUNTED
- ▁LAP
- ▁EPI
- ▁PUNISH
- ▁WEDDING
- ▁RECOGNIZED
- ▁DRIFT
- ▁PREPARATION
- ▁RESOLUTION
- ▁OPPRESS
- ▁FIX
- ▁VICTIM
- OGRAPH
- ▁SUMMON
- ▁JULIA
- ▁FLOOD
- ▁WAL
- ULATION
- ▁SLIGHTLY
- ▁LODGE
- ▁WIRE
- ▁CONFUSION
- ▁UNEXPECTED
- ▁CONCEIVE
- ▁PRIZE
- ▁JESUS
- ▁ADDITION
- ▁RUDE
- ▁FATAL
- ▁CARELESS
- ▁PATCH
- ▁KO
- ▁CATHERINE
- ▁PARLIAMENT
- ▁PROFOUND
- ▁ALOUD
- ▁RELIEVE
- ▁PUSH
- ABILITY
- ▁ACCOMPANIED
- ▁SOVEREIGN
- ▁SINGULAR
- ▁ECHO
- ▁COMPOSED
- ▁SHAKING
- ATORY
- ▁ASSISTANCE
- ▁TEACHER
- ▁HORRIBLE
- ▁STRICT
- ▁VERSE
- ▁PUNISHMENT
- ▁GOWN
- ▁MISTAKEN
- ▁VARI
- ▁SWEPT
- ▁GESTURE
- ▁BUSH
- ▁STEEL
- ▁AFFECTED
- ▁DIRECTED
- ▁SURROUNDED
- ▁ABSURD
- ▁SUGAR
- ▁SCRAP
- ▁IMMEDIATE
- ▁SADDLE
- ▁TY
- ▁ARISE
- ▁SIGHED
- ▁EXCHANGE
- ▁IMPATIENT
- ▁SNAP
- ▁EMBRACE
- ▁DISEASE
- ▁PROFIT
- ▁RIDING
- ▁RECOVERED
- ▁GOVERN
- ▁STRETCH
- ▁CONVINCED
- ▁LEANING
- ▁DOMESTIC
- ▁COMPLEX
- ▁MANIFEST
- ▁INDULGE
- ▁GENIUS
- ▁AGENT
- ▁VEIL
- ▁DESCRIPTION
- ▁INCLINED
- ▁DECEIVE
- ▁DARLING
- ▁REIGN
- HU
- ▁ENORMOUS
- ▁RESTRAIN
- ▁DUTIES
- BURY
- TTERED
- ▁POLE
- ▁ENABLE
- ▁EXCEPTION
- ▁INTIMATE
- ▁COUNTESS
- ▁TRIBE
- ▁HANDKERCHIEF
- ▁MIDNIGHT
- ▁PROBLEM
- ▁TRAMP
- ▁OIL
- CAST
- ▁CRUSH
- ▁DISCUSS
- ▁RAM
- ▁TROT
- ▁UNRE
- ▁WHIRL
- ▁LOCKED
- ▁HORIZON
- ▁OFFICIAL
- ▁SCHEME
- ▁DROWN
- ▁PIERRE
- ▁PERMITTED
- ▁CONNECTED
- ▁ASSURE
- ▁COCK
- ▁UTMOST
- ▁DEVOTED
- ▁RELI
- ▁SUFFICIENTLY
- ▁INTELLECTUAL
- ▁CARPET
- ▁OBJECTION
- ▁AFTERWARD
- ▁REALITY
- ▁NEGRO
- ▁RETAIN
- ▁ASCEND
- ▁CEASE
- ▁KATE
- ▁MARVEL
- KO
- ▁BOND
- MOST
- ▁COAL
- GATE
- ▁IGNORANT
- ▁BREAKING
- ▁TWIN
- ▁ASTONISHMENT
- ▁COFFEE
- ▁JAR
- ▁CITIES
- ▁ORIGIN
- ▁EXECUT
- ▁FINAL
- ▁INHABITANTS
- ▁STABLE
- ▁CHIN
- ▁PARTIES
- ▁PLUNGE
- ▁GENEROUS
- ▁DESCRIBE
- ▁ANNOUNCED
- ▁MERIT
- ▁REVERE
- ▁ERE
- ACIOUS
- ZI
- ▁DISAPPOINT
- ▁SUGGESTION
- ▁DOUBTLESS
- ▁TRUNK
- ▁STAMP
- ▁JOB
- ▁APPOINTED
- ▁DIVIDED
- ▁ACQUAINTED
- CHI
- ▁ABSOLUTE
- ▁FEARFUL
- ▁PRIVILEGE
- ▁CRAFT
- ▁STEEP
- ▁HUNTER
- ▁FORBID
- ▁MODEST
- ▁ENDEAVOUR
- ▁SWEEP
- ▁BEHELD
- ▁ABSORB
- ▁CONSTRUCT
- ▁EMPIRE
- ▁EXPEDITION
- ▁ERECT
- ▁OFFEND
- ▁INTEND
- ▁PERMIT
- ▁DESTROYED
- ▁CONTRACT
- ▁THIRST
- ▁WAGON
- ▁EVA
- ▁GLOOM
- ▁ATMOSPHERE
- ▁RESERVE
- ▁VOTE
- ▁GER
- ▁NONSENSE
- ▁PREVAIL
- ▁QUALITY
- ▁CLASP
- ▁CONCLUDED
- ▁RAP
- ▁KATY
- ▁ETERNAL
- ▁MUTTERED
- ▁NEGLECT
- ▁SQUIRE
- ▁CREEP
- LOCK
- ▁ELECTRIC
- ▁HAY
- ▁EXPENSE
- ▁SCORN
- ▁RETIRED
- ▁STOUT
- ▁MURMUR
- ▁SHARPLY
- ▁DISTRICT
- ▁LEAF
- ▁FAILURE
- WICK
- ▁JEAN
- ▁NUMEROUS
- ▁INFANT
- ▁REALIZED
- ▁TRAVELLER
- ▁HUNGER
- ▁JUNE
- ▁MUN
- ▁RECOMMEND
- ▁CREP
- ZZLE
- ▁RICHARD
- WORK
- ▁MONTE
- ▁PREACH
- ▁PALM
- AVI
- ▁ANYWHERE
- ▁DISPOSITION
- ▁MIRROR
- ▁VENTURE
- ▁POUND
- ▁CIGAR
- ▁INVITED
- ▁BENCH
- ▁PROTECTION
- ▁BENEFIT
- ▁THOMAS
- ▁CLERK
- ▁REPROACH
- ▁UNIFORM
- ▁GENERATION
- ▁SEAL
- ▁COMPASS
- ▁WARNING
- ▁EXTENDED
- ▁DIFFICULTIES
- ▁MAYBE
- ▁GROAN
- ▁AFFECT
- ▁COMB
- ▁EARN
- ▁WESTERN
- ▁IDLE
- ▁SCORE
- ▁TAP
- ▁ASTONISHED
- ▁INTRODUCED
- ▁LEISURE
- ▁LIEUTENANT
- ▁VIOLENCE
- ▁FIRMLY
- ▁MONSTER
- ▁UR
- ▁PROPERLY
- ▁TWIST
- ▁PIRATE
- ▁ROBBER
- ▁BATTER
- ▁WEPT
- ▁LEANED
- ▁FOG
- ▁ORNAMENT
- ▁ANDREW
- ▁BUSHES
- ▁REPUBLIC
- ▁CONFIDENT
- ▁LEAN
- ▁DART
- ▁STOOP
- ▁CURL
- ▁COUNTER
- ▁NORTHERN
- ▁PEARL
- ▁NEAREST
- ▁FRANCIS
- ▁WANDERING
- ▁FREQUENT
- ▁STARTLED
- ▁STATEMENT
- ▁OCCUR
- ▁BLOOM
- ▁NERVE
- ▁INSPECT
- ▁INDUCE
- ▁FLATTER
- ▁DATE
- ▁AMBITION
- ▁SLOPE
- ▁MALE
- ▁MADAM
- ▁MONK
- ▁RENT
- ▁CONFIRM
- ▁INVESTIGAT
- ▁RABBIT
- ▁REGIMENT
- ▁SUBMIT
- ▁SPELL
- ▁FURIOUS
- ▁RAIL
- ▁BESTOW
- ▁RALPH
- ▁SCATTERED
- ▁COMPELLED
- ▁THREAD
- ▁CHILL
- ▁DENY
- ▁PRONOUNC
- ▁MANKIND
- ▁CATTLE
- ▁EXECUTION
- ▁REBEL
- ▁SUPREME
- ▁VALUABLE
- ▁LIKEWISE
- ▁CONVEY
- ▁TIDE
- ▁GLOOMY
- ▁COIN
- ▁ACTUAL
- ▁TAX
- ▁PROVINCE
- ▁GRATEFUL
- ▁SPIRITUAL
- ▁VANISHED
- ▁DIANA
- ▁HAUNT
- ▁DRAGON
- ▁CRAWL
- ▁CHINA
- ▁GRATITUDE
- ▁NEAT
- ▁FINISH
- ▁INTENT
- ▁FRIGHT
- ▁EMBARRASS
- ▁THIRTEEN
- ▁RUTH
- ▁SLIGHTEST
- ▁DEVELOPMENT
- ▁INTERVIEW
- ▁SPECTACLE
- ▁BROOK
- VIE
- ▁WEAKNESS
- ▁AUDIENCE
- ▁CONSEQUENTLY
- ▁ABROAD
- ▁ASPECT
- ▁PAINTED
- ▁RELEASE
- ▁INSULT
- ▁SOOTH
- ▁DISAPPOINTMENT
- ▁EMERG
- ▁BRIG
- ▁ESTEEM
- ▁INVITATION
- ▁PASSENGER
- ▁PUBLISH
- ▁PIANO
- ▁IRISH
- ▁DESK
- ▁BEATEN
- ▁FIFTH
- ▁IMPULSE
- ▁SWEAR
- ▁EATEN
- ▁PURPLE
- ▁COMMITTED
- ▁COUNTRIES
- ▁PERCEIVE
- ISON
- ▁CELEBRAT
- ▁GRANDMOTHER
- ▁SHUDDER
- ▁SUNSHINE
- ▁SPANISH
- ▁HITHERTO
- ▁MARILLA
- ▁SNAKE
- ▁MOCK
- ▁INTERFERE
- ▁WALTER
- ▁AMID
- ▁MARBLE
- ▁MISSION
- TERIOR
- ▁DRIVING
- ▁FURNITURE
- ▁STEADY
- ▁CIRCUMSTANCE
- ▁INTERPRET
- ▁ENCHANT
- ▁ERROR
- ▁CONVICTION
- ▁HELPLESS
- ▁MEDICINE
- ▁QUALITIES
- ▁ITALIAN
- ▁HASTENED
- ▁OCCASIONALLY
- ▁PURSUED
- ▁HESITATED
- ▁INDEPENDENT
- ▁OLIVER
- ▁LINGER
- UX
- ▁EXAMINED
- ▁REPENT
- ▁PHYSICIAN
- ▁CHASE
- ▁BELOVED
- ▁ATTACHED
- ▁FLORENCE
- ▁HONEY
- ▁MOUSE
- ▁CRIES
- ▁BAKE
- ▁POEM
- ▁DESTRUCTION
- ▁FULFIL
- ▁MESSENGER
- ▁TRISTRAM
- ▁FANCIED
- ▁EXCESS
- ▁CURSE
- ▁CHU
- ▁QUANTITY
- ▁THORNTON
- ▁CREATED
- ▁CONTINUALLY
- ▁LIGHTNING
- ▁BORNE
- ▁TOTAL
- ▁DISPOSED
- ▁RIFLE
- ▁POLLY
- ▁GOAT
- ▁BACKWARD
- ▁VIRGINIA
- ▁KICK
- ▁PERIL
- ▁QUO
- ▁GLORIOUS
- ▁MULTITUDE
- ▁LEATHER
- ▁ABSENT
- ▁DEMON
- ▁DEBT
- ▁TORTURE
- ▁ACCORD
- ▁MATE
- ▁CATHOLIC
- ▁PILL
- ▁LIBRARY
- ▁PURSUIT
- ▁SHIRT
- ▁DEAREST
- ▁COLLAR
- ▁BEACH
- ▁ROBE
- ▁DECLARE
- ▁BRANCH
- ▁TEMPT
- ▁STEADILY
- ▁DISGUST
- ▁SILLY
- ▁ARRIVE
- ▁DRANK
- ▁LEVI
- ▁COMMUNICAT
- ▁RACHEL
- ▁WASHINGTON
- ▁RESIGN
- ▁MEANTIME
- ▁LACE
- ▁ENGAGEMENT
- ▁QUIVER
- ▁SEPARATED
- ▁DISCUSSION
- ▁VENTURED
- ▁SURROUNDING
- ▁POLISH
- ▁NAIL
- ▁SWELL
- ▁JOKE
- ▁LINCOLN
- ▁STUDENT
- ▁GLITTER
- ▁RUSSIAN
- ▁READILY
- ▁CHRIS
- ▁POVERTY
- ▁DISGRACE
- ▁CHEESE
- ▁HEAVILY
- ▁SCALE
- ▁STAFF
- ▁ENTREAT
- ▁FAREWELL
- ▁LUNCH
- ▁PEEP
- ▁MULE
- ▁SOMEONE
- ▁DISAPPEAR
- ▁DECISION
- ▁PISTOL
- ▁PUN
- ▁SPUR
- ▁ASSUMED
- ▁EXTEND
- ▁ENTHUSIASM
- ▁DEFINITE
- ▁UNDERTAKE
- ▁COMMITTEE
- ▁SIMON
- ▁FENCE
- ▁APPLIED
- ▁RELATED
- ▁VICE
- ▁UNPLEASANT
- ▁PROBABLE
- ▁PROCURE
- ▁FROWN
- ▁CLOAK
- ▁HUMANITY
- ▁FAMILIES
- ▁PHILOSOPHER
- ▁DWARF
- ▁OVERCOME
- ▁DEFEAT
- ▁FASTENED
- ▁MARSH
- ▁CLASSES
- ▁TOMB
- ▁GRACIOUS
- ▁REMOTE
- ▁CELL
- ▁SHRIEK
- ▁RESCUE
- ▁POOL
- ▁ORGANIZ
- ▁CHOSE
- ▁CUTTING
- ▁COWARD
- ▁BORDER
- ▁DIRTY
- ▁MONKEY
- ▁HOOK
- ▁CHUCK
- ▁EMILY
- ▁JEST
- ▁PLAC
- ▁WEIGH
- ▁ASSOCIATE
- ▁GLIMPSE
- ▁STUCK
- ▁BOLT
- ▁MURDERER
- ▁PONY
- ▁DISTINGUISH
- ▁INSTITUTION
- ▁CUNNING
- ▁COMPLIMENT
- ▁APPETITE
- ▁REPUTATION
- ▁FEEBLE
- ▁KIN
- ▁SERIES
- ▁GRACEFUL
- ▁PLATFORM
- ▁BREEZE
- ▁PHRASE
- ▁CLAY
- MONT
- ▁RATTL
- ▁OPPOSITION
- ▁LANE
- ▁BOAST
- ▁GROWTH
- ▁INCLINATION
- ▁BEHAVE
- ▁SUSAN
- ▁DISTINCTION
- ▁DISLIKE
- ▁NICHOLAS
- ▁SATISFY
- ▁DRAMA
- ▁ELBOW
- ▁GAZING
- ▁CONSUM
- ▁SPIN
- ▁OATH
- ▁CHANNEL
- ▁CHARACTERISTIC
- ▁SPEAR
- ▁SLAIN
- ▁SAUCE
- ▁FROG
- ▁CONCEPTION
- ▁TIMID
- ▁ZEAL
- ▁APPARENT
- SHIRE
- ▁CENTER
- ▁VARIETY
- ▁DUSK
- ▁APT
- ▁COLUMN
- ▁REVENGE
- ▁RIVAL
- ▁IMITAT
- ▁PASSIONATE
- ▁SELFISH
- ▁NORMAN
- ▁REPAIR
- ▁THRILL
- ▁TREATMENT
- ▁ROSA
- ▁MARTIN
- ▁INDIFFERENT
- ▁THITHER
- ▁GALLANT
- ▁PEPPER
- ▁RECOLLECT
- ▁VINE
- ▁SCARCE
- ▁SHIELD
- ▁MINGLED
- CLOSE
- ▁HARSH
- ▁BRICK
- ▁HUMOR
- ▁MISCHIEF
- ▁TREMENDOUS
- ▁FUNCTION
- ▁SMART
- ▁SULTAN
- ▁DISMISS
- ▁THREATENED
- ▁CHEAP
- ▁FLOCK
- ▁ENDEAVOR
- ▁WHISK
- ▁ITALY
- ▁WAIST
- ▁FLUTTER
- ▁SMOKING
- ▁MONARCH
- ▁AFRICA
- ▁ACCUSE
- ▁HERBERT
- ▁REFRESH
- ▁REJOICE
- ▁PILLOW
- ▁EXPECTATION
- ▁POETRY
- ▁HOPELESS
- ▁PERISH
- ▁PHILOSOPHY
- ▁WHISTLE
- ▁BERNARD
- ▁LAMENT
- ▁IMPROVE
- ▁SUP
- ▁PERPLEX
- ▁FOUNTAIN
- ▁LEAGUE
- ▁DESPISE
- ▁IGNORANCE
- ▁REFERENCE
- ▁DUCK
- ▁GROVE
- ▁PURSE
- ▁PARTNER
- ▁PROPHET
- ▁SHIVER
- ▁NEIGHBOURHOOD
- ▁REPRESENTATIVE
- SAIL
- ▁WIP
- ▁ACQUIRED
- ▁CHIMNEY
- ▁DOCTRINE
- ▁MAXIM
- ▁ANGLE
- ▁MAJORITY
- ▁AUTUMN
- ▁CONFUSED
- ▁CRISTO
- ▁ACHIEVE
- ▁DISGUISE
- ▁REDUCED
- ▁EARLIER
- ▁THEATRE
- ▁DECIDE
- MINATED
- OLOGICAL
- ▁OCCUPATION
- ▁VIGOROUS
- ▁CONTINENT
- ▁DECLINE
- ▁COMMUNITY
- ▁MOTIONLESS
- ▁HATRED
- ▁COMMUNICATION
- ▁BOWL
- ▁COMMENT
- ▁APPROVE
- ▁CEREMONY
- ▁CRIMINAL
- ▁SCIENTIFIC
- ▁DUCHESS
- ▁VIVID
- ▁SHIFT
- ▁AVAIL
- ▁DAMP
- ▁JOHNSON
- ▁SLENDER
- ▁CONTRAST
- ▁AMUSEMENT
- ▁PLOT
- ▁LYN
- ▁ASSOCIATION
- ▁SNATCH
- ▁UNCERTAIN
- ▁PRESSURE
- ▁PERCH
- ▁APPLY
- ▁PLANET
- ▁NOTWITHSTANDING
- ▁SWUNG
- ▁STIRRED
- ▁ATTENDANT
- ▁ENJOYMENT
- ▁WORRY
- ▁ALBERT
- ▁NAKED
- ▁TALENT
- ▁MARIAN
- ▁REFORM
- ▁DELIBERATE
- ▁INTELLIGENT
- ▁SENSITIVE
- ▁YONDER
- ▁PUPIL
- ▁FRIGHTFUL
- ▁DOUBTFUL
- ▁STANDARD
- ▁MAGISTRATE
- ▁SHEPHERD
- ▁STOMACH
- ▁DEPOSIT
- ▁RENEW
- ▁HEDGE
- ▁FRANCS
- ▁POSSIBILITY
- ▁RESEMBLE
- ▁FATIGUE
- ▁PORTRAIT
- ▁FAVORITE
- ▁CREAM
- ▁BURG
- ▁SECRETARY
- ▁DIVERS
- ▁ACTIVITY
- ▁SPECULAT
- ▁HUMOUR
- ▁FITTED
- ▁EXTERNAL
- ▁CETERA
- ▁WRAPPED
- ▁WHIT
- ▁FRED
- ▁EXAMINATION
- ▁LODGING
- ▁OWING
- ▁JAW
- ▁CROW
- ▁BALANCE
- ▁PUFF
- ▁TENDERNESS
- ▁PORTHOS
- ▁ANCHOR
- ▁INTERRUPT
- ▁NECESSARILY
- ▁PERPETUAL
- ▁AGONY
- ▁POPE
- ▁SCHOLAR
- ▁SCOTLAND
- ▁SUPPRESS
- ▁WRATH
- ▁WRECK
- ▁EXCEED
- ▁PERFECTION
- ▁INDIA
- ▁TRADITION
- ▁SECTION
- ▁EASTERN
- ▁DOORWAY
- ▁WIVES
- ▁CONVENTION
- ▁ANNOUNC
- ▁EGYPT
- ▁CONTRADICT
- ▁SCRATCH
- ▁CENTRAL
- ▁GLOVE
- ▁WAX
- ▁PREPARE
- ▁ACCOMPANY
- ▁INCREASING
- ▁LIBERAL
- ▁RAISING
- ▁ORANGE
- ▁SHOE
- ▁ATTRIBUTE
- ▁LITERATURE
- ▁PUZZLED
- ▁WITHDRAW
- ▁WHITHER
- ▁HAWK
- ▁MOONLIGHT
- ▁EXAMINE
- ▁HAPPILY
- ▁PRECEDE
- ▁DETECTIVE
- ▁INCHES
- ▁SOLITARY
- ▁DUTCH
- ▁NAPOLEON
- ▁UNEASY
- ▁CARDINAL
- ▁BLEW
- ▁FOWL
- ▁DECORAT
- ▁CHILDHOOD
- ▁TORMENT
- ▁LOSING
- ▁PERMISSION
- ▁BLANK
- ▁UPSTAIRS
- ▁CAPACITY
- ▁TRIFLE
- ▁FOLLY
- ▁RECOGNIZE
- ▁REMOVE
- ▁VENGEANCE
- ▁ENTERPRISE
- ▁BEDROOM
- ▁ANYHOW
- ▁INQUIRY
- ▁ASHES
- ▁DRAG
- ▁HUSH
- ▁AWKWARD
- ▁SATURDAY
- ▁GENUINE
- ▁SURVIV
- ▁SKIRT
- ▁AFFECTIONATE
- ▁TANG
- ▁MUTUAL
- ▁DISPUTE
- ▁EAGLE
- ▁INCOME
- ▁BIND
- ▁FAME
- ▁IMPROVEMENT
- ROVING
- ▁DIFFER
- ▁AWOKE
- ▁SLEEVE
- ▁SOLITUDE
- ▁FAVOURITE
- JI
- ▁DETECT
- ▁COMPREHEND
- ▁PREPARING
- ▁SERPENT
- ▁SUMMIT
- ▁KNOT
- ▁KNIT
- ▁COPY
- ▁STOPPING
- ▁FADED
- ▁HIDEOUS
- ▁JULIE
- STEAD
- ▁SHINE
- ▁CONFLICT
- ▁PROPOSITION
- ▁REFUGE
- ▁GALLERY
- ▁BUNDLE
- ▁AXE
- ▁SLAVERY
- ▁MASK
- ▁ALYOSHA
- ▁LADDER
- ▁DEPARTMENT
- ▁DISCHARGE
- ▁DEPRESS
- ▁GALLOP
- ▁SCARLET
- ▁KITTY
- ▁RECEIVING
- ▁SURRENDER
- ▁SUSTAIN
- ▁TWILIGHT
- ▁CONGRESS
- ▁IRELAND
- ▁FUNNY
- ▁LEND
- ▁CONSTITUTE
- ▁FUNERAL
- ▁CRYSTAL
- ▁SPAIN
- ▁EXCEEDINGLY
- ▁DAMN
- ▁COMMUN
- ▁CIVILIZATION
- ▁PREJUDICE
- ▁PORCH
- ▁ASSISTANT
- ▁INDUSTRY
- ▁TUMBLE
- ▁DEFENCE
- ▁HITHER
- ▁SMOT
- ▁COLONI
- ▁AMAZEMENT
- ▁MARGUERITE
- ▁MIRACLE
- ▁INHERIT
- ▁BEGGAR
- ▁ENVELOPE
- ▁INDIGNATION
- ▁NATASHA
- ▁PROPOSAL
- ▁FRAGMENT
- ▁ROUSED
- ▁ROAST
- ENCIES
- ▁COMMENCED
- ▁RESOURCE
- ▁POPULATION
- ▁QUOTH
- ▁PURSUE
- ▁EDUCAT
- ▁AFFLICT
- ▁CONTACT
- ▁CRIMSON
- ▁DIVISION
- ▁DISORDER
- ▁COPPER
- ▁SOLICIT
- ▁MODERATE
- ▁DRUM
- ▁SWIM
- ▁SALUTE
- ▁ASSUME
- ▁MUSCLE
- ▁OVERWHELM
- ▁SHAKESPEARE
- ▁STRUGGLING
- ▁TRANQUIL
- ▁CHICKEN
- ▁TREAD
- ▁CLAW
- ▁BIBLE
- ▁RIDGE
- ▁THREAT
- ▁VELVET
- ▁EXPOSED
- ▁IDIOT
- ▁BARREL
- ▁PENNY
- ▁TEMPTATION
- ▁DANGLARS
- ▁CENTURIES
- ▁DISTRIBUT
- ▁REJECT
- ▁RETORTED
- ▁CONCENTRAT
- ▁CORDIAL
- ▁MOTOR
- ▁CANNON
- KEEP
- ▁WRETCH
- ▁ASSURANCE
- ▁THIEF
- ▁SURVEY
- ▁VITAL
- ▁RAILWAY
- ▁JACKSON
- ▁CRASH
- ▁GROWL
- ▁COMBAT
- ▁RECOLLECTION
- ▁SECURITY
- ▁JACOB
- ▁CLUTCH
- ▁BLANKET
- ▁NANCY
- ▁CELLAR
- ▁CONVENIENT
- ▁INDIGNANT
- ▁COARSE
- ▁WORM
- ▁SCREEN
- ▁TRANSPORT
- ▁BULLET
- ▁APPRECIATE
- ▁DEVOTION
- ▁INVISIBLE
- ▁DRIED
- ▁MIXTURE
- ▁CANDID
- ▁PERFORMANCE
- ▁RIPE
- ▁EXQUISITE
- ▁BARGAIN
- ▁TOBACCO
- ▁LOYAL
- ▁MOULD
- ▁ATTENTIVE
- ▁DOROTHY
- ▁BRUTE
- ▁ESTABLISHMENT
- ▁ABILITY
- ▁INHABIT
- ▁OBSCURE
- ▁BORROW
- ▁ESSENCE
- ▁DISMAY
- ▁FLEE
- ▁BLADE
- ▁PLUCK
- ▁COFFIN
- ▁SUNSET
- ▁STEPHEN
- ▁ECONOMIC
- ▁HOLIDAY
- ▁MECHANICAL
- ▁COTTON
- ▁AWAKENED
- ▁SEIZE
- ▁RIDICULOUS
- ▁SANCHO
- ▁HESITATION
- ▁CORPSE
- ▁SAVING
- HOLD
- FOOT
- ▁ELDEST
- ▁DESPITE
- ▁EDITH
- ▁CHERISH
- ▁RESISTANCE
- ▁WILSON
- ▁ARGUE
- ▁INQUIRE
- ▁APPREHENSION
- ▁AVENUE
- ▁DRAKE
- ▁PROPOSE
- HURST
- ▁INFERIOR
- ▁STAIRCASE
- ▁WHEREFORE
- ▁CARLYLE
- ▁COUCH
- ▁ROUTE
- ▁POLITICS
- ▁TOMORROW
- ▁THRONG
- ▁NAUGHT
- ▁SUNLIGHT
- ▁INDIFFERENCE
- ▁OBEDIENCE
- ▁RECEPTION
- ▁VEGETABLE
- ▁IMPERFECT
- ▁RESIDENCE
- ▁TURKEY
- ▁VIOLET
- ▁SARAH
- ▁ALTAR
- ▁GRIEVE
- ▁JERK
- ▁ENSU
- ▁MAGICIAN
- ▁BLOSSOM
- ▁LANTERN
- ▁RESOLUTE
- ▁THOUGHTFULLY
- ▁FORTNIGHT
- ▁TRUMPET
- ▁VALJEAN
- ▁UNWILLING
- ▁LECTURE
- ▁WHEREUPON
- ▁HOLLAND
- ▁CHANGING
- ▁CREEK
- ▁SLICE
- ▁NORMAL
- ▁ANNIE
- ▁ACCENT
- ▁FREDERICK
- ▁DISAGREEABLE
- ▁RUBBED
- ▁DUMB
- ▁ESTABLISH
- ▁IMPORT
- ▁AFFIRM
- ▁MATTHEW
- ▁BRISK
- ▁CONVERT
- ▁BENDING
- ▁IVAN
- ▁MADEMOISELLE
- ▁MICHAEL
- ▁EASIER
- ▁JONES
- ▁FACING
- ▁EXCELLENCY
- ▁LITERARY
- ▁GOSSIP
- ▁DEVOUR
- ▁STAGGER
- ▁PENCIL
- ▁AVERAGE
- ▁HAMMER
- ▁TRIUMPHANT
- ▁PREFERRED
- ▁APPLICATION
- ▁OCCUPY
- ▁AUTHORITIES
- BURN
- ▁ASCERTAIN
- ▁CORRIDOR
- ▁DELICIOUS
- ▁PRACTISE
- ▁UNIVERSE
- ▁SHILLING
- ▁CONTEST
- ▁ASHORE
- ▁COMMIT
- ▁ADMINISTRATION
- ▁STUDIED
- ▁RIGID
- ▁ADORN
- ▁ELSEWHERE
- ▁INNOCENCE
- ▁JOURNAL
- ▁LANDSCAPE
- ▁TELEGRAPH
- ▁ANGRILY
- ▁CAMPAIGN
- ▁UNJUST
- ▁CHALLENGE
- ▁TORRENT
- ▁RELATE
- ▁ASSEMBLED
- ▁IMPRESSED
- ▁CANOE
- ▁CONCLUD
- ▁QUIXOTE
- ▁SATISFACTORY
- ▁NIECE
- ▁DEAF
- ▁RAFT
- ▁JIMMY
- ▁GLID
- ▁REGULAT
- ▁CHATTER
- ▁GLACIER
- ▁ENVY
- ▁STATUE
- ▁BOSTON
- ▁RICHMOND
- ▁DENIED
- ▁FANNY
- ▁SOLOMON
- ▁VULGAR
- ▁STALK
- ▁REPLACE
- ▁SPOON
- ▁BASIN
- ▁FEATURE
- ▁CONVICT
- ▁ARCHITECT
- ▁ADMIRAL
- ▁RIBBON
- ▁PERMANENT
- ▁APRIL
- ▁JOLLY
- ▁NEIGHBORHOOD
- ▁IMPART
- BOROUGH
- CAMP
- ▁HORRID
- ▁IMMORTAL
- ▁PRUDENCE
- ▁SPANIARD
- ▁SUPPOSING
- ▁TELEPHONE
- ▁TEMPERATURE
- ▁PENETRATE
- ▁OYSTER
- ▁APPOINTMENT
- ▁EGYPTIAN
- ▁DWELT
- ▁NEPHEW
- ▁RAILROAD
- ▁SEPTEMBER
- ▁DEVICE
- ▁WHEAT
- ▁GILBERT
- ▁ELEGANT
- ▁ADVERTISE
- ▁RATIONAL
- ▁TURTLE
- ▁BROOD
- ▁ASSEMBLY
- ▁CULTIVATE
- ▁EDITOR
- ▁SPECIMEN
- ▁UNDOUBTEDLY
- ▁WHALE
- ▁DROPPING
- ▁BALLOON
- ▁MEDICAL
- COMB
- ▁COMPOSITION
- ▁FOOTSTEPS
- ▁LAUNCELOT
- ▁DISCOURSE
- ▁ERRAND
- ▁CONVERSE
- ▁ADVANCING
- ▁DOWNSTAIRS
- ▁TUMULT
- ▁CORRUPT
- ▁SUFFICE
- ▁ANGUISH
- ▁SHAGGY
- ▁RETIRE
- ▁TIMBER
- ▁BLAZE
- ▁ABSTRACT
- ▁EMBROIDER
- ▁PHOTOGRAPH
- ▁PROSPERITY
- ▁TERRIBLY
- ▁TERRITORY
- ▁THRESHOLD
- ▁PAVEMENT
- ▁INJURED
- ▁LIMP
- ▁AGITATION
- ▁RASCAL
- ▁PRESUME
- ▁OBSERVING
- ▁OBSTACLE
- ▁SIMPLICITY
- ▁SLUMBER
- ▁SUPPLIED
- ▁COMBINATION
- ▁DRAIN
- ▁WILDERNESS
- ▁BELIEVING
- ▁VILLAIN
- ▁RECKLESS
- ▁INJURY
- ▁CLAPP
- ▁FRIDAY
- ▁HERCULES
- ▁KENNEDY
- ▁SYMPTOM
- ▁SLEDGE
- ▁CEILING
- ▁LEMON
- ▁PLAGUE
- ▁MONDAY
- ▁CANVAS
- ▁IMPATIENCE
- ▁UNCOMFORTABLE
- ▁ACCESS
- ▁FROZEN
- ▁SENATOR
- ▁FRANZ
- ▁SWIMMING
- ▁BARRIER
- ▁ADJUST
- ▁COMPARISON
- ▁PROCLAIM
- ▁WRINKL
- ▁OVERLOOK
- ▁MITYA
- ▁GUILT
- ▁PERCEPTION
- ▁PRECAUTION
- ▁SPECTATOR
- ▁SURPRISING
- ▁DISTRACT
- ▁DISDAIN
- ▁BONNET
- ▁MAGNET
- ▁PROFESS
- ▁CONFOUND
- ▁NARRATIVE
- ▁STRUCTURE
- ▁SKETCH
- ▁ULTIMATE
- ▁GLOBE
- ▁INSECT
- FICIENCY
- ▁ORCHARD
- ▁AMIABLE
- ▁DESCENT
- ▁INDEPENDENCE
- ▁MANUFACTURE
- ▁SPRINKLE
- ▁NIGHTINGALE
- ▁CUSHION
- ▁EMINENT
- ▁SCOTT
- ▁ARRAY
- ▁COSETTE
- ▁WAVING
- ▁EXTRACT
- ▁IRREGULAR
- ▁PERSECUT
- ▁DERIVED
- ▁WITHDREW
- ▁CAUTION
- ▁SUSPICIOUS
- ▁MEMORIES
- ▁NOWHERE
- ▁SUBTLE
- ▁THOROUGH
- Q
- ▁APPROPRIATE
- ▁SLAUGHTER
- ▁YOURSELVES
- ▁THUMB
- ▁TWAS
- ▁ABODE
- ▁BIDDING
- ▁CONSPICUOUS
- ▁REBECCA
- ▁SERGEANT
- ▁APRON
- ▁ANTICIPATE
- ▁DISCIPLINE
- ▁GLANCING
- ▁PILGRIM
- ▁SULLEN
- ▁CONTRIBUTE
- ▁PRAIRIE
- ▁CARVED
- ▁COMMERCE
- ▁EXCLAMATION
- ▁MUSCULAR
- ▁NOVEMBER
- ▁PHENOMENA
- ▁SYMBOL
- ▁UMBRELLA
- ▁DIMINISH
- ▁PARLOUR
- ▁THREATENING
- ▁STUMP
- ▁EXTENSIVE
- ▁PLEASING
- ▁REMEMBRANCE
- ▁COMBINED
- ▁SHERIFF
- ▁SHAFT
- ▁LAURA
- ▁INTERCOURSE
- ▁STRICKEN
- ▁SUPPLIES
- ▁LANDLORD
- ▁SHRINK
- ▁PRICK
- ▁CAESAR
- ▁DRUG
- ▁BEWILDERED
- ▁NAUTILUS
- ▁BRUTAL
- ▁COMMERCIAL
- ▁MAGGIE
- ▁SPHERE
- ▁VIRGIN
- ▁BRETHREN
- ▁DESTINY
- ▁POLICY
- ▁TERRIFIED
- ▁HOUSEKEEPER
- ▁CRAZY
- ▁ARDENT
- ▁DISCERN
- ▁WRAP
- ▁MARQUIS
- ▁RUSSIA
- MOUTH
- ▁BRITAIN
- ▁HARBOUR
- ▁CONCERT
- ▁DONKEY
- ▁DAMAGE
- ▁SLIM
- ABOUT
- ▁LUXURY
- ▁MONSTROUS
- ▁TENDENCY
- ▁PARADISE
- ▁CULTURE
- ▁JULIUS
- ▁RAOUL
- ▁REMEDY
- ▁DECAY
- ▁SCOLD
- ▁SPLIT
- ▁ASSAULT
- ▁DECEMBER
- ▁MOSCOW
- ▁EXPLORE
- ▁TROUSERS
- ▁WRIST
- PIECE
- ▁MUSKET
- ▁VALENTINE
- ▁TYRANT
- ▁ABRAHAM
- ▁MEDIUM
- ▁ARTIFICIAL
- ▁FACULTY
- ▁OBLIGATION
- ▁RESEMBLANCE
- ▁INQUIRIES
- ▁DETAIN
- ▁SWARM
- ▁PLEDGE
- ▁ADMIRABLE
- ▁DEFECT
- ▁SUPERINTEND
- ▁PATRIOT
- ▁CLUNG
- ▁DISMAL
- ▁RECIT
- ▁IGNOR
- ▁AMELIA
- ▁JUSTIFY
- ▁ELEPHANT
- ▁ESTIMATE
- ▁KNELT
- ▁SERVING
- ▁WHIM
- ▁SHRILL
- ▁STUDIO
- ▁TEXT
- ▁ALEXANDER
- ▁WROUGHT
- ▁ABUNDANT
- ▁SITUATED
- ▁REGAIN
- ▁FIERY
- ▁SNEER
- ▁SWEAT
- ▁GLARE
- ▁NIGH
- ▁ESCORT
- ▁INEVITABLE
- ▁PSMITH
- ▁RELUCTANT
- ▁PRECEDING
- ▁RESORT
- ▁OUTRAGE
- ▁AMBASSADOR
- ▁CONSOLATION
- ▁RECOGNITION
- ▁REMORSE
- ▁BEHALF
- ▁FORMIDABLE
- ▁GRAVITY
- ▁DIVIDE
- ▁CONFRONT
- ▁GIGANTIC
- ▁OCTOBER
- ▁FLANK
- ▁SLEW
- ▁CLARA
- ▁FILM
- ▁BULK
- ▁POMP
- ▁ELEANOR
- ▁EMPHASIS
- ▁JAPANESE
- ▁CAVALRY
- ▁EXCLUSIVE
- ▁PERFUME
- ▁BRONZE
- ▁FEDERAL
- ▁LIQUID
- ▁RUBBING
- ▁OVEN
- DOLPH
- ▁CONVULS
- ▁DEPRIVED
- ▁RESPONSIBILITY
- ▁SIGNIFICANT
- ▁WAISTCOAT
- ▁CLUSTER
- ▁MARTHA
- ▁REVERSE
- ▁ATTORNEY
- ▁DROOP
- ▁SKILFUL
- ▁HABITUAL
- ▁PUMP
- ▁INTERVEN
- ▁OWL
- ▁CONJECTURE
- ▁FANTASTIC
- ▁RESPONSIBLE
- ▁DESTINED
- ▁DOCUMENT
- ▁THEREUPON
- ▁GODDESS
- ▁PACIFIC
- ▁WARRANT
- ▁COSTUME
- ▁BRIDLE
- ▁CALIFORNIA
- ▁DEMOCRATIC
- ▁EUSTACE
- ▁SQUIRREL
- ▁UNCOMMON
- ▁MARVELLOUS
- ▁PLOUGH
- ▁TRAGEDY
- ▁VAULT
- ▁HESITATE
- ▁REFRAIN
- ▁ADMIRING
- ▁CORPORAL
- ▁ENTITLED
- ▁SHREWD
- ▁SQUEEZ
- ▁ACCURATE
- ▁TEMPEST
- ▁MONUMENT
- ▁SIEGE
- ▁CHINESE
- ▁RAVEN
- ▁LOUNG
- ▁ASSASSIN
- ▁INFLICT
- ▁AGITATED
- ▁DESIRABLE
- ▁EARLIEST
- ▁LAUNCH
- ▁PILOT
- ▁PULSE
- ▁MUTE
- LEIGH
- ▁LIQUOR
- ▁SCARECROW
- ▁SKULL
- ▁DESOLATE
- ▁SUBLIME
- ▁SERENE
- ▁RECESS
- ▁WAKING
- ▁CHARLOTTE
- ▁CIRCULAR
- ▁INJUSTICE
- ▁PINOCCHIO
- ▁PRISCILLA
- ▁THYSELF
- ▁OCCURRENCE
- ▁CASUAL
- ▁FRANTIC
- ▁LEGEND
- ▁FERTIL
- ▁BACKGROUND
- ▁DELICACY
- ▁ESTRALLA
- ▁MANUSCRIPT
- ▁RESPONSE
- ▁UNIVERSITY
- ▁WOLVES
- ▁SCANDAL
- ▁STUMBLE
- ▁HOARSE
- ▁BODILY
- ▁CONVENT
- ▁EXAMINING
- ▁INCAPABLE
- ▁PERCEIVING
- ▁PHILADELPHIA
- ▁SUBSEQUENT
- ▁THIEVES
- ▁ACCUMULAT
- ▁DAMSEL
- ▁SCOTCH
- ▁UNDERNEATH
- ▁NOBILITY
- ▁SMASH
- ▁REVOLT
- ▁ENGAGE
- ▁CATHEDRAL
- ▁CHAMPION
- ▁DESPATCH
- ▁ETERNITY
- ▁JANUARY
- ▁PLEADED
- ▁PROBABILITY
- ▁JIMMIE
- ▁PARALLEL
- ▁FISHERMAN
- ▁JERRY
- ▁SWORE
- ▁DRAUGHT
- ▁OPPONENT
- ▁PRIMITIVE
- ▁SIGNIFICANCE
- ▁SUBSTANTIAL
- ▁AMAZED
- ▁DUNBAR
- ▁COMMEND
- ▁CONTEMPLATE
- ▁TESTIMONY
- ▁IMPERIAL
- ▁ADAPT
- ▁JUICE
- ▁CALAMIT
- CULAR
- ▁CHATEAU
- ▁PHOENIX
- ▁PRUDENT
- ▁SOLUTION
- ▁VILLEFORT
- ▁REACTION
- ▁RELAX
- ▁YU
- ▁PROHIBIT
- ▁DISTRUST
- ▁PLUNDER
- ▁WELFARE
- ▁NAVIGAT
- ▁PARLOR
- ▁LAZY
- ▁DETACH
- OMETER
- ▁PRIV
- ▁DISCOURAGE
- ▁OBSTINATE
- ▁REJOICING
- ▁SERMON
- ▁VEHICLE
- ▁FANCIES
- ▁ENLIGHTEN
- ▁ACUTE
- ▁ILLUSION
- ▁ANTHEA
- ▁MARTIAN
- ▁EXCITE
- ▁GENEROSITY
- OLOGIST
- ▁AMAZING
- ▁UNWORTHY
- ▁INTERNAL
- ▁INCENSE
- ▁VIBRAT
- ▁ADHERE
- ROACH
- ▁FEBRUARY
- ▁MEXICAN
- ▁POTATOES
- ▁INCESSANT
- ▁INTERPOSED
- ▁PARCEL
- ▁VEXED
- ▁PROMOTE
- MIDST
- ▁ARISTOCRAT
- ▁CYRIL
- ▁EMBARK
- ▁ABUNDANCE
- ▁LITERALLY
- ▁SURGEON
- ▁TERRACE
- ▁ATLANTIC
- ▁MARTYR
- ▁SPECK
- ▁SENATE
- ▁LOAF
- ▁ADMINISTER
- ▁APPREHEND
- ▁SUBDUED
- ▁TEMPORARY
- ▁DOMINION
- ▁ELABORATE
- ▁DIGNIFIED
- ▁ELIZA
- ▁SPLASH
- ▁CONSEIL
- ▁DEXTER
- ▁UNSEEN
- ▁TRAGIC
- VOCATION
- ▁GRATIFY
- ▁BACHELOR
- ▁DEFENSE
- ▁EXCURSION
- ▁FACULTIES
- ▁PROPRIETOR
- ▁SYMPATHETIC
- ▁UNNECESSARY
- ▁RADIANT
- ▁VACANT
- ▁OUNCE
- ▁SCREW
- ▁PHENOMENON
- ▁PROMINENT
- ▁WORRIED
- ▁STUDIES
- ▁CLIMATE
- ▁KEITH
- ▁ARAMIS
- ▁BLISS
- ▁CONTINUAL
- ▁SURPASS
- ▁HEBREW
- ▁IDENTITY
- ▁PROVOKE
- ▁TEMPERAMENT
- ▁CHARIOT
- ▁HARBOR
- ▁NINTH
- ▁PRIOR
- ▁DESIROUS
- ▁JERUSALEM
- ▁UNDERTAKING
- ▁EDISON
- ▁MIRTH
- ▁SCOUT
- ▁APPARATUS
- ▁ILLUSTRATION
- ▁INTELLIGIBLE
- ▁INVARIABLY
- ▁PIERCED
- ▁REVIEW
- ▁FLICKER
- ▁HAZARD
- ▁REVELATION
- ▁DIXON
- ▁EXCITING
- ▁GOSPEL
- ▁CONSTANCE
- ▁OVERTAKE
- ▁GUINEA
- ▁ALADDIN
- ▁CHICAGO
- ▁TULLIVER
- ▁HAMILTON
- ▁GARRISON
- ▁DISCIPLE
- ▁INTENSITY
- ▁TRAITOR
- ▁CHANCELLOR
- ▁PROVERB
- ▁DAGGER
- ▁FORESEE
- ▁CONFIDE
- ▁GLIMMER
- ▁CHAUVELIN
- ▁ILLUSTRATE
- ▁VOLUNTEER
- ▁JUNGLE
- ▁STREAK
- ▁SUNRISE
- ▁DISSOLV
- ▁QUEST
- ▁AWHILE
- ▁FELICITY
- ▁LEGISLATURE
- ▁LEONORA
- ▁MAGAZINE
- ▁PITIFUL
- ▁COLONY
- ▁SHAWL
- ▁ARRIVING
- ▁FUNDAMENTAL
- ▁CARPENTER
- ▁OVERFLOW
- ▁EXPAND
- ▁HARVEST
- ▁FEMININE
- ▁INNUMERABLE
- ▁SCRAMBLE
- ▁TWENTIETH
- ▁TRIFLING
- ▁GHASTL
- ▁CONQUEST
- ▁DANIEL
- ▁FACILIT
- ▁FORSAKE
- ▁BEHAVIOUR
- ▁GORGEOUS
- ▁PRODUCING
- ▁HAPPIER
- ▁PROMISING
- ▁RAINBOW
- ▁INSTINCTIVELY
- ▁DECREE
- ▁EYEBROWS
- ▁IRRESISTIBLE
- ▁PHARAOH
- ▁SCROOGE
- ▁UNNATURAL
- ▁CRUMBS
- ▁REFINED
- ▁DREARY
- ▁TRENCH
- ▁CONVINCE
- ▁FRINGE
- ▁EXTREMITY
- ▁INTIMACY
- ▁SCOUNDREL
- ▁SUFFRAGE
- ▁UNEASINESS
- ▁BARRICADE
- ▁CIRCULAT
- ▁SAMUEL
- ▁BRUCE
- ▁DARCY
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 10
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: true
```
</details>
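For quick reference, a minimal inference sketch using ESPnet's `Speech2Text` interface is shown below. The repository id is a placeholder (replace it with this model's actual Hugging Face id), and the audio is assumed to be 16 kHz mono to match the `fs: 16k` frontend setting above.

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Placeholder id -- substitute the actual repo id of this checkpoint.
speech2text = Speech2Text.from_pretrained("espnet/<this-asr-model>")

# 16 kHz mono audio, matching the frontend configuration (fs: 16k).
speech, rate = soundfile.read("sample.wav")

# Returns an n-best list of (text, tokens, token_ids, hypothesis) tuples.
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```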
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"BEAR",
"CRAFT"
] | Non_BioNLP |
mlx-community/Llama-3.2-3B-Fluxed | mlx-community | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"mlx",
"conversational",
"en",
"dataset:VincentGOURBIN/FluxPrompting",
"base_model:VincentGOURBIN/Llama-3.2-3B-Fluxed",
"base_model:finetune:VincentGOURBIN/Llama-3.2-3B-Fluxed",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,733,257,468,000 | 2024-12-03T20:35:23 | 55 | 2 | ---
base_model: VincentGOURBIN/Llama-3.2-3B-Fluxed
datasets:
- VincentGOURBIN/FluxPrompting
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- mlx
---
# mlx-community/Llama-3.2-3B-Fluxed
The model [mlx-community/Llama-3.2-3B-Fluxed](https://huggingface.co/mlx-community/Llama-3.2-3B-Fluxed) was converted to MLX format from [VincentGOURBIN/Llama-3.2-3B-Fluxed](https://huggingface.co/VincentGOURBIN/Llama-3.2-3B-Fluxed) using mlx-lm version **0.19.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model_id = "mlx-community/Llama-3.2-3B-Fluxed"
model, tokenizer = load(model_id)
user_need = "a toucan coding on a mac"
system_message = """
You are a prompt creation assistant for FLUX, an AI image generation model. Your mission is to help the user craft a detailed and optimized prompt by following these steps:
1. **Understanding the User's Needs**:
- The user provides a basic idea, concept, or description.
- Analyze their input to determine essential details and nuances.
2. **Enhancing Details**:
- Enrich the basic idea with vivid, specific, and descriptive elements.
- Include factors such as lighting, mood, style, perspective, and specific objects or elements the user wants in the scene.
3. **Formatting the Prompt**:
- Structure the enriched description into a clear, precise, and effective prompt.
- Ensure the prompt is tailored for high-quality output from the FLUX model, considering its strengths (e.g., photorealistic details, fine anatomy, or artistic styles).
Use this process to compose a detailed and coherent prompt. Ensure the final prompt is clear and complete, and write your response in English.
Ensure that the final part is a synthesized version of the prompt.
"""
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_need},
    ]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
else:
    # Fall back to plain concatenation if the tokenizer has no chat template.
    prompt = f"{system_message}\n\n{user_need}"

response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=1000)
``` | [
"CRAFT"
] | Non_BioNLP |
Dagobert42/distilbert-base-uncased-biored-augmented | Dagobert42 | token-classification | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"low-resource NER",
"token_classification",
"biomedicine",
"medical NER",
"generated_from_trainer",
"en",
"dataset:medicine",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,707,937,268,000 | 2024-02-22T11:27:55 | 22 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- medicine
language:
- en
license: mit
metrics:
- accuracy
- precision
- recall
- f1
tags:
- low-resource NER
- token_classification
- biomedicine
- medical NER
- generated_from_trainer
model-index:
- name: Dagobert42/distilbert-base-uncased-biored-augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dagobert42/distilbert-base-uncased-biored-augmented
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the bigbio/biored dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5692
- Accuracy: 0.7978
- Precision: 0.5993
- Recall: 0.5337
- F1: 0.5536
- Weighted F1: 0.7929
## Model description
More information needed
## Intended uses & limitations
More information needed
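In the absence of an official snippet, a minimal usage sketch with the `transformers` token-classification pipeline is shown below; the example sentence and aggregation setting are illustrative, and the entity labels follow the BioRED annotation schema.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Dagobert42/distilbert-base-uncased-biored-augmented",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

text = "Mutations in BRCA1 are associated with an increased risk of breast cancer."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```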
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Weighted F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------:|
| No log | 1.0 | 25 | 0.6037 | 0.7824 | 0.5931 | 0.4937 | 0.5272 | 0.7719 |
| No log | 2.0 | 50 | 0.5858 | 0.7932 | 0.6023 | 0.5298 | 0.5511 | 0.7849 |
| No log | 3.0 | 75 | 0.5887 | 0.795 | 0.5757 | 0.5283 | 0.544 | 0.7842 |
| No log | 4.0 | 100 | 0.5890 | 0.7937 | 0.5911 | 0.5331 | 0.5466 | 0.7864 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.15.0
| [
"BIORED"
] | BioNLP |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task592 | Lots-of-LoRAs | null | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 1,736,087,017,000 | 2025-01-05T14:23:42 | 0 | 0 | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task592
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task592_sciq_incorrect_answer_generation
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
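As a starting point, a minimal sketch of loading this adapter on top of its base model with PEFT is given below; it assumes the adapter is stored in standard PEFT format, that you have access to the base Mistral checkpoint, and that the prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task592"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

prompt = "Write an incorrect answer option for: What gas do plants absorb during photosynthesis?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```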
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task592_sciq_incorrect_answer_generation sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"SCIQ"
] | Non_BioNLP |
codegood/Llama3-Aloe-8B-Alpha-Q4_K_M-GGUF | codegood | question-answering | [
"transformers",
"gguf",
"biology",
"medical",
"llama-cpp",
"gguf-my-repo",
"question-answering",
"en",
"dataset:argilla/dpo-mix-7k",
"dataset:nvidia/HelpSteer",
"dataset:jondurbin/airoboros-3.2",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:LDJnr/Capybara",
"dataset:HPAI-BSC/CareQA",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:lukaemon/mmlu",
"dataset:bigbio/pubmed_qa",
"dataset:openlifescienceai/medmcqa",
"dataset:bigbio/med_qa",
"dataset:HPAI-BSC/better-safe-than-sorry",
"dataset:HPAI-BSC/pubmedqa-cot",
"dataset:HPAI-BSC/medmcqa-cot",
"dataset:HPAI-BSC/medqa-cot",
"base_model:HPAI-BSC/Llama3-Aloe-8B-Alpha",
"base_model:quantized:HPAI-BSC/Llama3-Aloe-8B-Alpha",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,729,098,632,000 | 2024-10-16T17:10:56 | 60 | 1 | ---
base_model: HPAI-BSC/Llama3-Aloe-8B-Alpha
datasets:
- argilla/dpo-mix-7k
- nvidia/HelpSteer
- jondurbin/airoboros-3.2
- hkust-nlp/deita-10k-v0
- LDJnr/Capybara
- HPAI-BSC/CareQA
- GBaker/MedQA-USMLE-4-options
- lukaemon/mmlu
- bigbio/pubmed_qa
- openlifescienceai/medmcqa
- bigbio/med_qa
- HPAI-BSC/better-safe-than-sorry
- HPAI-BSC/pubmedqa-cot
- HPAI-BSC/medmcqa-cot
- HPAI-BSC/medqa-cot
language:
- en
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: question-answering
tags:
- biology
- medical
- llama-cpp
- gguf-my-repo
---
# codegood/Llama3-Aloe-8B-Alpha-Q4_K_M-GGUF
This model was converted to GGUF format from [`HPAI-BSC/Llama3-Aloe-8B-Alpha`](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha) for more details on the model.
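For reference, the conversion performed by that space roughly corresponds to the local workflow sketched below. This is only an illustrative sketch: the script and binary names (`convert_hf_to_gguf.py`, `llama-quantize`) and the local paths are assumptions based on a recent llama.cpp checkout and may differ between versions.
```bash
# Hypothetical local equivalent of the GGUF-my-repo conversion (names assume a recent llama.cpp checkout).
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the original Hugging Face checkpoint to a GGUF file in 16-bit precision.
python llama.cpp/convert_hf_to_gguf.py path/to/Llama3-Aloe-8B-Alpha \
  --outfile llama3-aloe-8b-alpha-f16.gguf --outtype f16

# Quantize to Q4_K_M (requires the llama.cpp binaries to be built first, e.g. with `make`).
./llama.cpp/llama-quantize llama3-aloe-8b-alpha-f16.gguf llama3-aloe-8b-alpha-q4_k_m.gguf Q4_K_M
```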
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo codegood/Llama3-Aloe-8B-Alpha-Q4_K_M-GGUF --hf-file llama3-aloe-8b-alpha-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo codegood/Llama3-Aloe-8B-Alpha-Q4_K_M-GGUF --hf-file llama3-aloe-8b-alpha-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo codegood/Llama3-Aloe-8B-Alpha-Q4_K_M-GGUF --hf-file llama3-aloe-8b-alpha-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo codegood/Llama3-Aloe-8B-Alpha-Q4_K_M-GGUF --hf-file llama3-aloe-8b-alpha-q4_k_m.gguf -c 2048
```
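Once `llama-server` is running, you can send it HTTP requests. The following is a minimal sketch, assuming the server listens on its default port (8080) and exposes the OpenAI-compatible chat endpoint; the exact endpoint paths and defaults may vary between llama.cpp versions.
```bash
# Query the running llama-server via its OpenAI-compatible chat completions endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What are the common symptoms of anemia?"}
    ],
    "max_tokens": 256
  }'
```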
| [
"MEDQA",
"PUBMEDQA"
] | BioNLP |
chris-code/multilingual-e5-large-Q8_0-GGUF | chris-code | feature-extraction | [
"sentence-transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"feature-extraction",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"base_model:intfloat/multilingual-e5-large",
"base_model:quantized:intfloat/multilingual-e5-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,724,915,602,000 | 2024-08-29T07:13:29 | 21 | 0 | ---
base_model: intfloat/multilingual-e5-large
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: multilingual-e5-large
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.05970149253731
- type: ap
value: 43.486574390835635
- type: f1
value: 73.32700092140148
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.22055674518201
- type: ap
value: 81.55756710830498
- type: f1
value: 69.28271787752661
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 80.41979010494754
- type: ap
value: 29.34879922376344
- type: f1
value: 67.62475449011278
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.8372591006424
- type: ap
value: 26.557560591210738
- type: f1
value: 64.96619417368707
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.489875
- type: ap
value: 90.98758636917603
- type: f1
value: 93.48554819717332
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.564
- type: f1
value: 46.75122173518047
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.400000000000006
- type: f1
value: 44.17195682400632
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 43.068
- type: f1
value: 42.38155696855596
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.89
- type: f1
value: 40.84407321682663
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.120000000000005
- type: f1
value: 39.522976223819114
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.832
- type: f1
value: 38.0392533394713
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.725
- type: map_at_10
value: 46.055
- type: map_at_100
value: 46.900999999999996
- type: map_at_1000
value: 46.911
- type: map_at_3
value: 41.548
- type: map_at_5
value: 44.297
- type: mrr_at_1
value: 31.152
- type: mrr_at_10
value: 46.231
- type: mrr_at_100
value: 47.07
- type: mrr_at_1000
value: 47.08
- type: mrr_at_3
value: 41.738
- type: mrr_at_5
value: 44.468999999999994
- type: ndcg_at_1
value: 30.725
- type: ndcg_at_10
value: 54.379999999999995
- type: ndcg_at_100
value: 58.138
- type: ndcg_at_1000
value: 58.389
- type: ndcg_at_3
value: 45.156
- type: ndcg_at_5
value: 50.123
- type: precision_at_1
value: 30.725
- type: precision_at_10
value: 8.087
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.54
- type: precision_at_5
value: 13.542000000000002
- type: recall_at_1
value: 30.725
- type: recall_at_10
value: 80.868
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 55.619
- type: recall_at_5
value: 67.71000000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.30960650674069
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.427074197498996
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.28270056031872
- type: mrr
value: 74.38332673789738
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.05942144105269
- type: cos_sim_spearman
value: 82.51212105850809
- type: euclidean_pearson
value: 81.95639829909122
- type: euclidean_spearman
value: 82.3717564144213
- type: manhattan_pearson
value: 81.79273425468256
- type: manhattan_spearman
value: 82.20066817871039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.46764091858039
- type: f1
value: 99.37717466945023
- type: precision
value: 99.33194154488518
- type: recall
value: 99.46764091858039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.29407880255337
- type: f1
value: 98.11248073959938
- type: precision
value: 98.02443319392472
- type: recall
value: 98.29407880255337
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.79009352268791
- type: f1
value: 97.5176076665512
- type: precision
value: 97.38136473848286
- type: recall
value: 97.79009352268791
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26276987888363
- type: f1
value: 99.20133403545726
- type: precision
value: 99.17500438827453
- type: recall
value: 99.26276987888363
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.72727272727273
- type: f1
value: 84.67672206031433
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.34220182511161
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 33.4987096128766
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.558249999999997
- type: map_at_10
value: 34.44425000000001
- type: map_at_100
value: 35.59833333333333
- type: map_at_1000
value: 35.706916666666665
- type: map_at_3
value: 31.691749999999995
- type: map_at_5
value: 33.252916666666664
- type: mrr_at_1
value: 30.252666666666666
- type: mrr_at_10
value: 38.60675
- type: mrr_at_100
value: 39.42666666666666
- type: mrr_at_1000
value: 39.48408333333334
- type: mrr_at_3
value: 36.17441666666665
- type: mrr_at_5
value: 37.56275
- type: ndcg_at_1
value: 30.252666666666666
- type: ndcg_at_10
value: 39.683
- type: ndcg_at_100
value: 44.68541666666667
- type: ndcg_at_1000
value: 46.94316666666668
- type: ndcg_at_3
value: 34.961749999999995
- type: ndcg_at_5
value: 37.215666666666664
- type: precision_at_1
value: 30.252666666666666
- type: precision_at_10
value: 6.904166666666667
- type: precision_at_100
value: 1.0989999999999995
- type: precision_at_1000
value: 0.14733333333333334
- type: precision_at_3
value: 16.037666666666667
- type: precision_at_5
value: 11.413583333333333
- type: recall_at_1
value: 25.558249999999997
- type: recall_at_10
value: 51.13341666666666
- type: recall_at_100
value: 73.08366666666667
- type: recall_at_1000
value: 88.79483333333334
- type: recall_at_3
value: 37.989083333333326
- type: recall_at_5
value: 43.787833333333325
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.338
- type: map_at_10
value: 18.360000000000003
- type: map_at_100
value: 19.942
- type: map_at_1000
value: 20.134
- type: map_at_3
value: 15.174000000000001
- type: map_at_5
value: 16.830000000000002
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 33.768
- type: mrr_at_100
value: 34.707
- type: mrr_at_1000
value: 34.766000000000005
- type: mrr_at_3
value: 30.977
- type: mrr_at_5
value: 32.528
- type: ndcg_at_1
value: 23.257
- type: ndcg_at_10
value: 25.733
- type: ndcg_at_100
value: 32.288
- type: ndcg_at_1000
value: 35.992000000000004
- type: ndcg_at_3
value: 20.866
- type: ndcg_at_5
value: 22.612
- type: precision_at_1
value: 23.257
- type: precision_at_10
value: 8.124
- type: precision_at_100
value: 1.518
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 15.679000000000002
- type: precision_at_5
value: 12.117
- type: recall_at_1
value: 10.338
- type: recall_at_10
value: 31.154
- type: recall_at_100
value: 54.161
- type: recall_at_1000
value: 75.21900000000001
- type: recall_at_3
value: 19.427
- type: recall_at_5
value: 24.214
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.498
- type: map_at_10
value: 19.103
- type: map_at_100
value: 27.375
- type: map_at_1000
value: 28.981
- type: map_at_3
value: 13.764999999999999
- type: map_at_5
value: 15.950000000000001
- type: mrr_at_1
value: 65.5
- type: mrr_at_10
value: 74.53800000000001
- type: mrr_at_100
value: 74.71799999999999
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.792
- type: mrr_at_5
value: 73.554
- type: ndcg_at_1
value: 53.37499999999999
- type: ndcg_at_10
value: 41.286
- type: ndcg_at_100
value: 45.972
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 46.172999999999995
- type: ndcg_at_5
value: 43.033
- type: precision_at_1
value: 65.5
- type: precision_at_10
value: 32.725
- type: precision_at_100
value: 10.683
- type: precision_at_1000
value: 1.978
- type: precision_at_3
value: 50
- type: precision_at_5
value: 41.349999999999994
- type: recall_at_1
value: 8.498
- type: recall_at_10
value: 25.070999999999998
- type: recall_at_100
value: 52.383
- type: recall_at_1000
value: 74.91499999999999
- type: recall_at_3
value: 15.207999999999998
- type: recall_at_5
value: 18.563
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.5
- type: f1
value: 41.93833713984145
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.914
- type: map_at_10
value: 78.10000000000001
- type: map_at_100
value: 78.333
- type: map_at_1000
value: 78.346
- type: map_at_3
value: 76.626
- type: map_at_5
value: 77.627
- type: mrr_at_1
value: 72.74199999999999
- type: mrr_at_10
value: 82.414
- type: mrr_at_100
value: 82.511
- type: mrr_at_1000
value: 82.513
- type: mrr_at_3
value: 81.231
- type: mrr_at_5
value: 82.065
- type: ndcg_at_1
value: 72.74199999999999
- type: ndcg_at_10
value: 82.806
- type: ndcg_at_100
value: 83.677
- type: ndcg_at_1000
value: 83.917
- type: ndcg_at_3
value: 80.305
- type: ndcg_at_5
value: 81.843
- type: precision_at_1
value: 72.74199999999999
- type: precision_at_10
value: 10.24
- type: precision_at_100
value: 1.089
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 31.268
- type: precision_at_5
value: 19.706000000000003
- type: recall_at_1
value: 67.914
- type: recall_at_10
value: 92.889
- type: recall_at_100
value: 96.42699999999999
- type: recall_at_1000
value: 97.92
- type: recall_at_3
value: 86.21
- type: recall_at_5
value: 90.036
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.166
- type: map_at_10
value: 35.57
- type: map_at_100
value: 37.405
- type: map_at_1000
value: 37.564
- type: map_at_3
value: 30.379
- type: map_at_5
value: 33.324
- type: mrr_at_1
value: 43.519000000000005
- type: mrr_at_10
value: 51.556000000000004
- type: mrr_at_100
value: 52.344
- type: mrr_at_1000
value: 52.373999999999995
- type: mrr_at_3
value: 48.868
- type: mrr_at_5
value: 50.319
- type: ndcg_at_1
value: 43.519000000000005
- type: ndcg_at_10
value: 43.803
- type: ndcg_at_100
value: 50.468999999999994
- type: ndcg_at_1000
value: 53.111
- type: ndcg_at_3
value: 38.893
- type: ndcg_at_5
value: 40.653
- type: precision_at_1
value: 43.519000000000005
- type: precision_at_10
value: 12.253
- type: precision_at_100
value: 1.931
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 25.617
- type: precision_at_5
value: 19.383
- type: recall_at_1
value: 22.166
- type: recall_at_10
value: 51.6
- type: recall_at_100
value: 76.574
- type: recall_at_1000
value: 92.192
- type: recall_at_3
value: 34.477999999999994
- type: recall_at_5
value: 41.835
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.041
- type: map_at_10
value: 62.961999999999996
- type: map_at_100
value: 63.79899999999999
- type: map_at_1000
value: 63.854
- type: map_at_3
value: 59.399
- type: map_at_5
value: 61.669
- type: mrr_at_1
value: 78.082
- type: mrr_at_10
value: 84.321
- type: mrr_at_100
value: 84.49600000000001
- type: mrr_at_1000
value: 84.502
- type: mrr_at_3
value: 83.421
- type: mrr_at_5
value: 83.977
- type: ndcg_at_1
value: 78.082
- type: ndcg_at_10
value: 71.229
- type: ndcg_at_100
value: 74.10900000000001
- type: ndcg_at_1000
value: 75.169
- type: ndcg_at_3
value: 66.28699999999999
- type: ndcg_at_5
value: 69.084
- type: precision_at_1
value: 78.082
- type: precision_at_10
value: 14.993
- type: precision_at_100
value: 1.7239999999999998
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 42.737
- type: precision_at_5
value: 27.843
- type: recall_at_1
value: 39.041
- type: recall_at_10
value: 74.96300000000001
- type: recall_at_100
value: 86.199
- type: recall_at_1000
value: 93.228
- type: recall_at_3
value: 64.105
- type: recall_at_5
value: 69.608
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.23160000000001
- type: ap
value: 85.5674856808308
- type: f1
value: 90.18033354786317
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.091
- type: map_at_10
value: 36.753
- type: map_at_100
value: 37.913000000000004
- type: map_at_1000
value: 37.958999999999996
- type: map_at_3
value: 32.818999999999996
- type: map_at_5
value: 35.171
- type: mrr_at_1
value: 24.742
- type: mrr_at_10
value: 37.285000000000004
- type: mrr_at_100
value: 38.391999999999996
- type: mrr_at_1000
value: 38.431
- type: mrr_at_3
value: 33.440999999999995
- type: mrr_at_5
value: 35.75
- type: ndcg_at_1
value: 24.742
- type: ndcg_at_10
value: 43.698
- type: ndcg_at_100
value: 49.145
- type: ndcg_at_1000
value: 50.23800000000001
- type: ndcg_at_3
value: 35.769
- type: ndcg_at_5
value: 39.961999999999996
- type: precision_at_1
value: 24.742
- type: precision_at_10
value: 6.7989999999999995
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.183
- type: recall_at_1
value: 24.091
- type: recall_at_10
value: 65.068
- type: recall_at_100
value: 89.899
- type: recall_at_1000
value: 98.16
- type: recall_at_3
value: 43.68
- type: recall_at_5
value: 53.754999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.66621067031465
- type: f1
value: 93.49622853272142
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.94702733164272
- type: f1
value: 91.17043441745282
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.20146764509674
- type: f1
value: 91.98359080555608
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.99780770435328
- type: f1
value: 89.19746342724068
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.78486912871998
- type: f1
value: 89.24578823628642
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.74502712477394
- type: f1
value: 89.00297573881542
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.9046967624259
- type: f1
value: 59.36787125785957
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.5280360664976
- type: f1
value: 57.17723440888718
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.44029352901934
- type: f1
value: 54.052855531072964
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 70.5606013153774
- type: f1
value: 52.62215934386531
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.11581211903908
- type: f1
value: 52.341291845645465
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.28933092224233
- type: f1
value: 57.07918745504911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.38063214525892
- type: f1
value: 59.46463723443009
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.06926698049766
- type: f1
value: 52.49084283283562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.74983187626093
- type: f1
value: 56.960640620165904
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.86550100874243
- type: f1
value: 62.47370548140688
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.971082716879636
- type: f1
value: 61.03812421957381
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.98318762609282
- type: f1
value: 51.51207916008392
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.45527908540686
- type: f1
value: 66.16631905400318
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.32750504371216
- type: f1
value: 66.16755288646591
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.09213180901143
- type: f1
value: 66.95654394661507
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.75588433086752
- type: f1
value: 71.79973779656923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.49428379287154
- type: f1
value: 68.37494379215734
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.90921318090115
- type: f1
value: 66.79517376481645
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.12104909213181
- type: f1
value: 67.29448842879584
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.34095494283793
- type: f1
value: 67.01134288992947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.61264290517822
- type: f1
value: 64.68730512660757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.79757901815738
- type: f1
value: 65.24938539425598
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.68728984532616
- type: f1
value: 67.0487169762553
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.07464694014795
- type: f1
value: 59.183532276789286
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.04707464694015
- type: f1
value: 67.66829629003848
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.42434431741762
- type: f1
value: 59.01617226544757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.53127101546738
- type: f1
value: 68.10033760906255
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.50504371217215
- type: f1
value: 69.74931103158923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.91190316072628
- type: f1
value: 54.05551136648796
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.78211163416275
- type: f1
value: 49.874888544058535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.017484868863484
- type: f1
value: 44.53364263352014
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.16207128446537
- type: f1
value: 59.01185692320829
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.42501681237391
- type: f1
value: 67.13169450166086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0780094149294
- type: f1
value: 64.41720167850707
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.57162071284466
- type: f1
value: 62.414138683804424
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.71149966375252
- type: f1
value: 58.594805125087234
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.03900470746471
- type: f1
value: 63.87937257883887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.8776059179556
- type: f1
value: 57.48587618059131
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87895090786819
- type: f1
value: 66.8141299430347
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.45057162071285
- type: f1
value: 67.46444039673516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.546738399462
- type: f1
value: 68.63640876702655
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.72965702757229
- type: f1
value: 68.54119560379115
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.35574983187625
- type: f1
value: 65.88844917691927
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.70477471418964
- type: f1
value: 69.19665697061978
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0880968392737
- type: f1
value: 64.76962317666086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.18493611297916
- type: f1
value: 62.49984559035371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.75857431069265
- type: f1
value: 69.20053687623418
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.500336247478145
- type: f1
value: 55.2972398687929
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.68997982515132
- type: f1
value: 59.36848202755348
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.01950235373235
- type: f1
value: 60.09351954625423
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.29186281102892
- type: f1
value: 67.57860496703447
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.77471418964357
- type: f1
value: 61.913983147713836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87222595830532
- type: f1
value: 66.03679033708141
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.04505716207127
- type: f1
value: 61.28569169817908
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.38466711499663
- type: f1
value: 67.20532357036844
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.12306657700067
- type: f1
value: 68.91251226588182
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.20040349697378
- type: f1
value: 66.02657347714175
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.73907195696032
- type: f1
value: 66.98484521791418
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.58843308675185
- type: f1
value: 58.95591723092005
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.22730329522528
- type: f1
value: 66.0894499712115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48285137861465
- type: f1
value: 65.21963176785157
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.74714189643578
- type: f1
value: 66.8212192745412
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.09213180901143
- type: f1
value: 56.70735546356339
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.05716207128448
- type: f1
value: 74.8413712365364
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.69737726967047
- type: f1
value: 74.7664341963
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.90383322125084
- type: f1
value: 73.59201554448323
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.51176866173503
- type: f1
value: 77.46104434577758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.31069266980496
- type: f1
value: 74.61048660675635
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.95225285810356
- type: f1
value: 72.33160006574627
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.12373907195696
- type: f1
value: 73.20921012557481
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.86684599865501
- type: f1
value: 73.82348774610831
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.40215198386012
- type: f1
value: 71.11945183971858
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.12844653665098
- type: f1
value: 71.34450495911766
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.52252858103566
- type: f1
value: 73.98878711342999
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.93611297915265
- type: f1
value: 63.723200467653385
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11903160726295
- type: f1
value: 73.82138439467096
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.15198386012105
- type: f1
value: 66.02172193802167
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.32414256893072
- type: f1
value: 74.30943421170574
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.46805648957633
- type: f1
value: 77.62808409298209
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.318762609280434
- type: f1
value: 62.094284066075076
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.34902488231338
- type: f1
value: 57.12893860987984
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.88433086751849
- type: f1
value: 48.2272350802058
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.4425016812374
- type: f1
value: 64.61463095996173
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.04707464694015
- type: f1
value: 75.05099199098998
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.50437121721586
- type: f1
value: 69.83397721096314
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.94283792871553
- type: f1
value: 68.8704663703913
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.79488903833222
- type: f1
value: 63.615424063345436
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.88231338264963
- type: f1
value: 68.57892302593237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.248150638870214
- type: f1
value: 61.06680605338809
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.84196368527236
- type: f1
value: 74.52566464968763
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.8285137861466
- type: f1
value: 74.8853197608802
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.13248150638869
- type: f1
value: 74.3982040999179
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.49024882313383
- type: f1
value: 73.82153848368573
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.72158708809684
- type: f1
value: 71.85049433180541
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.137861466039
- type: f1
value: 75.37628348188467
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.86953597848016
- type: f1
value: 71.87537624521661
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.27572293207801
- type: f1
value: 68.80017302344231
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.09952925353059
- type: f1
value: 76.07992707688408
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.140551445864155
- type: f1
value: 61.73855010331415
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.27774041694687
- type: f1
value: 64.83664868894539
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.69468728984533
- type: f1
value: 64.76239666920868
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.44653665097512
- type: f1
value: 73.14646052013873
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.71351714862139
- type: f1
value: 66.67212180163382
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.9946200403497
- type: f1
value: 73.87348793725525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.15400134498992
- type: f1
value: 67.09433241421094
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.11365164761264
- type: f1
value: 73.59502539433753
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.82582380632145
- type: f1
value: 76.89992945316313
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.81237390719569
- type: f1
value: 72.36499770986265
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.480506569594695
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.71252128004552
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.421396787056548
- type: mrr
value: 32.48155274872267
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.595
- type: map_at_10
value: 12.642000000000001
- type: map_at_100
value: 15.726
- type: map_at_1000
value: 17.061999999999998
- type: map_at_3
value: 9.125
- type: map_at_5
value: 10.866000000000001
- type: mrr_at_1
value: 43.344
- type: mrr_at_10
value: 52.227999999999994
- type: mrr_at_100
value: 52.898999999999994
- type: mrr_at_1000
value: 52.944
- type: mrr_at_3
value: 49.845
- type: mrr_at_5
value: 51.115
- type: ndcg_at_1
value: 41.949999999999996
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 30.869999999999997
- type: ndcg_at_1000
value: 39.487
- type: ndcg_at_3
value: 38.903999999999996
- type: ndcg_at_5
value: 37.236999999999995
- type: precision_at_1
value: 43.344
- type: precision_at_10
value: 25.480000000000004
- type: precision_at_100
value: 7.672
- type: precision_at_1000
value: 2.028
- type: precision_at_3
value: 36.636
- type: precision_at_5
value: 32.632
- type: recall_at_1
value: 5.595
- type: recall_at_10
value: 16.466
- type: recall_at_100
value: 31.226
- type: recall_at_1000
value: 62.778999999999996
- type: recall_at_3
value: 9.931
- type: recall_at_5
value: 12.884
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.414
- type: map_at_10
value: 56.754000000000005
- type: map_at_100
value: 57.457
- type: map_at_1000
value: 57.477999999999994
- type: map_at_3
value: 52.873999999999995
- type: map_at_5
value: 55.175
- type: mrr_at_1
value: 45.278
- type: mrr_at_10
value: 59.192
- type: mrr_at_100
value: 59.650000000000006
- type: mrr_at_1000
value: 59.665
- type: mrr_at_3
value: 56.141
- type: mrr_at_5
value: 57.998000000000005
- type: ndcg_at_1
value: 45.278
- type: ndcg_at_10
value: 64.056
- type: ndcg_at_100
value: 66.89
- type: ndcg_at_1000
value: 67.364
- type: ndcg_at_3
value: 56.97
- type: ndcg_at_5
value: 60.719
- type: precision_at_1
value: 45.278
- type: precision_at_10
value: 9.994
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.512
- type: precision_at_5
value: 17.509
- type: recall_at_1
value: 40.414
- type: recall_at_10
value: 83.596
- type: recall_at_100
value: 95.72
- type: recall_at_1000
value: 99.24
- type: recall_at_3
value: 65.472
- type: recall_at_5
value: 74.039
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.352
- type: map_at_10
value: 84.369
- type: map_at_100
value: 85.02499999999999
- type: map_at_1000
value: 85.04
- type: map_at_3
value: 81.42399999999999
- type: map_at_5
value: 83.279
- type: mrr_at_1
value: 81.05
- type: mrr_at_10
value: 87.401
- type: mrr_at_100
value: 87.504
- type: mrr_at_1000
value: 87.505
- type: mrr_at_3
value: 86.443
- type: mrr_at_5
value: 87.10799999999999
- type: ndcg_at_1
value: 81.04
- type: ndcg_at_10
value: 88.181
- type: ndcg_at_100
value: 89.411
- type: ndcg_at_1000
value: 89.507
- type: ndcg_at_3
value: 85.28099999999999
- type: ndcg_at_5
value: 86.888
- type: precision_at_1
value: 81.04
- type: precision_at_10
value: 13.406
- type: precision_at_100
value: 1.5350000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.31
- type: precision_at_5
value: 24.54
- type: recall_at_1
value: 70.352
- type: recall_at_10
value: 95.358
- type: recall_at_100
value: 99.541
- type: recall_at_1000
value: 99.984
- type: recall_at_3
value: 87.111
- type: recall_at_5
value: 91.643
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.54068723291946
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.216287629895994
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.023000000000001
- type: map_at_10
value: 10.071
- type: map_at_100
value: 11.892
- type: map_at_1000
value: 12.196
- type: map_at_3
value: 7.234
- type: map_at_5
value: 8.613999999999999
- type: mrr_at_1
value: 19.900000000000002
- type: mrr_at_10
value: 30.516
- type: mrr_at_100
value: 31.656000000000002
- type: mrr_at_1000
value: 31.723000000000003
- type: mrr_at_3
value: 27.400000000000002
- type: mrr_at_5
value: 29.270000000000003
- type: ndcg_at_1
value: 19.900000000000002
- type: ndcg_at_10
value: 17.474
- type: ndcg_at_100
value: 25.020999999999997
- type: ndcg_at_1000
value: 30.728
- type: ndcg_at_3
value: 16.588
- type: ndcg_at_5
value: 14.498
- type: precision_at_1
value: 19.900000000000002
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 2.011
- type: precision_at_1000
value: 0.33899999999999997
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 12.839999999999998
- type: recall_at_1
value: 4.023000000000001
- type: recall_at_10
value: 18.497
- type: recall_at_100
value: 40.8
- type: recall_at_1000
value: 68.812
- type: recall_at_3
value: 9.508
- type: recall_at_5
value: 12.983
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.967008785134
- type: cos_sim_spearman
value: 80.23142141101837
- type: euclidean_pearson
value: 81.20166064704539
- type: euclidean_spearman
value: 80.18961335654585
- type: manhattan_pearson
value: 81.13925443187625
- type: manhattan_spearman
value: 80.07948723044424
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.94262461316023
- type: cos_sim_spearman
value: 80.01596278563865
- type: euclidean_pearson
value: 83.80799622922581
- type: euclidean_spearman
value: 79.94984954947103
- type: manhattan_pearson
value: 83.68473841756281
- type: manhattan_spearman
value: 79.84990707951822
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 80.57346443146068
- type: cos_sim_spearman
value: 81.54689837570866
- type: euclidean_pearson
value: 81.10909881516007
- type: euclidean_spearman
value: 81.56746243261762
- type: manhattan_pearson
value: 80.87076036186582
- type: manhattan_spearman
value: 81.33074987964402
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.54733787179849
- type: cos_sim_spearman
value: 77.72202105610411
- type: euclidean_pearson
value: 78.9043595478849
- type: euclidean_spearman
value: 77.93422804309435
- type: manhattan_pearson
value: 78.58115121621368
- type: manhattan_spearman
value: 77.62508135122033
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.59880017237558
- type: cos_sim_spearman
value: 89.31088630824758
- type: euclidean_pearson
value: 88.47069261564656
- type: euclidean_spearman
value: 89.33581971465233
- type: manhattan_pearson
value: 88.40774264100956
- type: manhattan_spearman
value: 89.28657485627835
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.08055117917084
- type: cos_sim_spearman
value: 85.78491813080304
- type: euclidean_pearson
value: 84.99329155500392
- type: euclidean_spearman
value: 85.76728064677287
- type: manhattan_pearson
value: 84.87947428989587
- type: manhattan_spearman
value: 85.62429454917464
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.14190939287384
- type: cos_sim_spearman
value: 82.27331573306041
- type: euclidean_pearson
value: 81.891896953716
- type: euclidean_spearman
value: 82.37695542955998
- type: manhattan_pearson
value: 81.73123869460504
- type: manhattan_spearman
value: 82.19989168441421
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.84695301843362
- type: cos_sim_spearman
value: 77.87790986014461
- type: euclidean_pearson
value: 76.91981583106315
- type: euclidean_spearman
value: 77.88154772749589
- type: manhattan_pearson
value: 76.94953277451093
- type: manhattan_spearman
value: 77.80499230728604
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.44657840482016
- type: cos_sim_spearman
value: 75.05531095119674
- type: euclidean_pearson
value: 75.88161755829299
- type: euclidean_spearman
value: 74.73176238219332
- type: manhattan_pearson
value: 75.63984765635362
- type: manhattan_spearman
value: 74.86476440770737
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.64700140524133
- type: cos_sim_spearman
value: 86.16014210425672
- type: euclidean_pearson
value: 86.49086860843221
- type: euclidean_spearman
value: 86.09729326815614
- type: manhattan_pearson
value: 86.43406265125513
- type: manhattan_spearman
value: 86.17740150939994
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.91170098764921
- type: cos_sim_spearman
value: 88.12437004058931
- type: euclidean_pearson
value: 88.81828254494437
- type: euclidean_spearman
value: 88.14831794572122
- type: manhattan_pearson
value: 88.93442183448961
- type: manhattan_spearman
value: 88.15254630778304
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.91390577997292
- type: cos_sim_spearman
value: 71.22979457536074
- type: euclidean_pearson
value: 74.40314008106749
- type: euclidean_spearman
value: 72.54972136083246
- type: manhattan_pearson
value: 73.85687539530218
- type: manhattan_spearman
value: 72.09500771742637
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.9301067983089
- type: cos_sim_spearman
value: 80.74989828346473
- type: euclidean_pearson
value: 81.36781301814257
- type: euclidean_spearman
value: 80.9448819964426
- type: manhattan_pearson
value: 81.0351322685609
- type: manhattan_spearman
value: 80.70192121844177
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.13820465980005
- type: cos_sim_spearman
value: 86.73532498758757
- type: euclidean_pearson
value: 87.21329451846637
- type: euclidean_spearman
value: 86.57863198601002
- type: manhattan_pearson
value: 87.06973713818554
- type: manhattan_spearman
value: 86.47534918791499
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.48720108904415
- type: cos_sim_spearman
value: 85.62221757068387
- type: euclidean_pearson
value: 86.1010129512749
- type: euclidean_spearman
value: 85.86580966509942
- type: manhattan_pearson
value: 86.26800938808971
- type: manhattan_spearman
value: 85.88902721678429
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 83.98021347333516
- type: cos_sim_spearman
value: 84.53806553803501
- type: euclidean_pearson
value: 84.61483347248364
- type: euclidean_spearman
value: 85.14191408011702
- type: manhattan_pearson
value: 84.75297588825967
- type: manhattan_spearman
value: 85.33176753669242
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.51856644893233
- type: cos_sim_spearman
value: 85.27510748506413
- type: euclidean_pearson
value: 85.09886861540977
- type: euclidean_spearman
value: 85.62579245860887
- type: manhattan_pearson
value: 84.93017860464607
- type: manhattan_spearman
value: 85.5063988898453
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.581573200584195
- type: cos_sim_spearman
value: 63.05503590247928
- type: euclidean_pearson
value: 63.652564812602094
- type: euclidean_spearman
value: 62.64811520876156
- type: manhattan_pearson
value: 63.506842893061076
- type: manhattan_spearman
value: 62.51289573046917
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.2248801729127
- type: cos_sim_spearman
value: 56.5936604678561
- type: euclidean_pearson
value: 43.98149464089
- type: euclidean_spearman
value: 56.108561882423615
- type: manhattan_pearson
value: 43.86880305903564
- type: manhattan_spearman
value: 56.04671150510166
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.17564527009831
- type: cos_sim_spearman
value: 64.57978560979488
- type: euclidean_pearson
value: 58.8818330154583
- type: euclidean_spearman
value: 64.99214839071281
- type: manhattan_pearson
value: 58.72671436121381
- type: manhattan_spearman
value: 65.10713416616109
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 26.772131864023297
- type: cos_sim_spearman
value: 34.68200792408681
- type: euclidean_pearson
value: 16.68082419005441
- type: euclidean_spearman
value: 34.83099932652166
- type: manhattan_pearson
value: 16.52605949659529
- type: manhattan_spearman
value: 34.82075801399475
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.42415189043831
- type: cos_sim_spearman
value: 63.54594264576758
- type: euclidean_pearson
value: 57.36577498297745
- type: euclidean_spearman
value: 63.111466379158074
- type: manhattan_pearson
value: 57.584543715873885
- type: manhattan_spearman
value: 63.22361054139183
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.55216762405518
- type: cos_sim_spearman
value: 56.98670142896412
- type: euclidean_pearson
value: 50.15318757562699
- type: euclidean_spearman
value: 56.524941926541906
- type: manhattan_pearson
value: 49.955618528674904
- type: manhattan_spearman
value: 56.37102209240117
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 49.20540980338571
- type: cos_sim_spearman
value: 59.9009453504406
- type: euclidean_pearson
value: 49.557749853620535
- type: euclidean_spearman
value: 59.76631621172456
- type: manhattan_pearson
value: 49.62340591181147
- type: manhattan_spearman
value: 59.94224880322436
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 51.508169956576985
- type: cos_sim_spearman
value: 66.82461565306046
- type: euclidean_pearson
value: 56.2274426480083
- type: euclidean_spearman
value: 66.6775323848333
- type: manhattan_pearson
value: 55.98277796300661
- type: manhattan_spearman
value: 66.63669848497175
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.86478788045507
- type: cos_sim_spearman
value: 76.7946552053193
- type: euclidean_pearson
value: 75.01598530490269
- type: euclidean_spearman
value: 76.83618917858281
- type: manhattan_pearson
value: 74.68337628304332
- type: manhattan_spearman
value: 76.57480204017773
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.922619099401984
- type: cos_sim_spearman
value: 56.599362477240774
- type: euclidean_pearson
value: 56.68307052369783
- type: euclidean_spearman
value: 54.28760436777401
- type: manhattan_pearson
value: 56.67763566500681
- type: manhattan_spearman
value: 53.94619541711359
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.74357206710913
- type: cos_sim_spearman
value: 72.5208244925311
- type: euclidean_pearson
value: 67.49254562186032
- type: euclidean_spearman
value: 72.02469076238683
- type: manhattan_pearson
value: 67.45251772238085
- type: manhattan_spearman
value: 72.05538819984538
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.25734330033191
- type: cos_sim_spearman
value: 76.98349083946823
- type: euclidean_pearson
value: 73.71642838667736
- type: euclidean_spearman
value: 77.01715504651384
- type: manhattan_pearson
value: 73.61712711868105
- type: manhattan_spearman
value: 77.01392571153896
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.18215462781212
- type: cos_sim_spearman
value: 65.54373266117607
- type: euclidean_pearson
value: 64.54126095439005
- type: euclidean_spearman
value: 65.30410369102711
- type: manhattan_pearson
value: 63.50332221148234
- type: manhattan_spearman
value: 64.3455878104313
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30509221440029
- type: cos_sim_spearman
value: 65.99582704642478
- type: euclidean_pearson
value: 63.43818859884195
- type: euclidean_spearman
value: 66.83172582815764
- type: manhattan_pearson
value: 63.055779168508764
- type: manhattan_spearman
value: 65.49585020501449
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.587830825340404
- type: cos_sim_spearman
value: 68.93467614588089
- type: euclidean_pearson
value: 62.3073527367404
- type: euclidean_spearman
value: 69.69758171553175
- type: manhattan_pearson
value: 61.9074580815789
- type: manhattan_spearman
value: 69.57696375597865
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.143220125577066
- type: cos_sim_spearman
value: 67.78857859159226
- type: euclidean_pearson
value: 55.58225107923733
- type: euclidean_spearman
value: 67.80662907184563
- type: manhattan_pearson
value: 56.24953502726514
- type: manhattan_spearman
value: 67.98262125431616
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 21.826928900322066
- type: cos_sim_spearman
value: 49.578506634400405
- type: euclidean_pearson
value: 27.939890138843214
- type: euclidean_spearman
value: 52.71950519136242
- type: manhattan_pearson
value: 26.39878683847546
- type: manhattan_spearman
value: 47.54609580342499
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.27603854632001
- type: cos_sim_spearman
value: 50.709255283710995
- type: euclidean_pearson
value: 59.5419024445929
- type: euclidean_spearman
value: 50.709255283710995
- type: manhattan_pearson
value: 59.03256832438492
- type: manhattan_spearman
value: 61.97797868009122
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.00757054859712
- type: cos_sim_spearman
value: 87.29283629622222
- type: euclidean_pearson
value: 86.54824171775536
- type: euclidean_spearman
value: 87.24364730491402
- type: manhattan_pearson
value: 86.5062156915074
- type: manhattan_spearman
value: 87.15052170378574
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.03549357197389
- type: mrr
value: 95.05437645143527
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.260999999999996
- type: map_at_10
value: 66.259
- type: map_at_100
value: 66.884
- type: map_at_1000
value: 66.912
- type: map_at_3
value: 63.685
- type: map_at_5
value: 65.35499999999999
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 67.5
- type: mrr_at_100
value: 68.013
- type: mrr_at_1000
value: 68.038
- type: mrr_at_3
value: 65.61099999999999
- type: mrr_at_5
value: 66.861
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 70.41
- type: ndcg_at_100
value: 73.10600000000001
- type: ndcg_at_1000
value: 73.846
- type: ndcg_at_3
value: 66.133
- type: ndcg_at_5
value: 68.499
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.232999999999999
- type: precision_at_100
value: 1.0630000000000002
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.667
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 57.260999999999996
- type: recall_at_10
value: 81.94399999999999
- type: recall_at_100
value: 93.867
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.339
- type: recall_at_5
value: 76.25
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74356435643564
- type: cos_sim_ap
value: 93.13411948212683
- type: cos_sim_f1
value: 86.80521991300147
- type: cos_sim_precision
value: 84.00374181478017
- type: cos_sim_recall
value: 89.8
- type: dot_accuracy
value: 99.67920792079208
- type: dot_ap
value: 89.27277565444479
- type: dot_f1
value: 83.9276990718124
- type: dot_precision
value: 82.04393505253104
- type: dot_recall
value: 85.9
- type: euclidean_accuracy
value: 99.74257425742574
- type: euclidean_ap
value: 93.17993008259062
- type: euclidean_f1
value: 86.69396110542476
- type: euclidean_precision
value: 88.78406708595388
- type: euclidean_recall
value: 84.7
- type: manhattan_accuracy
value: 99.74257425742574
- type: manhattan_ap
value: 93.14413755550099
- type: manhattan_f1
value: 86.82483594144371
- type: manhattan_precision
value: 87.66564729867483
- type: manhattan_recall
value: 86
- type: max_accuracy
value: 99.74356435643564
- type: max_ap
value: 93.17993008259062
- type: max_f1
value: 86.82483594144371
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.525863806168566
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.68850574423839
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.71580650644033
- type: mrr
value: 50.50971903913081
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.152190498799484
- type: cos_sim_spearman
value: 29.686180371952727
- type: dot_pearson
value: 27.248664793816342
- type: dot_spearman
value: 28.37748983721745
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.20400000000000001
- type: map_at_10
value: 1.6209999999999998
- type: map_at_100
value: 9.690999999999999
- type: map_at_1000
value: 23.733
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.885
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.56700000000001
- type: mrr_at_100
value: 86.56700000000001
- type: mrr_at_1000
value: 86.56700000000001
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 86.56700000000001
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 71.326
- type: ndcg_at_100
value: 54.208999999999996
- type: ndcg_at_1000
value: 49.252
- type: ndcg_at_3
value: 74.235
- type: ndcg_at_5
value: 73.833
- type: precision_at_1
value: 78
- type: precision_at_10
value: 74.8
- type: precision_at_100
value: 55.50000000000001
- type: precision_at_1000
value: 21.836
- type: precision_at_3
value: 78
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.20400000000000001
- type: recall_at_10
value: 1.894
- type: recall_at_100
value: 13.245999999999999
- type: recall_at_1000
value: 46.373
- type: recall_at_3
value: 0.613
- type: recall_at_5
value: 0.991
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.69999999999999
- type: precision
value: 94.11666666666667
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.20809248554913
- type: f1
value: 63.431048720066066
- type: precision
value: 61.69143958161298
- type: recall
value: 68.20809248554913
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.21951219512195
- type: f1
value: 66.82926829268293
- type: precision
value: 65.1260162601626
- type: recall
value: 71.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.26666666666667
- type: precision
value: 95.8
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.3
- type: f1
value: 99.06666666666666
- type: precision
value: 98.95
- type: recall
value: 99.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.63333333333333
- type: precision
value: 96.26666666666668
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.86666666666666
- type: precision
value: 94.31666666666668
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.01492537313433
- type: f1
value: 40.178867566927266
- type: precision
value: 38.179295828549556
- type: recall
value: 47.01492537313433
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.62537480063796
- type: precision
value: 82.44555555555554
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.48780487804879
- type: f1
value: 75.45644599303138
- type: precision
value: 73.37398373983739
- type: recall
value: 80.48780487804879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.95666666666666
- type: precision
value: 91.125
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.73754556500607
- type: f1
value: 89.65168084244632
- type: precision
value: 88.73025516403402
- type: recall
value: 91.73754556500607
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.04347826086956
- type: f1
value: 76.2128364389234
- type: precision
value: 74.2
- type: recall
value: 81.04347826086956
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.65217391304348
- type: f1
value: 79.4376811594203
- type: precision
value: 77.65797101449274
- type: recall
value: 83.65217391304348
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.5
- type: f1
value: 85.02690476190476
- type: precision
value: 83.96261904761904
- type: recall
value: 87.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.3
- type: f1
value: 86.52333333333333
- type: precision
value: 85.22833333333332
- type: recall
value: 89.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.01809408926418
- type: f1
value: 59.00594446432805
- type: precision
value: 56.827215807915444
- type: recall
value: 65.01809408926418
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.2
- type: f1
value: 88.58
- type: precision
value: 87.33333333333334
- type: recall
value: 91.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.199999999999996
- type: f1
value: 53.299166276284915
- type: precision
value: 51.3383908045977
- type: recall
value: 59.199999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.2
- type: precision
value: 90.25
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.76190476190476
- type: f1
value: 59.867110667110666
- type: precision
value: 58.07390192653351
- type: recall
value: 64.76190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.2
- type: f1
value: 71.48147546897547
- type: precision
value: 69.65409090909091
- type: recall
value: 76.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.8
- type: f1
value: 92.14
- type: precision
value: 91.35833333333333
- type: recall
value: 93.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.2
- type: precision
value: 96.85000000000001
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 92.93333333333334
- type: precision
value: 92.13333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.1
- type: f1
value: 69.14817460317461
- type: precision
value: 67.2515873015873
- type: recall
value: 74.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 94.01333333333335
- type: precision
value: 93.46666666666667
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.9
- type: f1
value: 72.07523809523809
- type: precision
value: 70.19777777777779
- type: recall
value: 76.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.31666666666666
- type: precision
value: 91.43333333333332
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.76666666666668
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.85714285714286
- type: f1
value: 90.92093441150045
- type: precision
value: 90.00449236298293
- type: recall
value: 92.85714285714286
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.16239316239316
- type: f1
value: 91.33903133903132
- type: precision
value: 90.56267806267806
- type: recall
value: 93.16239316239316
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.25666666666666
- type: precision
value: 89.25833333333334
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.22727272727272
- type: f1
value: 87.53030303030303
- type: precision
value: 86.37121212121211
- type: recall
value: 90.22727272727272
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.03563941299791
- type: f1
value: 74.7349505840072
- type: precision
value: 72.9035639412998
- type: recall
value: 79.03563941299791
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97
- type: f1
value: 96.15
- type: precision
value: 95.76666666666668
- type: recall
value: 97
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.26459143968872
- type: f1
value: 71.55642023346303
- type: precision
value: 69.7544932369835
- type: recall
value: 76.26459143968872
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.119658119658126
- type: f1
value: 51.65242165242165
- type: precision
value: 49.41768108434775
- type: recall
value: 58.119658119658126
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.3
- type: f1
value: 69.52055555555555
- type: precision
value: 67.7574938949939
- type: recall
value: 74.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.31666666666666
- type: precision
value: 92.60000000000001
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.63551401869158
- type: f1
value: 72.35202492211837
- type: precision
value: 70.60358255451713
- type: recall
value: 76.63551401869158
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.4811111111111
- type: precision
value: 87.7452380952381
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95
- type: f1
value: 93.60666666666667
- type: precision
value: 92.975
- type: recall
value: 95
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 63.01595782872099
- type: precision
value: 61.596587301587306
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.52999999999999
- type: precision
value: 94
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.28999999999999
- type: precision
value: 92.675
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.75
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.9
- type: f1
value: 89.83
- type: precision
value: 88.92
- type: recall
value: 91.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34222222222223
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.333333333333336
- type: f1
value: 55.31203703703703
- type: precision
value: 53.39971108326371
- type: recall
value: 60.333333333333336
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 12.9
- type: f1
value: 11.099861903031458
- type: precision
value: 10.589187932631877
- type: recall
value: 12.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7
- type: f1
value: 83.0152380952381
- type: precision
value: 81.37833333333333
- type: recall
value: 86.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.39285714285714
- type: f1
value: 56.832482993197274
- type: precision
value: 54.56845238095237
- type: recall
value: 63.39285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.73765093304062
- type: f1
value: 41.555736920720456
- type: precision
value: 39.06874531737319
- type: recall
value: 48.73765093304062
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 41.099999999999994
- type: f1
value: 36.540165945165946
- type: precision
value: 35.05175685425686
- type: recall
value: 41.099999999999994
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.42333333333333
- type: precision
value: 92.75833333333333
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.63333333333334
- type: precision
value: 93.01666666666665
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.64833333333334
- type: precision
value: 71.90282106782105
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.4
- type: f1
value: 54.90521367521367
- type: precision
value: 53.432840025471606
- type: recall
value: 59.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.6
- type: precision
value: 96.2
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 62.25926129426129
- type: precision
value: 60.408376623376626
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.60666666666667
- type: precision
value: 86.45277777777778
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97
- type: precision
value: 96.65
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39746031746031
- type: precision
value: 90.6125
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 32.11678832116788
- type: f1
value: 27.210415386260234
- type: precision
value: 26.20408990846947
- type: recall
value: 32.11678832116788
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.787319277832475
- type: precision
value: 6.3452094433344435
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.08
- type: precision
value: 94.61666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.3
- type: f1
value: 93.88333333333333
- type: precision
value: 93.18333333333332
- type: recall
value: 95.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.11904761904762
- type: f1
value: 80.69444444444444
- type: precision
value: 78.72023809523809
- type: recall
value: 85.11904761904762
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 11.1
- type: f1
value: 9.276381801735853
- type: precision
value: 8.798174603174601
- type: recall
value: 11.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.56107660455487
- type: f1
value: 58.70433569191332
- type: precision
value: 56.896926581464015
- type: recall
value: 63.56107660455487
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.10000000000001
- type: precision
value: 92.35
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 96.01222222222222
- type: precision
value: 95.67083333333332
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.2
- type: f1
value: 7.911555250305249
- type: precision
value: 7.631246556216846
- type: recall
value: 9.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.48917748917748
- type: f1
value: 72.27375798804371
- type: precision
value: 70.14430014430013
- type: recall
value: 77.48917748917748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.09923664122137
- type: f1
value: 72.61541257724463
- type: precision
value: 70.8998380754106
- type: recall
value: 77.09923664122137
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2532751091703
- type: f1
value: 97.69529354682193
- type: precision
value: 97.42843279961184
- type: recall
value: 98.2532751091703
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.8
- type: f1
value: 79.14672619047619
- type: precision
value: 77.59489247311828
- type: recall
value: 82.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.35028248587571
- type: f1
value: 92.86252354048965
- type: precision
value: 92.2080979284369
- type: recall
value: 94.35028248587571
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.282429263935621
- type: precision
value: 5.783274240739785
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 91.025
- type: precision
value: 90.30428571428571
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81
- type: f1
value: 77.8232380952381
- type: precision
value: 76.60194444444444
- type: recall
value: 81
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91
- type: f1
value: 88.70857142857142
- type: precision
value: 87.7
- type: recall
value: 91
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.3
- type: precision
value: 94.76666666666667
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.1
- type: f1
value: 7.001008218834307
- type: precision
value: 6.708329562594269
- type: recall
value: 8.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.1313672922252
- type: f1
value: 84.09070598748882
- type: precision
value: 82.79171454104429
- type: recall
value: 87.1313672922252
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.73333333333332
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.29249011857708
- type: f1
value: 36.981018542283365
- type: precision
value: 35.415877813576024
- type: recall
value: 42.29249011857708
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.80281690140845
- type: f1
value: 80.86854460093896
- type: precision
value: 79.60093896713614
- type: recall
value: 83.80281690140845
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.26946107784431
- type: f1
value: 39.80235464678088
- type: precision
value: 38.14342660001342
- type: recall
value: 45.26946107784431
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.9
- type: precision
value: 92.26666666666668
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.93103448275862
- type: f1
value: 33.15192743764172
- type: precision
value: 31.57456528146183
- type: recall
value: 37.93103448275862
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.01408450704226
- type: f1
value: 63.41549295774648
- type: precision
value: 61.342778895595806
- type: recall
value: 69.01408450704226
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.66666666666667
- type: f1
value: 71.60705960705961
- type: precision
value: 69.60683760683762
- type: recall
value: 76.66666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.48333333333333
- type: precision
value: 93.83333333333333
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 52.81837160751566
- type: f1
value: 48.435977731384824
- type: precision
value: 47.11291973845539
- type: recall
value: 52.81837160751566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.9
- type: f1
value: 38.88962621607783
- type: precision
value: 36.95936507936508
- type: recall
value: 44.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.55374592833876
- type: f1
value: 88.22553125484721
- type: precision
value: 87.26927252985884
- type: recall
value: 90.55374592833876
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.13333333333333
- type: precision
value: 92.45333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.99666666666667
- type: precision
value: 91.26666666666668
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.03937007874016
- type: f1
value: 81.75853018372703
- type: precision
value: 80.34120734908137
- type: recall
value: 85.03937007874016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.3
- type: f1
value: 85.5
- type: precision
value: 84.25833333333334
- type: recall
value: 88.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.51246537396122
- type: f1
value: 60.02297410192148
- type: precision
value: 58.133467727289236
- type: recall
value: 65.51246537396122
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.89
- type: precision
value: 94.39166666666667
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.692307692307686
- type: f1
value: 53.162393162393165
- type: precision
value: 51.70673076923077
- type: recall
value: 57.692307692307686
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.60000000000001
- type: f1
value: 89.21190476190475
- type: precision
value: 88.08666666666667
- type: recall
value: 91.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88
- type: f1
value: 85.47
- type: precision
value: 84.43266233766234
- type: recall
value: 88
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 90.64999999999999
- type: precision
value: 89.68333333333332
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.30660377358491
- type: f1
value: 76.33044137466307
- type: precision
value: 74.78970125786164
- type: recall
value: 80.30660377358491
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.44
- type: precision
value: 94.99166666666666
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.53284671532847
- type: f1
value: 95.37712895377129
- type: precision
value: 94.7992700729927
- type: recall
value: 96.53284671532847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89
- type: f1
value: 86.23190476190476
- type: precision
value: 85.035
- type: recall
value: 89
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.585
- type: map_at_10
value: 9.012
- type: map_at_100
value: 14.027000000000001
- type: map_at_1000
value: 15.565000000000001
- type: map_at_3
value: 5.032
- type: map_at_5
value: 6.657
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 45.377
- type: mrr_at_100
value: 46.119
- type: mrr_at_1000
value: 46.127
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 42.585
- type: ndcg_at_1
value: 27.551
- type: ndcg_at_10
value: 23.395
- type: ndcg_at_100
value: 33.342
- type: ndcg_at_1000
value: 45.523
- type: ndcg_at_3
value: 25.158
- type: ndcg_at_5
value: 23.427
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 21.429000000000002
- type: precision_at_100
value: 6.714
- type: precision_at_1000
value: 1.473
- type: precision_at_3
value: 27.211000000000002
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.585
- type: recall_at_10
value: 15.418999999999999
- type: recall_at_100
value: 42.485
- type: recall_at_1000
value: 79.536
- type: recall_at_3
value: 6.239999999999999
- type: recall_at_5
value: 8.996
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.3234
- type: ap
value: 14.361688653847423
- type: f1
value: 54.819068624319044
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.97792869269949
- type: f1
value: 62.28965628513728
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 38.90540145385218
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.53513739047506
- type: cos_sim_ap
value: 75.27741586677557
- type: cos_sim_f1
value: 69.18792902473774
- type: cos_sim_precision
value: 67.94708725515136
- type: cos_sim_recall
value: 70.47493403693932
- type: dot_accuracy
value: 84.7052512368123
- type: dot_ap
value: 69.36075482849378
- type: dot_f1
value: 64.44688376631296
- type: dot_precision
value: 59.92288500793831
- type: dot_recall
value: 69.70976253298153
- type: euclidean_accuracy
value: 86.60666388508076
- type: euclidean_ap
value: 75.47512772621097
- type: euclidean_f1
value: 69.413872536473
- type: euclidean_precision
value: 67.39562624254472
- type: euclidean_recall
value: 71.55672823218997
- type: manhattan_accuracy
value: 86.52917684925792
- type: manhattan_ap
value: 75.34000110496703
- type: manhattan_f1
value: 69.28489190226429
- type: manhattan_precision
value: 67.24608889992551
- type: manhattan_recall
value: 71.45118733509234
- type: max_accuracy
value: 86.60666388508076
- type: max_ap
value: 75.47512772621097
- type: max_f1
value: 69.413872536473
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.01695967710637
- type: cos_sim_ap
value: 85.8298270742901
- type: cos_sim_f1
value: 78.46988128389272
- type: cos_sim_precision
value: 74.86017897091722
- type: cos_sim_recall
value: 82.44533415460425
- type: dot_accuracy
value: 88.19420188613343
- type: dot_ap
value: 83.82679165901324
- type: dot_f1
value: 76.55833777304208
- type: dot_precision
value: 75.6884875846501
- type: dot_recall
value: 77.44841392054204
- type: euclidean_accuracy
value: 89.03054294252338
- type: euclidean_ap
value: 85.89089555185325
- type: euclidean_f1
value: 78.62997658079624
- type: euclidean_precision
value: 74.92329149232914
- type: euclidean_recall
value: 82.72251308900523
- type: manhattan_accuracy
value: 89.0266620095471
- type: manhattan_ap
value: 85.86458997929147
- type: manhattan_f1
value: 78.50685331000291
- type: manhattan_precision
value: 74.5499861534201
- type: manhattan_recall
value: 82.90729904527257
- type: max_accuracy
value: 89.03054294252338
- type: max_ap
value: 85.89089555185325
- type: max_f1
value: 78.62997658079624
---
# chris-code/multilingual-e5-large-Q8_0-GGUF
This model was converted to GGUF format from [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
binbin83/setfit-MiniLM-dialog-act-13nov | binbin83 | text-classification | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"fr",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,706,687,715,000 | 2024-01-31T08:48:33 | 6 | 2 | ---
language:
- fr
library_name: setfit
license: apache-2.0
metrics:
- f1
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# binbin83/setfit-MiniLM-dialog-act-13nov
The model is a multi-class, multi-label text classifier that distinguishes the different dialog acts in semi-structured interviews. The data used for fine-tuning were in French.
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("binbin83/setfit-MiniLM-dialog-act-13nov")
# Mapping from label name to index used during training
label_dict = {'Introductory': 0, 'FollowUp': 1, 'Probing': 2, 'Specifying': 3, 'Structuring': 4, 'DirectQuestion': 5, 'Interpreting': 6, 'Ending': 7}
# Run inference (each prediction is a multi-hot vector with one flag per label)
preds = model(["Vous pouvez continuer", "Pouvez-vous me dire précisément quel a été l'ordre chronologique des événements ?"])
# Convert each binary prediction vector back to the list of active label names
labels = [[label for label, p in zip(label_dict, pred) if p] for pred in preds]
```
## Labels and training data
Brinkmann, S., & Kvale, S. (1) define a classification of dialog acts in interviews:
* Introductory: Can you tell me about ... (something specific)?,
* Follow-up verbal cues: repeat back keywords to participants, ask for reflection or unpacking of point just made,
* Probing: Can you say a little more about X? Why do you think X...? (for example, Why do you think X is that way? Why do you think X is important?),
* Specifying: Can you give me an example of X?,
* Indirect: How do you think other people view X?,
* Structuring: Thank you for that. I’d like to move to another topic...
* Direct (later stages): When you mention X, are you thinking like Y or Z?,
* Interpreting: So, what I have gathered is that...,
* Ending: I have asked all the questions I had, but I wanted to check whether there is something else about your experience/understanding we haven’t covered? Do you have any questions for me?,
On our corpus of interviews, we manually labeled 500 turns of speech using this classification. We use 70% of the data for training and evaluate on the remaining 30%.
The entire corpus is composed of the following examples:
('Probing', 146), ('Specifying', 135), ('FollowUp', 134), ('DirectQuestion', 125), ('Interpreting', 44), ('Structuring', 27), ('Introductory', 12), ('Ending', 12)
(1) Brinkmann, S., & Kvale, S. (2015). InterViews: Learning the Craft of Qualitative Research Interviewing. (3. ed.) SAGE Publications.
## Training and Performances
We fine-tune "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
using SetFit with CosineSimilarityLoss and these parameters: epochs = 5, batch_size = 32, num_iterations = 20.
On the test set of our custom dataset, we get:
{'f1': 0.65, 'f1_micro': 0.64, 'f1_sample': 0.64, 'accuracy': 0.475}
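For reference, here is a minimal sketch of how this fine-tuning could be reproduced with the SetFit trainer. The hyperparameters come from above; the CSV file names and column layout (a "text" column plus a multi-hot "label" vector) are assumptions for illustration.
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Hypothetical files: each row holds a "text" column and a multi-hot "label" vector
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    multi_target_strategy="one-vs-rest",  # multi-label classification head
)

trainer = SetFitTrainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss_class=CosineSimilarityLoss,
    num_epochs=5,
    batch_size=32,
    num_iterations=20,  # number of contrastive pairs generated per example
)
trainer.train()
print(trainer.evaluate())
```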
## BibTeX entry and citation info
To cite the current study:
```bibtex
@article{
doi = {conference paper},
url = {https://arxiv.org/abs/2209.11055},
author = {Quillivic Robin, Charles Payet},
keywords = {NLP, JADT},
title = {Semi-Structured Interview Analysis: A French NLP Toolbox for Social Sciences},
publisher = {JADT},
year = {2024},
copyright = {Creative Commons Attribution 4.0 International}
}
```
To cite the setFit paper:
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` | [
"CRAFT"
] | Non_BioNLP |
RichardErkhov/XeAI_-_LLaMa_3.2_3B_Instruct_Text2SQL_Legacy-8bits | RichardErkhov | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,741,756,722,000 | 2025-03-12T05:20:58 | 3 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LLaMa_3.2_3B_Instruct_Text2SQL_Legacy - bnb 8bits
- Model creator: https://huggingface.co/XeAI/
- Original model: https://huggingface.co/XeAI/LLaMa_3.2_3B_Instruct_Text2SQL_Legacy/
Original model description:
---
library_name: transformers
license: mit
datasets:
- gretelai/synthetic_text_to_sql
pipeline_tag: text-generation
---
# Model Card for LLaMA 3.2 3B Instruct Text2SQL
## Model Details
### Model Description
This is a fine-tuned version of the LLaMA 3.2 3B Instruct model, specifically optimized for text-to-SQL generation tasks. The model has been trained to convert natural language queries into structured SQL commands.
- **Developed by:** Zhafran Ramadhan - XeAI
- **Model type:** Decoder-only Language Model
- **Language(s):** English (multilingual)
- **License:** MIT
- **Finetuned from model:** LLaMA 3.2 3B Instruct
- **Log WandB Report:** [WandB Report](https://wandb.ai/zhafranr/LLaMA_3-2_3B_Instruct_FineTune_Text2SQL/reports/LLaMa-3-2-3B-Instruct-Fine-Tune-Text2SQL--VmlldzoxMDA2NDkzNA)
### Model Sources
- **Repository:** [LLaMA 3.2 3B Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- **Dataset:** [Synthethic Text2SQL](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql)
## How to Get Started with the Model
### Installation
```bash
pip install transformers torch accelerate
```
### Input Format and Usage
The model expects input in a specific format following this template:
```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
[System context and database schema]
<|eot_id|><|start_header_id|>user<|end_header_id|>
[User query]
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
### Basic Usage
```python
from transformers import pipeline
import torch
# Initialize the pipeline
generator = pipeline(
"text-generation",
model="XeAI/LLaMa_3.2_3B_Instruct_Text2SQL", # Replace with your model ID
torch_dtype=torch.float16,
device_map="auto"
)
def generate_sql_query(context, question):
# Format the prompt according to the training template
prompt = f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 07 Nov 2024
You are a specialized SQL query generator focused solely on the provided RAG database. Your tasks are:
1. Generate SQL queries based on user requests that are related to querying the RAG database.
2. Only output the SQL query itself, without any additional explanation or commentary.
3. Use the context provided from the RAG database to craft accurate queries.
Context: {context}
<|eot_id|><|start_header_id|>user<|end_header_id|>
{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""
response = generator(
prompt,
max_length=500,
num_return_sequences=1,
temperature=0.1,
do_sample=True,
pad_token_id=generator.tokenizer.eos_token_id
)
return response[0]['generated_text']
# Example usage
context = """CREATE TABLE upgrades (id INT, cost FLOAT, type TEXT);
INSERT INTO upgrades (id, cost, type) VALUES
(1, 500, 'Insulation'),
(2, 1000, 'HVAC'),
(3, 1500, 'Lighting');"""
questions = [
"Find the energy efficiency upgrades with the highest cost and their types.",
"Show me all upgrades costing less than 1000 dollars.",
"Calculate the average cost of all upgrades."
]
for question in questions:
sql = generate_sql_query(context, question)
print(f"\nQuestion: {question}")
print(f"Generated SQL: {sql}\n")
```
### Advanced Usage with Custom System Prompt
```python
def generate_sql_with_custom_prompt(context, question, custom_system_prompt=""):
base_prompt = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 07 Nov 2024
You are a specialized SQL query generator focused solely on the provided RAG database."""
full_prompt = f"""{base_prompt}
{custom_system_prompt}
Context: {context}
<|eot_id|><|start_header_id|>user<|end_header_id|>
{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""
response = generator(
full_prompt,
max_length=500,
num_return_sequences=1,
temperature=0.1,
do_sample=True,
pad_token_id=generator.tokenizer.eos_token_id
)
return response[0]['generated_text']
```
### Best Practices
1. **Input Formatting**:
- Always include the special tokens (<|begin_of_text|>, <|eot_id|>, etc.)
- Provide complete database schema in context
- Keep questions clear and focused on data retrieval
2. **Parameter Configuration**:
- Use temperature=0.1 for consistent SQL generation
- Adjust max_length based on expected query complexity
- Enable do_sample for more natural completions
3. **Context Management**:
- Include relevant table schemas
- Provide sample data when needed
- Keep context concise but complete
## Uses
### Direct Use
The model is designed for converting natural language questions into SQL queries. It can be used for:
- Database query generation from natural language
- SQL query assistance
- Data analysis automation
### Out-of-Scope Use
- Production deployment without human validation
- Critical decision-making without human oversight
- Direct database execution without query validation
## Training Details
### Training Data
- Dataset: [Synthethic Text2SQL](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql)
- Data preprocessing: Standard text-to-SQL formatting
### Training Procedure
#### Training Hyperparameters
- **Total Steps:** 4,149
- **Final Training Loss:** 0.1168
- **Evaluation Loss:** 0.2125
- **Learning Rate:** Dynamic with final LR = 0
- **Epochs:** 2.99
- **Gradient Norm:** 1.3121
#### Performance Metrics
- **Training Samples/Second:** 6.291
- **Evaluation Samples/Second:** 19.325
- **Steps/Second:** 3.868
- **Total FLOPS:** 1.92e18
#### Training Infrastructure
- **Hardware:** Single NVIDIA H100 GPU
- **Training Duration:** 5-6 hours
- **Total Runtime:** 16,491.75 seconds
- **Model Preparation Time:** 0.0051 seconds
## Evaluation
### Metrics
The model's performance was tracked using several key metrics:
- **Training Loss:** Started at ~1.2, converged to 0.1168
- **Evaluation Loss:** 0.2125
- **Processing Efficiency:** 19.325 samples per second during evaluation
### Results Summary
- Achieved stable convergence after ~4000 steps
- Maintained consistent performance metrics throughout training
- Shows good balance between training and evaluation loss
## Environmental Impact
- **Hardware Type:** NVIDIA H100 GPU
- **Hours used:** ~6 hours
- **Training Location:** [GPUaaS](https://www.runpod.io)
## Technical Specifications
### Compute Infrastructure
- **GPU:** NVIDIA H100
- **Training Duration:** 5-6 hours
- **Total Steps:** 4,149
- **FLOPs Utilized:** 1.92e18
## Model Card Contact
[Contact information to be added by Zhafran Ramadhan]
---
*Note: This model card follows the guidelines set by the ML community for responsible AI development and deployment.*
| [
"CRAFT"
] | Non_BioNLP |
Euanyu/GERBERA-Celltype | Euanyu | null | [
"transformers",
"pytorch",
"roberta",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,716,326,921,000 | 2024-05-21T21:32:09 | 6 | 0 | ---
license: mit
---
The GERBERA BioNER model for identifying cell type entities, trained on the JNLPBA-ct dataset and GUM-Time.
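A minimal usage sketch, assuming the checkpoint ships a token-classification head and its tokenizer (the example sentence is illustrative):
```python
from transformers import pipeline

# Assumes the repository contains a token-classification model and tokenizer
ner = pipeline(
    "token-classification",
    model="Euanyu/GERBERA-Celltype",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Regulatory T cells suppress the activation of effector T lymphocytes."))
```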
"JNLPBA"
] | BioNLP |
pruas/BENT-PubMedBERT-NER-Chemical | pruas | token-classification | [
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,673,468,374,000 | 2024-03-01T13:56:32 | 687 | 8 | ---
language:
- en
license: apache-2.0
pipeline_tag: token-classification
---
Named Entity Recognition (NER) model to recognize chemical entities.
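A minimal usage sketch with the transformers pipeline (the example sentence is illustrative; entity label names come from the model's configuration):
```python
from transformers import pipeline

# Chemical NER model fine-tuned from PubMedBERT
ner = pipeline(
    "token-classification",
    model="pruas/BENT-PubMedBERT-NER-Chemical",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Treatment with tamoxifen and cisplatin reduced tumor growth."))
```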
Please cite our work:
```
@article{NILNKER2022,
title = {NILINKER: Attention-based approach to NIL Entity Linking},
journal = {Journal of Biomedical Informatics},
volume = {132},
pages = {104137},
year = {2022},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2022.104137},
url = {https://www.sciencedirect.com/science/article/pii/S1532046422001526},
author = {Pedro Ruas and Francisco M. Couto},
}
```
[PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) fine-tuned on the following datasets:
- [Chemdner patents CEMP corpus](https://biocreative.bioinformatics.udel.edu/resources/corpora/chemdner-patents-cemp-corpus/) (train, dev, test sets)
- [DDI corpus](https://github.com/isegura/DDICorpus) (train, dev, test sets): entity types "GROUP", "DRUG", "DRUG_N"
- [GREC Corpus](http://www.nactem.ac.uk/GREC/standoff.php) (train, dev, test sets): entity type "organic_compounds"
- [MLEE](http://nactem.ac.uk/MLEE/) (train, dev, test sets): entity type "Drug or compound"
- [NLM-CHEM](https://ftp.ncbi.nlm.nih.gov/pub/lu/NLMChem/) (train, dev, test sets)
- [CHEMDNER](https://biocreative.bioinformatics.udel.edu/resources/) (train, dev, test sets)
- [Chebi Corpus](http://www.nactem.ac.uk/chebi/) (train, dev, test sets): entity types "Metabolite", "Chemical"
- [PHAEDRA](http://www.nactem.ac.uk/PHAEDRA/) (train, dev, test sets): entity type "Pharmalogical_substance"
- [Chemprot](https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vi/track-5/) (train, dev, test sets)
- [PGx Corpus](https://github.com/practikpharma/PGxCorpus) (train, dev, test sets): entity type "Chemical"
- [BioNLP11ID](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP11ID-chem-IOB) (train, dev, test sets): entity type "Chemical"
- BioNLP13CG (train, dev, test sets): entity type "Chemical"
- [BC4CHEMD](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BC4CHEMD) (train, dev, test sets)
- [CRAFT corpus](https://github.com/UCDenver-ccp/CRAFT/tree/master/concept-annotation) (train, dev, test sets): entity type "ChEBI"
- BC5CDR (train, dev, test sets): entity type "Chemical" | [
"BC5CDR",
"CHEBI CORPUS",
"CHEMDNER",
"CRAFT",
"CHEMPROT",
"DDI CORPUS",
"MLEE",
"NLM-CHEM"
] | TBD |
mradermacher/BC5CDR-mistral-False-Cosine-GGUF | mradermacher | null | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,728,369,306,000 | 2024-10-08T07:00:07 | 105 | 0 | ---
base_model: Motasem7/BC5CDR-mistral-False-Cosine
language:
- en
library_name: transformers
tags: []
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Motasem7/BC5CDR-mistral-False-Cosine
<!-- provided-files -->
Weighted/imatrix quants do not seem to be available (by me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BC5CDR-mistral-False-Cosine-GGUF/resolve/main/BC5CDR-mistral-False-Cosine.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| [
"BC5CDR"
] | BioNLP |
Cosmo-Hug/FeverDream | Cosmo-Hug | text-to-image | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | 1,680,104,400,000 | 2023-04-01T16:46:50 | 63 | 10 | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# FeverDream
This is a fine-tune of SD1.5, designed as a general-purpose model. It was trained on high-quality photographs and traditional artworks that were upscaled and denoised before training to get the sharpest, cleanest results possible.
It's trained at 576 resolution using the offset-noise fix, so generations are sharp and detailed with vibrant colors, deep blacks, and well-balanced contrast.
This model was not trained on A.I.-generated images or merged with any other models, which means your images won't have that green/aqua color cast seen in so many models today.
The largest portion of the dataset consists of photographs of women, men, gorgeous landscapes, and luxurious home/cabin/hotel interiors, some abandoned buildings and cityscapes, followed by a few unique art styles that can most easily be discovered by prompting with the commonly used words below.
Some tips for realistic images:
Avoid using the word "realistic"; part of the dataset contains "realistic porcelain dolls" and the weights for "realistic" have shifted in the direction of the dolls. It shouldn't be an issue, but if your faces look a bit smoother than you'd like, try throwing the word "doll" into the negative prompt.
This model is great at both highly detailed realism and stylized images when prompted correctly.
A few commonly used words found in the training dataset include:
Papercut
Liquid Splash
Realistic Porcelain Doll
Interior
Landscape
...and a few others for you to discover.
If you're so inclined to leave a tip, I'm happy to accept Monero at this address:
82s3fk8bQB2DHJ3r9idZUsST1Dvf5cKKC6Fu87rYgV9dAFbCbAcXMPXaP59yDwWzRXfYfTBszHZno6xGwDb17xUzEkCsAah
Thanks for checking out my work and enjoy!
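For programmatic use, here is a minimal sketch of loading this checkpoint with diffusers; the settings below are illustrative, not the exact ones used for the examples that follow.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the FeverDream checkpoint in half precision
pipe = StableDiffusionPipeline.from_pretrained("Cosmo-Hug/FeverDream", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe(
    "a new york manhattan luxury bedroom in the spring, hyper detailed, sharp focus",
    negative_prompt="lowres, text, error, cropped, worst quality, low quality, watermark",
    num_inference_steps=30,
    guidance_scale=5,
    height=576,
    width=576,
).images[0]
image.save("feverdream_sample.png")
```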
## Examples
Below are some examples of images generated using this model:
**Liquid Splash:**

```
close up of a young woman wearing a black and gold liquid splash dress, pretty face, detailed eyes, soft lips, floating in outer space and planets in the background, fluid, wet, dripping, waxy, smooth, realistic, octane render
Negative prompt: two, few, couple, group, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, dehydrated, extra limbs, clone, disfigured, gross, malformed, extra arms, extra legs, fingers, long neck, username, watermark, signature
Steps: 30, Sampler: DPM++ SDE, CFG scale: 7
```
**Party Skelly:**

```
a 3d render of a cyberpunk gothic skull face surrounded by neon lines and veins, in the style ayami kojima with headdress made out of sushi
Negative prompt: two, few, couple, double, blurry background, low depth of field, bokeh, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, dehydrated, extra limbs, clone, disfigured, gross, malformed, extra arms, extra legs, fingers, long neck, username, watermark, signature
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 5
```
**Dirty Girl:**

```
close up photo action shot of a buxom woman cyborg, beautiful, sweaty, sweat glistening, science fiction movie, old scars, suspended movement, slow motion, visual flow, rough and tumble, dirt flying, pale, big gray eyes, piercing eyes, undercut hair, (highly detailed:1.1), dramatic, hi-res, (film grain), high ISO, 35mm, gorgeous, outdoor lighting, perfect face, delicate features, bloody nose, rusty tattered and dented metal parts, wasteland, dramatic rim lighting
Negative prompt: doll, blurry background, low depth of field, bokeh, lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 5
```
**Apartment:**

```
a new york manhattan luxury bedroom in the spring, hyper detailed, ultra realistic, sharp focus, octane render, volumetric, ray tracing, artstation trending, cgsociety, insanely detailed studio photography hdr, 8k, cinematic lighting, dramatic lighting, Cannon EOS 5D, 85mm
Negative prompt: lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, dehydrated, extra limbs, clone, disfigured, gross, malformed, extra arms, extra legs, fingers, long neck, username, watermark, signature
Steps: 30, Sampler: DPM2, CFG scale: 5
``` | [
"MONERO"
] | Non_BioNLP |
jncraton/stella-base-en-v2-ct2-int8 | jncraton | feature-extraction | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"arxiv:1612.00796",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,713,636,080,000 | 2024-04-20T18:02:00 | 8 | 0 | ---
language:
- en
license: mit
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: stella-base-en-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.19402985074628
- type: ap
value: 40.43267503017359
- type: f1
value: 71.15585210518594
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.256675
- type: ap
value: 90.00824833079179
- type: f1
value: 93.2473146151734
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.612
- type: f1
value: 48.530785631574304
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.411
- type: map_at_10
value: 52.673
- type: map_at_100
value: 53.410999999999994
- type: map_at_1000
value: 53.415
- type: map_at_3
value: 48.495
- type: map_at_5
value: 51.183
- type: mrr_at_1
value: 37.838
- type: mrr_at_10
value: 52.844
- type: mrr_at_100
value: 53.581999999999994
- type: mrr_at_1000
value: 53.586
- type: mrr_at_3
value: 48.672
- type: mrr_at_5
value: 51.272
- type: ndcg_at_1
value: 37.411
- type: ndcg_at_10
value: 60.626999999999995
- type: ndcg_at_100
value: 63.675000000000004
- type: ndcg_at_1000
value: 63.776999999999994
- type: ndcg_at_3
value: 52.148
- type: ndcg_at_5
value: 57.001999999999995
- type: precision_at_1
value: 37.411
- type: precision_at_10
value: 8.578
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.91
- type: precision_at_5
value: 14.908
- type: recall_at_1
value: 37.411
- type: recall_at_10
value: 85.775
- type: recall_at_100
value: 98.86200000000001
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 62.731
- type: recall_at_5
value: 74.53800000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.24219029437865
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.474604844291726
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.720542706366054
- type: mrr
value: 75.59633733456448
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.31345008397868
- type: cos_sim_spearman
value: 85.94292212320399
- type: euclidean_pearson
value: 85.03974302774525
- type: euclidean_spearman
value: 85.88087251659051
- type: manhattan_pearson
value: 84.91900996712951
- type: manhattan_spearman
value: 85.96701905781116
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.72727272727273
- type: f1
value: 84.29572512364581
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.55532460397536
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.91195973591251
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.822
- type: map_at_10
value: 44.139
- type: map_at_100
value: 45.786
- type: map_at_1000
value: 45.906000000000006
- type: map_at_3
value: 40.637
- type: map_at_5
value: 42.575
- type: mrr_at_1
value: 41.059
- type: mrr_at_10
value: 50.751000000000005
- type: mrr_at_100
value: 51.548
- type: mrr_at_1000
value: 51.583999999999996
- type: mrr_at_3
value: 48.236000000000004
- type: mrr_at_5
value: 49.838
- type: ndcg_at_1
value: 41.059
- type: ndcg_at_10
value: 50.573
- type: ndcg_at_100
value: 56.25
- type: ndcg_at_1000
value: 58.004
- type: ndcg_at_3
value: 45.995000000000005
- type: ndcg_at_5
value: 48.18
- type: precision_at_1
value: 41.059
- type: precision_at_10
value: 9.757
- type: precision_at_100
value: 1.609
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_3
value: 22.222
- type: precision_at_5
value: 16.023
- type: recall_at_1
value: 32.822
- type: recall_at_10
value: 61.794000000000004
- type: recall_at_100
value: 85.64699999999999
- type: recall_at_1000
value: 96.836
- type: recall_at_3
value: 47.999
- type: recall_at_5
value: 54.376999999999995
- type: map_at_1
value: 29.579
- type: map_at_10
value: 39.787
- type: map_at_100
value: 40.976
- type: map_at_1000
value: 41.108
- type: map_at_3
value: 36.819
- type: map_at_5
value: 38.437
- type: mrr_at_1
value: 37.516
- type: mrr_at_10
value: 45.822
- type: mrr_at_100
value: 46.454
- type: mrr_at_1000
value: 46.495999999999995
- type: mrr_at_3
value: 43.556
- type: mrr_at_5
value: 44.814
- type: ndcg_at_1
value: 37.516
- type: ndcg_at_10
value: 45.5
- type: ndcg_at_100
value: 49.707
- type: ndcg_at_1000
value: 51.842
- type: ndcg_at_3
value: 41.369
- type: ndcg_at_5
value: 43.161
- type: precision_at_1
value: 37.516
- type: precision_at_10
value: 8.713
- type: precision_at_100
value: 1.38
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 20.233999999999998
- type: precision_at_5
value: 14.280000000000001
- type: recall_at_1
value: 29.579
- type: recall_at_10
value: 55.458
- type: recall_at_100
value: 73.49799999999999
- type: recall_at_1000
value: 87.08200000000001
- type: recall_at_3
value: 42.858000000000004
- type: recall_at_5
value: 48.215
- type: map_at_1
value: 40.489999999999995
- type: map_at_10
value: 53.313
- type: map_at_100
value: 54.290000000000006
- type: map_at_1000
value: 54.346000000000004
- type: map_at_3
value: 49.983
- type: map_at_5
value: 51.867
- type: mrr_at_1
value: 46.27
- type: mrr_at_10
value: 56.660999999999994
- type: mrr_at_100
value: 57.274
- type: mrr_at_1000
value: 57.301
- type: mrr_at_3
value: 54.138
- type: mrr_at_5
value: 55.623999999999995
- type: ndcg_at_1
value: 46.27
- type: ndcg_at_10
value: 59.192
- type: ndcg_at_100
value: 63.026
- type: ndcg_at_1000
value: 64.079
- type: ndcg_at_3
value: 53.656000000000006
- type: ndcg_at_5
value: 56.387
- type: precision_at_1
value: 46.27
- type: precision_at_10
value: 9.511
- type: precision_at_100
value: 1.23
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 24.096
- type: precision_at_5
value: 16.476
- type: recall_at_1
value: 40.489999999999995
- type: recall_at_10
value: 73.148
- type: recall_at_100
value: 89.723
- type: recall_at_1000
value: 97.073
- type: recall_at_3
value: 58.363
- type: recall_at_5
value: 65.083
- type: map_at_1
value: 26.197
- type: map_at_10
value: 35.135
- type: map_at_100
value: 36.14
- type: map_at_1000
value: 36.216
- type: map_at_3
value: 32.358
- type: map_at_5
value: 33.814
- type: mrr_at_1
value: 28.475
- type: mrr_at_10
value: 37.096000000000004
- type: mrr_at_100
value: 38.006
- type: mrr_at_1000
value: 38.06
- type: mrr_at_3
value: 34.52
- type: mrr_at_5
value: 35.994
- type: ndcg_at_1
value: 28.475
- type: ndcg_at_10
value: 40.263
- type: ndcg_at_100
value: 45.327
- type: ndcg_at_1000
value: 47.225
- type: ndcg_at_3
value: 34.882000000000005
- type: ndcg_at_5
value: 37.347
- type: precision_at_1
value: 28.475
- type: precision_at_10
value: 6.249
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 14.689
- type: precision_at_5
value: 10.237
- type: recall_at_1
value: 26.197
- type: recall_at_10
value: 54.17999999999999
- type: recall_at_100
value: 77.768
- type: recall_at_1000
value: 91.932
- type: recall_at_3
value: 39.804
- type: recall_at_5
value: 45.660000000000004
- type: map_at_1
value: 16.683
- type: map_at_10
value: 25.013999999999996
- type: map_at_100
value: 26.411
- type: map_at_1000
value: 26.531
- type: map_at_3
value: 22.357
- type: map_at_5
value: 23.982999999999997
- type: mrr_at_1
value: 20.896
- type: mrr_at_10
value: 29.758000000000003
- type: mrr_at_100
value: 30.895
- type: mrr_at_1000
value: 30.964999999999996
- type: mrr_at_3
value: 27.177
- type: mrr_at_5
value: 28.799999999999997
- type: ndcg_at_1
value: 20.896
- type: ndcg_at_10
value: 30.294999999999998
- type: ndcg_at_100
value: 36.68
- type: ndcg_at_1000
value: 39.519
- type: ndcg_at_3
value: 25.480999999999998
- type: ndcg_at_5
value: 28.027
- type: precision_at_1
value: 20.896
- type: precision_at_10
value: 5.56
- type: precision_at_100
value: 1.006
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.231
- type: precision_at_5
value: 9.104
- type: recall_at_1
value: 16.683
- type: recall_at_10
value: 41.807
- type: recall_at_100
value: 69.219
- type: recall_at_1000
value: 89.178
- type: recall_at_3
value: 28.772
- type: recall_at_5
value: 35.167
- type: map_at_1
value: 30.653000000000002
- type: map_at_10
value: 41.21
- type: map_at_100
value: 42.543
- type: map_at_1000
value: 42.657000000000004
- type: map_at_3
value: 38.094
- type: map_at_5
value: 39.966
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.087
- type: mrr_at_100
value: 47.959
- type: mrr_at_1000
value: 48.003
- type: mrr_at_3
value: 45.043
- type: mrr_at_5
value: 46.352
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.158
- type: ndcg_at_100
value: 52.65
- type: ndcg_at_1000
value: 54.644999999999996
- type: ndcg_at_3
value: 42.632999999999996
- type: ndcg_at_5
value: 44.994
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.498999999999999
- type: precision_at_100
value: 1.308
- type: precision_at_1000
value: 0.166
- type: precision_at_3
value: 20.308
- type: precision_at_5
value: 14.283000000000001
- type: recall_at_1
value: 30.653000000000002
- type: recall_at_10
value: 58.826
- type: recall_at_100
value: 81.94
- type: recall_at_1000
value: 94.71000000000001
- type: recall_at_3
value: 45.965
- type: recall_at_5
value: 52.294
- type: map_at_1
value: 26.71
- type: map_at_10
value: 36.001
- type: map_at_100
value: 37.416
- type: map_at_1000
value: 37.522
- type: map_at_3
value: 32.841
- type: map_at_5
value: 34.515
- type: mrr_at_1
value: 32.647999999999996
- type: mrr_at_10
value: 41.43
- type: mrr_at_100
value: 42.433
- type: mrr_at_1000
value: 42.482
- type: mrr_at_3
value: 39.117000000000004
- type: mrr_at_5
value: 40.35
- type: ndcg_at_1
value: 32.647999999999996
- type: ndcg_at_10
value: 41.629
- type: ndcg_at_100
value: 47.707
- type: ndcg_at_1000
value: 49.913000000000004
- type: ndcg_at_3
value: 36.598000000000006
- type: ndcg_at_5
value: 38.696000000000005
- type: precision_at_1
value: 32.647999999999996
- type: precision_at_10
value: 7.704999999999999
- type: precision_at_100
value: 1.242
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 17.314
- type: precision_at_5
value: 12.374
- type: recall_at_1
value: 26.71
- type: recall_at_10
value: 52.898
- type: recall_at_100
value: 79.08
- type: recall_at_1000
value: 93.94
- type: recall_at_3
value: 38.731
- type: recall_at_5
value: 44.433
- type: map_at_1
value: 26.510999999999996
- type: map_at_10
value: 35.755333333333326
- type: map_at_100
value: 36.97525
- type: map_at_1000
value: 37.08741666666667
- type: map_at_3
value: 32.921
- type: map_at_5
value: 34.45041666666667
- type: mrr_at_1
value: 31.578416666666666
- type: mrr_at_10
value: 40.06066666666667
- type: mrr_at_100
value: 40.93350000000001
- type: mrr_at_1000
value: 40.98716666666667
- type: mrr_at_3
value: 37.710499999999996
- type: mrr_at_5
value: 39.033249999999995
- type: ndcg_at_1
value: 31.578416666666666
- type: ndcg_at_10
value: 41.138666666666666
- type: ndcg_at_100
value: 46.37291666666666
- type: ndcg_at_1000
value: 48.587500000000006
- type: ndcg_at_3
value: 36.397083333333335
- type: ndcg_at_5
value: 38.539
- type: precision_at_1
value: 31.578416666666666
- type: precision_at_10
value: 7.221583333333332
- type: precision_at_100
value: 1.1581666666666668
- type: precision_at_1000
value: 0.15416666666666667
- type: precision_at_3
value: 16.758
- type: precision_at_5
value: 11.830916666666665
- type: recall_at_1
value: 26.510999999999996
- type: recall_at_10
value: 52.7825
- type: recall_at_100
value: 75.79675
- type: recall_at_1000
value: 91.10483333333335
- type: recall_at_3
value: 39.48233333333334
- type: recall_at_5
value: 45.07116666666667
- type: map_at_1
value: 24.564
- type: map_at_10
value: 31.235000000000003
- type: map_at_100
value: 32.124
- type: map_at_1000
value: 32.216
- type: map_at_3
value: 29.330000000000002
- type: map_at_5
value: 30.379
- type: mrr_at_1
value: 27.761000000000003
- type: mrr_at_10
value: 34.093
- type: mrr_at_100
value: 34.885
- type: mrr_at_1000
value: 34.957
- type: mrr_at_3
value: 32.388
- type: mrr_at_5
value: 33.269
- type: ndcg_at_1
value: 27.761000000000003
- type: ndcg_at_10
value: 35.146
- type: ndcg_at_100
value: 39.597
- type: ndcg_at_1000
value: 42.163000000000004
- type: ndcg_at_3
value: 31.674000000000003
- type: ndcg_at_5
value: 33.224
- type: precision_at_1
value: 27.761000000000003
- type: precision_at_10
value: 5.383
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 13.599
- type: precision_at_5
value: 9.202
- type: recall_at_1
value: 24.564
- type: recall_at_10
value: 44.36
- type: recall_at_100
value: 64.408
- type: recall_at_1000
value: 83.892
- type: recall_at_3
value: 34.653
- type: recall_at_5
value: 38.589
- type: map_at_1
value: 17.01
- type: map_at_10
value: 24.485
- type: map_at_100
value: 25.573
- type: map_at_1000
value: 25.703
- type: map_at_3
value: 21.953
- type: map_at_5
value: 23.294999999999998
- type: mrr_at_1
value: 20.544
- type: mrr_at_10
value: 28.238000000000003
- type: mrr_at_100
value: 29.142000000000003
- type: mrr_at_1000
value: 29.219
- type: mrr_at_3
value: 25.802999999999997
- type: mrr_at_5
value: 27.105
- type: ndcg_at_1
value: 20.544
- type: ndcg_at_10
value: 29.387999999999998
- type: ndcg_at_100
value: 34.603
- type: ndcg_at_1000
value: 37.564
- type: ndcg_at_3
value: 24.731
- type: ndcg_at_5
value: 26.773000000000003
- type: precision_at_1
value: 20.544
- type: precision_at_10
value: 5.509
- type: precision_at_100
value: 0.9450000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 11.757
- type: precision_at_5
value: 8.596
- type: recall_at_1
value: 17.01
- type: recall_at_10
value: 40.392
- type: recall_at_100
value: 64.043
- type: recall_at_1000
value: 85.031
- type: recall_at_3
value: 27.293
- type: recall_at_5
value: 32.586999999999996
- type: map_at_1
value: 27.155
- type: map_at_10
value: 35.92
- type: map_at_100
value: 37.034
- type: map_at_1000
value: 37.139
- type: map_at_3
value: 33.263999999999996
- type: map_at_5
value: 34.61
- type: mrr_at_1
value: 32.183
- type: mrr_at_10
value: 40.099000000000004
- type: mrr_at_100
value: 41.001
- type: mrr_at_1000
value: 41.059
- type: mrr_at_3
value: 37.889
- type: mrr_at_5
value: 39.007999999999996
- type: ndcg_at_1
value: 32.183
- type: ndcg_at_10
value: 41.127
- type: ndcg_at_100
value: 46.464
- type: ndcg_at_1000
value: 48.67
- type: ndcg_at_3
value: 36.396
- type: ndcg_at_5
value: 38.313
- type: precision_at_1
value: 32.183
- type: precision_at_10
value: 6.847
- type: precision_at_100
value: 1.0739999999999998
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 16.356
- type: precision_at_5
value: 11.362
- type: recall_at_1
value: 27.155
- type: recall_at_10
value: 52.922000000000004
- type: recall_at_100
value: 76.39
- type: recall_at_1000
value: 91.553
- type: recall_at_3
value: 39.745999999999995
- type: recall_at_5
value: 44.637
- type: map_at_1
value: 25.523
- type: map_at_10
value: 34.268
- type: map_at_100
value: 35.835
- type: map_at_1000
value: 36.046
- type: map_at_3
value: 31.662000000000003
- type: map_at_5
value: 32.71
- type: mrr_at_1
value: 31.028
- type: mrr_at_10
value: 38.924
- type: mrr_at_100
value: 39.95
- type: mrr_at_1000
value: 40.003
- type: mrr_at_3
value: 36.594
- type: mrr_at_5
value: 37.701
- type: ndcg_at_1
value: 31.028
- type: ndcg_at_10
value: 39.848
- type: ndcg_at_100
value: 45.721000000000004
- type: ndcg_at_1000
value: 48.424
- type: ndcg_at_3
value: 35.329
- type: ndcg_at_5
value: 36.779
- type: precision_at_1
value: 31.028
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.337
- type: precision_at_5
value: 11.383000000000001
- type: recall_at_1
value: 25.523
- type: recall_at_10
value: 50.735
- type: recall_at_100
value: 76.593
- type: recall_at_1000
value: 93.771
- type: recall_at_3
value: 37.574000000000005
- type: recall_at_5
value: 41.602
- type: map_at_1
value: 20.746000000000002
- type: map_at_10
value: 28.557
- type: map_at_100
value: 29.575000000000003
- type: map_at_1000
value: 29.659000000000002
- type: map_at_3
value: 25.753999999999998
- type: map_at_5
value: 27.254
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 30.769000000000002
- type: mrr_at_100
value: 31.655
- type: mrr_at_1000
value: 31.717000000000002
- type: mrr_at_3
value: 28.065
- type: mrr_at_5
value: 29.543999999999997
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 33.545
- type: ndcg_at_100
value: 38.743
- type: ndcg_at_1000
value: 41.002
- type: ndcg_at_3
value: 28.021
- type: ndcg_at_5
value: 30.586999999999996
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.416
- type: precision_at_100
value: 0.8710000000000001
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.953
- type: precision_at_5
value: 8.651
- type: recall_at_1
value: 20.746000000000002
- type: recall_at_10
value: 46.87
- type: recall_at_100
value: 71.25200000000001
- type: recall_at_1000
value: 88.26
- type: recall_at_3
value: 32.029999999999994
- type: recall_at_5
value: 38.21
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.105
- type: map_at_10
value: 20.577
- type: map_at_100
value: 22.686999999999998
- type: map_at_1000
value: 22.889
- type: map_at_3
value: 17.174
- type: map_at_5
value: 18.807
- type: mrr_at_1
value: 27.101
- type: mrr_at_10
value: 38.475
- type: mrr_at_100
value: 39.491
- type: mrr_at_1000
value: 39.525
- type: mrr_at_3
value: 34.886
- type: mrr_at_5
value: 36.922
- type: ndcg_at_1
value: 27.101
- type: ndcg_at_10
value: 29.002
- type: ndcg_at_100
value: 37.218
- type: ndcg_at_1000
value: 40.644000000000005
- type: ndcg_at_3
value: 23.464
- type: ndcg_at_5
value: 25.262
- type: precision_at_1
value: 27.101
- type: precision_at_10
value: 9.179
- type: precision_at_100
value: 1.806
- type: precision_at_1000
value: 0.244
- type: precision_at_3
value: 17.394000000000002
- type: precision_at_5
value: 13.342
- type: recall_at_1
value: 12.105
- type: recall_at_10
value: 35.143
- type: recall_at_100
value: 63.44499999999999
- type: recall_at_1000
value: 82.49499999999999
- type: recall_at_3
value: 21.489
- type: recall_at_5
value: 26.82
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.769
- type: map_at_10
value: 18.619
- type: map_at_100
value: 26.3
- type: map_at_1000
value: 28.063
- type: map_at_3
value: 13.746
- type: map_at_5
value: 16.035
- type: mrr_at_1
value: 65.25
- type: mrr_at_10
value: 73.678
- type: mrr_at_100
value: 73.993
- type: mrr_at_1000
value: 74.003
- type: mrr_at_3
value: 72.042
- type: mrr_at_5
value: 72.992
- type: ndcg_at_1
value: 53.625
- type: ndcg_at_10
value: 39.638
- type: ndcg_at_100
value: 44.601
- type: ndcg_at_1000
value: 52.80200000000001
- type: ndcg_at_3
value: 44.727
- type: ndcg_at_5
value: 42.199
- type: precision_at_1
value: 65.25
- type: precision_at_10
value: 31.025000000000002
- type: precision_at_100
value: 10.174999999999999
- type: precision_at_1000
value: 2.0740000000000003
- type: precision_at_3
value: 48.083
- type: precision_at_5
value: 40.6
- type: recall_at_1
value: 8.769
- type: recall_at_10
value: 23.910999999999998
- type: recall_at_100
value: 51.202999999999996
- type: recall_at_1000
value: 77.031
- type: recall_at_3
value: 15.387999999999998
- type: recall_at_5
value: 18.919
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 54.47
- type: f1
value: 48.21839043361556
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 63.564
- type: map_at_10
value: 74.236
- type: map_at_100
value: 74.53699999999999
- type: map_at_1000
value: 74.557
- type: map_at_3
value: 72.556
- type: map_at_5
value: 73.656
- type: mrr_at_1
value: 68.497
- type: mrr_at_10
value: 78.373
- type: mrr_at_100
value: 78.54299999999999
- type: mrr_at_1000
value: 78.549
- type: mrr_at_3
value: 77.03
- type: mrr_at_5
value: 77.938
- type: ndcg_at_1
value: 68.497
- type: ndcg_at_10
value: 79.12599999999999
- type: ndcg_at_100
value: 80.319
- type: ndcg_at_1000
value: 80.71199999999999
- type: ndcg_at_3
value: 76.209
- type: ndcg_at_5
value: 77.90700000000001
- type: precision_at_1
value: 68.497
- type: precision_at_10
value: 9.958
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.908
- type: precision_at_5
value: 18.971
- type: recall_at_1
value: 63.564
- type: recall_at_10
value: 90.05199999999999
- type: recall_at_100
value: 95.028
- type: recall_at_1000
value: 97.667
- type: recall_at_3
value: 82.17999999999999
- type: recall_at_5
value: 86.388
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.042
- type: map_at_10
value: 30.764999999999997
- type: map_at_100
value: 32.678000000000004
- type: map_at_1000
value: 32.881
- type: map_at_3
value: 26.525
- type: map_at_5
value: 28.932000000000002
- type: mrr_at_1
value: 37.653999999999996
- type: mrr_at_10
value: 46.597
- type: mrr_at_100
value: 47.413
- type: mrr_at_1000
value: 47.453
- type: mrr_at_3
value: 43.775999999999996
- type: mrr_at_5
value: 45.489000000000004
- type: ndcg_at_1
value: 37.653999999999996
- type: ndcg_at_10
value: 38.615
- type: ndcg_at_100
value: 45.513999999999996
- type: ndcg_at_1000
value: 48.815999999999995
- type: ndcg_at_3
value: 34.427
- type: ndcg_at_5
value: 35.954
- type: precision_at_1
value: 37.653999999999996
- type: precision_at_10
value: 10.864
- type: precision_at_100
value: 1.7850000000000001
- type: precision_at_1000
value: 0.23800000000000002
- type: precision_at_3
value: 22.788
- type: precision_at_5
value: 17.346
- type: recall_at_1
value: 19.042
- type: recall_at_10
value: 45.707
- type: recall_at_100
value: 71.152
- type: recall_at_1000
value: 90.7
- type: recall_at_3
value: 30.814000000000004
- type: recall_at_5
value: 37.478
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.001000000000005
- type: map_at_10
value: 59.611000000000004
- type: map_at_100
value: 60.582
- type: map_at_1000
value: 60.646
- type: map_at_3
value: 56.031
- type: map_at_5
value: 58.243
- type: mrr_at_1
value: 76.003
- type: mrr_at_10
value: 82.15400000000001
- type: mrr_at_100
value: 82.377
- type: mrr_at_1000
value: 82.383
- type: mrr_at_3
value: 81.092
- type: mrr_at_5
value: 81.742
- type: ndcg_at_1
value: 76.003
- type: ndcg_at_10
value: 68.216
- type: ndcg_at_100
value: 71.601
- type: ndcg_at_1000
value: 72.821
- type: ndcg_at_3
value: 63.109
- type: ndcg_at_5
value: 65.902
- type: precision_at_1
value: 76.003
- type: precision_at_10
value: 14.379
- type: precision_at_100
value: 1.702
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 40.396
- type: precision_at_5
value: 26.442
- type: recall_at_1
value: 38.001000000000005
- type: recall_at_10
value: 71.897
- type: recall_at_100
value: 85.105
- type: recall_at_1000
value: 93.133
- type: recall_at_3
value: 60.594
- type: recall_at_5
value: 66.104
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.31280000000001
- type: ap
value: 87.53723467501632
- type: f1
value: 91.30282906596291
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.917
- type: map_at_10
value: 34.117999999999995
- type: map_at_100
value: 35.283
- type: map_at_1000
value: 35.333999999999996
- type: map_at_3
value: 30.330000000000002
- type: map_at_5
value: 32.461
- type: mrr_at_1
value: 22.579
- type: mrr_at_10
value: 34.794000000000004
- type: mrr_at_100
value: 35.893
- type: mrr_at_1000
value: 35.937000000000005
- type: mrr_at_3
value: 31.091
- type: mrr_at_5
value: 33.173
- type: ndcg_at_1
value: 22.579
- type: ndcg_at_10
value: 40.951
- type: ndcg_at_100
value: 46.558
- type: ndcg_at_1000
value: 47.803000000000004
- type: ndcg_at_3
value: 33.262
- type: ndcg_at_5
value: 37.036
- type: precision_at_1
value: 22.579
- type: precision_at_10
value: 6.463000000000001
- type: precision_at_100
value: 0.928
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.174000000000001
- type: precision_at_5
value: 10.421
- type: recall_at_1
value: 21.917
- type: recall_at_10
value: 61.885
- type: recall_at_100
value: 87.847
- type: recall_at_1000
value: 97.322
- type: recall_at_3
value: 41.010000000000005
- type: recall_at_5
value: 50.031000000000006
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.49521203830369
- type: f1
value: 93.30882341740241
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.0579115367077
- type: f1
value: 51.2368258319339
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.88029589778077
- type: f1
value: 72.34422048584663
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.2817753866846
- type: f1
value: 77.87746050004304
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.247341454119216
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.9647477166234
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.90698374676892
- type: mrr
value: 33.07523683771251
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.717
- type: map_at_10
value: 14.566
- type: map_at_100
value: 18.465999999999998
- type: map_at_1000
value: 20.033
- type: map_at_3
value: 10.863
- type: map_at_5
value: 12.589
- type: mrr_at_1
value: 49.845
- type: mrr_at_10
value: 58.385
- type: mrr_at_100
value: 58.989999999999995
- type: mrr_at_1000
value: 59.028999999999996
- type: mrr_at_3
value: 56.76
- type: mrr_at_5
value: 57.766
- type: ndcg_at_1
value: 47.678
- type: ndcg_at_10
value: 37.511
- type: ndcg_at_100
value: 34.537
- type: ndcg_at_1000
value: 43.612
- type: ndcg_at_3
value: 43.713
- type: ndcg_at_5
value: 41.303
- type: precision_at_1
value: 49.845
- type: precision_at_10
value: 27.307
- type: precision_at_100
value: 8.746
- type: precision_at_1000
value: 2.182
- type: precision_at_3
value: 40.764
- type: precision_at_5
value: 35.232
- type: recall_at_1
value: 6.717
- type: recall_at_10
value: 18.107
- type: recall_at_100
value: 33.759
- type: recall_at_1000
value: 67.31
- type: recall_at_3
value: 11.68
- type: recall_at_5
value: 14.557999999999998
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.633999999999997
- type: map_at_10
value: 42.400999999999996
- type: map_at_100
value: 43.561
- type: map_at_1000
value: 43.592
- type: map_at_3
value: 37.865
- type: map_at_5
value: 40.650999999999996
- type: mrr_at_1
value: 31.286
- type: mrr_at_10
value: 44.996
- type: mrr_at_100
value: 45.889
- type: mrr_at_1000
value: 45.911
- type: mrr_at_3
value: 41.126000000000005
- type: mrr_at_5
value: 43.536
- type: ndcg_at_1
value: 31.257
- type: ndcg_at_10
value: 50.197
- type: ndcg_at_100
value: 55.062
- type: ndcg_at_1000
value: 55.81700000000001
- type: ndcg_at_3
value: 41.650999999999996
- type: ndcg_at_5
value: 46.324
- type: precision_at_1
value: 31.257
- type: precision_at_10
value: 8.508000000000001
- type: precision_at_100
value: 1.121
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 19.1
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 27.633999999999997
- type: recall_at_10
value: 71.40100000000001
- type: recall_at_100
value: 92.463
- type: recall_at_1000
value: 98.13199999999999
- type: recall_at_3
value: 49.382
- type: recall_at_5
value: 60.144
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.17099999999999
- type: map_at_10
value: 85.036
- type: map_at_100
value: 85.67099999999999
- type: map_at_1000
value: 85.68599999999999
- type: map_at_3
value: 82.086
- type: map_at_5
value: 83.956
- type: mrr_at_1
value: 82.04
- type: mrr_at_10
value: 88.018
- type: mrr_at_100
value: 88.114
- type: mrr_at_1000
value: 88.115
- type: mrr_at_3
value: 87.047
- type: mrr_at_5
value: 87.73100000000001
- type: ndcg_at_1
value: 82.03
- type: ndcg_at_10
value: 88.717
- type: ndcg_at_100
value: 89.904
- type: ndcg_at_1000
value: 89.991
- type: ndcg_at_3
value: 85.89099999999999
- type: ndcg_at_5
value: 87.485
- type: precision_at_1
value: 82.03
- type: precision_at_10
value: 13.444999999999999
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.537
- type: precision_at_5
value: 24.692
- type: recall_at_1
value: 71.17099999999999
- type: recall_at_10
value: 95.634
- type: recall_at_100
value: 99.614
- type: recall_at_1000
value: 99.99
- type: recall_at_3
value: 87.48
- type: recall_at_5
value: 91.996
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.067219624685315
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.121822992300444
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.153
- type: map_at_10
value: 11.024000000000001
- type: map_at_100
value: 13.233
- type: map_at_1000
value: 13.62
- type: map_at_3
value: 7.779999999999999
- type: map_at_5
value: 9.529
- type: mrr_at_1
value: 20.599999999999998
- type: mrr_at_10
value: 31.361
- type: mrr_at_100
value: 32.738
- type: mrr_at_1000
value: 32.792
- type: mrr_at_3
value: 28.15
- type: mrr_at_5
value: 30.085
- type: ndcg_at_1
value: 20.599999999999998
- type: ndcg_at_10
value: 18.583
- type: ndcg_at_100
value: 27.590999999999998
- type: ndcg_at_1000
value: 34.001
- type: ndcg_at_3
value: 17.455000000000002
- type: ndcg_at_5
value: 15.588
- type: precision_at_1
value: 20.599999999999998
- type: precision_at_10
value: 9.74
- type: precision_at_100
value: 2.284
- type: precision_at_1000
value: 0.381
- type: precision_at_3
value: 16.533
- type: precision_at_5
value: 14.02
- type: recall_at_1
value: 4.153
- type: recall_at_10
value: 19.738
- type: recall_at_100
value: 46.322
- type: recall_at_1000
value: 77.378
- type: recall_at_3
value: 10.048
- type: recall_at_5
value: 14.233
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.07097501003639
- type: cos_sim_spearman
value: 81.05827848407056
- type: euclidean_pearson
value: 82.6279003372546
- type: euclidean_spearman
value: 81.00031515279802
- type: manhattan_pearson
value: 82.59338284959495
- type: manhattan_spearman
value: 80.97432711064945
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.28991993621685
- type: cos_sim_spearman
value: 78.71828082424351
- type: euclidean_pearson
value: 83.4881331520832
- type: euclidean_spearman
value: 78.51746826842316
- type: manhattan_pearson
value: 83.4109223774324
- type: manhattan_spearman
value: 78.431544382179
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.16651661072123
- type: cos_sim_spearman
value: 84.88094386637867
- type: euclidean_pearson
value: 84.3547603585416
- type: euclidean_spearman
value: 84.85148665860193
- type: manhattan_pearson
value: 84.29648369879266
- type: manhattan_spearman
value: 84.76074870571124
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.40596254292149
- type: cos_sim_spearman
value: 83.10699573133829
- type: euclidean_pearson
value: 83.22794776876958
- type: euclidean_spearman
value: 83.22583316084712
- type: manhattan_pearson
value: 83.15899233935681
- type: manhattan_spearman
value: 83.17668293648019
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.27977121352563
- type: cos_sim_spearman
value: 88.73903130248591
- type: euclidean_pearson
value: 88.30685958438735
- type: euclidean_spearman
value: 88.79755484280406
- type: manhattan_pearson
value: 88.30305607758652
- type: manhattan_spearman
value: 88.80096577072784
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.08819031430218
- type: cos_sim_spearman
value: 86.35414445951125
- type: euclidean_pearson
value: 85.4683192388315
- type: euclidean_spearman
value: 86.2079674669473
- type: manhattan_pearson
value: 85.35835702257341
- type: manhattan_spearman
value: 86.08483380002187
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.36149449801478
- type: cos_sim_spearman
value: 87.7102980757725
- type: euclidean_pearson
value: 88.16457177837161
- type: euclidean_spearman
value: 87.6598652482716
- type: manhattan_pearson
value: 88.23894728971618
- type: manhattan_spearman
value: 87.74470156709361
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.54023758394433
- type: cos_sim_spearman
value: 66.28491960187773
- type: euclidean_pearson
value: 67.0853128483472
- type: euclidean_spearman
value: 66.10307543766307
- type: manhattan_pearson
value: 66.7635365592556
- type: manhattan_spearman
value: 65.76408004780167
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.15858398195317
- type: cos_sim_spearman
value: 87.44850004752102
- type: euclidean_pearson
value: 86.60737082550408
- type: euclidean_spearman
value: 87.31591549824242
- type: manhattan_pearson
value: 86.56187011429977
- type: manhattan_spearman
value: 87.23854795795319
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.66210488769109
- type: mrr
value: 96.23100664767331
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 56.094
- type: map_at_10
value: 67.486
- type: map_at_100
value: 67.925
- type: map_at_1000
value: 67.949
- type: map_at_3
value: 64.857
- type: map_at_5
value: 66.31
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 68.438
- type: mrr_at_100
value: 68.733
- type: mrr_at_1000
value: 68.757
- type: mrr_at_3
value: 66.389
- type: mrr_at_5
value: 67.456
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 72.506
- type: ndcg_at_100
value: 74.27
- type: ndcg_at_1000
value: 74.94800000000001
- type: ndcg_at_3
value: 67.977
- type: ndcg_at_5
value: 70.028
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.767000000000001
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.0
- type: precision_at_5
value: 17.666999999999998
- type: recall_at_1
value: 56.094
- type: recall_at_10
value: 86.68900000000001
- type: recall_at_100
value: 94.333
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 74.522
- type: recall_at_5
value: 79.611
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.83069306930693
- type: cos_sim_ap
value: 95.69184662911199
- type: cos_sim_f1
value: 91.4027149321267
- type: cos_sim_precision
value: 91.91102123356926
- type: cos_sim_recall
value: 90.9
- type: dot_accuracy
value: 99.69405940594059
- type: dot_ap
value: 90.21674151456216
- type: dot_f1
value: 84.4489179667841
- type: dot_precision
value: 85.00506585612969
- type: dot_recall
value: 83.89999999999999
- type: euclidean_accuracy
value: 99.83069306930693
- type: euclidean_ap
value: 95.67760109671087
- type: euclidean_f1
value: 91.19754350051177
- type: euclidean_precision
value: 93.39622641509435
- type: euclidean_recall
value: 89.1
- type: manhattan_accuracy
value: 99.83267326732673
- type: manhattan_ap
value: 95.69771347732625
- type: manhattan_f1
value: 91.32420091324201
- type: manhattan_precision
value: 92.68795056642637
- type: manhattan_recall
value: 90.0
- type: max_accuracy
value: 99.83267326732673
- type: max_ap
value: 95.69771347732625
- type: max_f1
value: 91.4027149321267
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 64.47378332953092
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.79602531604151
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.80707639107175
- type: mrr
value: 54.64886522790935
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.852448373051395
- type: cos_sim_spearman
value: 32.51821499493775
- type: dot_pearson
value: 30.390650062190456
- type: dot_spearman
value: 30.588836159667636
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.51
- type: map_at_100
value: 8.882
- type: map_at_1000
value: 22.181
- type: map_at_3
value: 0.553
- type: map_at_5
value: 0.843
- type: mrr_at_1
value: 74.0
- type: mrr_at_10
value: 84.89999999999999
- type: mrr_at_100
value: 84.89999999999999
- type: mrr_at_1000
value: 84.89999999999999
- type: mrr_at_3
value: 84.0
- type: mrr_at_5
value: 84.89999999999999
- type: ndcg_at_1
value: 68.0
- type: ndcg_at_10
value: 64.792
- type: ndcg_at_100
value: 51.37199999999999
- type: ndcg_at_1000
value: 47.392
- type: ndcg_at_3
value: 68.46900000000001
- type: ndcg_at_5
value: 67.084
- type: precision_at_1
value: 74.0
- type: precision_at_10
value: 69.39999999999999
- type: precision_at_100
value: 53.080000000000005
- type: precision_at_1000
value: 21.258
- type: precision_at_3
value: 76.0
- type: precision_at_5
value: 73.2
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.7950000000000002
- type: recall_at_100
value: 12.626999999999999
- type: recall_at_1000
value: 44.84
- type: recall_at_3
value: 0.611
- type: recall_at_5
value: 0.959
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.4949999999999999
- type: map_at_10
value: 8.797
- type: map_at_100
value: 14.889
- type: map_at_1000
value: 16.309
- type: map_at_3
value: 4.389
- type: map_at_5
value: 6.776
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 35.844
- type: mrr_at_100
value: 37.119
- type: mrr_at_1000
value: 37.119
- type: mrr_at_3
value: 30.612000000000002
- type: mrr_at_5
value: 33.163
- type: ndcg_at_1
value: 16.326999999999998
- type: ndcg_at_10
value: 21.9
- type: ndcg_at_100
value: 34.705000000000005
- type: ndcg_at_1000
value: 45.709
- type: ndcg_at_3
value: 22.7
- type: ndcg_at_5
value: 23.197000000000003
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 21.02
- type: precision_at_100
value: 7.714
- type: precision_at_1000
value: 1.504
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 1.4949999999999999
- type: recall_at_10
value: 15.504000000000001
- type: recall_at_100
value: 47.978
- type: recall_at_1000
value: 81.56
- type: recall_at_3
value: 5.569
- type: recall_at_5
value: 9.821
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.99279999999999
- type: ap
value: 15.459189680101492
- type: f1
value: 56.33023271441895
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 63.070175438596486
- type: f1
value: 63.28070758709465
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 50.076231309703054
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.21463908922931
- type: cos_sim_ap
value: 77.67287017966282
- type: cos_sim_f1
value: 70.34412955465588
- type: cos_sim_precision
value: 67.57413709285368
- type: cos_sim_recall
value: 73.35092348284961
- type: dot_accuracy
value: 85.04500208618943
- type: dot_ap
value: 70.4075203869744
- type: dot_f1
value: 66.18172537008678
- type: dot_precision
value: 64.08798813643104
- type: dot_recall
value: 68.41688654353561
- type: euclidean_accuracy
value: 87.17887584192646
- type: euclidean_ap
value: 77.5774128274464
- type: euclidean_f1
value: 70.09307972480777
- type: euclidean_precision
value: 71.70852884349986
- type: euclidean_recall
value: 68.54881266490766
- type: manhattan_accuracy
value: 87.28020504261787
- type: manhattan_ap
value: 77.57835820297892
- type: manhattan_f1
value: 70.23063591521131
- type: manhattan_precision
value: 70.97817299919159
- type: manhattan_recall
value: 69.49868073878628
- type: max_accuracy
value: 87.28020504261787
- type: max_ap
value: 77.67287017966282
- type: max_f1
value: 70.34412955465588
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.96650754841464
- type: cos_sim_ap
value: 86.00185968965064
- type: cos_sim_f1
value: 77.95861256351718
- type: cos_sim_precision
value: 74.70712773465067
- type: cos_sim_recall
value: 81.50600554357868
- type: dot_accuracy
value: 87.36950362867233
- type: dot_ap
value: 82.22071181147555
- type: dot_f1
value: 74.85680716698488
- type: dot_precision
value: 71.54688377316114
- type: dot_recall
value: 78.48783492454572
- type: euclidean_accuracy
value: 88.99561454573679
- type: euclidean_ap
value: 86.15882097229648
- type: euclidean_f1
value: 78.18463125322332
- type: euclidean_precision
value: 74.95408956067241
- type: euclidean_recall
value: 81.70619032953496
- type: manhattan_accuracy
value: 88.96650754841464
- type: manhattan_ap
value: 86.13133111232099
- type: manhattan_f1
value: 78.10771470160115
- type: manhattan_precision
value: 74.05465084184377
- type: manhattan_recall
value: 82.63012011087157
- type: max_accuracy
value: 88.99561454573679
- type: max_ap
value: 86.15882097229648
- type: max_f1
value: 78.18463125322332
---
**News**

**[2024-04-06]** Released the [puff](https://huggingface.co/infgrad/puff-base-v1) model series, **built specifically for retrieval and semantic matching tasks, with extra emphasis on generalization and on performance on private general-purpose test sets; variable embedding dimensions; bilingual Chinese/English**.

**[2024-02-27]** Released stella-mrl-large-zh-v3.5-1792d, which supports **variable embedding dimensions**.

**[2024-02-17]** Released the stella v3 series, a dialogue encoding model, and the related training data.

**[2023-10-19]** Released stella-base-en-v2, which is easy to use and **requires no prefix text**.

**[2023-10-12]** Released stella-base-zh-v2 and stella-large-zh-v2: better results, easy to use, and **no prefix text required**.

**[2023-09-11]** Released stella-base-zh and stella-large-zh.

You are welcome to visit [my profile page](https://huggingface.co/infgrad) for the latest models and to share your valuable feedback!
## stella model

stella is a family of general-purpose text encoding models, consisting of the following models:

| Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? |
|:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | English | No |
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No |
| stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes |
| stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes |

The complete training approach and training process are documented in [blog post 1](https://zhuanlan.zhihu.com/p/655322183) and [blog post 2](https://zhuanlan.zhihu.com/p/662209559); you are welcome to read and discuss them.

**Training data:**

1. Open-source data (wudao_base_200GB[1], m3e[2], and simclue[3]), with an emphasis on texts longer than 512 characters.
2. A batch of (question, paragraph) and (sentence, paragraph) pairs constructed from a general corpus with an LLM.

**Training methods:**

1. Contrastive learning loss.
2. Contrastive learning loss with hard negatives (hard negatives were built with BM25 and with vector retrieval, respectively).
3. EWC (Elastic Weight Consolidation)[4].
4. CoSENT loss[5].
5. One iterator per data type, with the loss computed and the parameters updated separately for each.

Building on the stella models, stella-v2 uses more training data and applies knowledge distillation and related techniques to remove the leading instructions (such as piccolo's `查询:` and `结果:`, or e5's `query:` and `passage:`).

**Initial weights:**\
stella-base-zh and stella-large-zh are initialized from piccolo-base-zh[6] and piccolo-large-zh, respectively; positions 512-1024 of the position embeddings are initialized with hierarchical decomposed position encoding[7].\
Thanks to SenseTime Research for open-sourcing the [piccolo model series](https://huggingface.co/sensenova).
stella is a general-purpose text encoder, which mainly includes the following models:
| Model Name | Model Size (GB) | Dimension | Sequence Length | Language | Need instruction for retrieval? |
|:------------------:|:---------------:|:---------:|:---------------:|:--------:|:-------------------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | English | No |
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | Chinese | No |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | Chinese | No |
| stella-large-zh | 0.65 | 1024 | 1024 | Chinese | Yes |
| stella-base-zh | 0.2 | 768 | 1024 | Chinese | Yes |
The training data mainly includes:
1. Open-source training data (wudao_base_200GB, m3e, and simclue), with a focus on selecting texts with lengths greater
than 512.
2. A batch of (question, paragraph) and (sentence, paragraph) data constructed from a general corpus using an LLM.
The loss functions mainly include:
1. Contrastive learning loss function
2. Contrastive learning loss function with hard negative examples (hard negatives constructed with BM25 and with vector retrieval, respectively)
3. EWC (Elastic Weight Consolidation)
4. CoSENT loss (a minimal sketch follows below)
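For reference, a minimal PyTorch sketch of the CoSENT loss named in item 4, following the public CoSENT write-up in reference [5]; the function name, the scale value of 20, and the shapes are assumptions rather than the exact training code:

```python
import torch

def cosent_loss(emb_a, emb_b, labels, scale=20.0):
    """emb_a, emb_b: (batch, dim) embeddings of the two sides of each pair.
    labels: (batch,) graded similarity labels, higher = more similar."""
    cos = torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=-1) * scale
    diff = cos[None, :] - cos[:, None]                  # diff[i, j] = cos_j - cos_i
    mask = (labels[:, None] > labels[None, :]).float()  # keep terms where pair i should outrank pair j
    diff = diff - (1.0 - mask) * 1e12                   # suppress the invalid (i, j) terms
    diff = torch.cat([torch.zeros(1, device=diff.device), diff.flatten()])
    return torch.logsumexp(diff, dim=0)                 # = log(1 + sum of exp(valid terms))
```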
Model weight initialization:\
stella-base-zh and stella-large-zh use piccolo-base-zh and piccolo-large-zh as the base models, respectively, and positions 512-1024 of the position embeddings are initialized with the hierarchical decomposed position encoding strategy (a rough sketch follows below).
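A rough sketch of that extension under one common reading of reference [7]; alpha, the indexing scheme, and how the first 512 positions are treated are assumptions, so consult the reference for the exact construction:

```python
import torch

def extend_position_embeddings(old_emb: torch.Tensor, new_len: int, alpha: float = 0.4) -> torch.Tensor:
    """old_emb: (old_len, dim) trained position embeddings; returns a (new_len, dim) table."""
    old_len, dim = old_emb.shape
    new_emb = torch.empty(new_len, dim, dtype=old_emb.dtype)
    for k in range(new_len):
        i, j = divmod(k, old_len)  # decompose position k over the old table
        new_emb[k] = alpha * old_emb[i] + (1 - alpha) * old_emb[j]
    return new_emb
```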
Training strategy:\
One iterator is used for each type of data, and the loss is computed and applied separately for each.
Building on the stella models, stella-v2 uses more training data and removes the need for instruction prefixes via knowledge distillation.
## Metric
#### C-MTEB leaderboard (Chinese)
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (35) | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) |
|:------------------:|:---------------:|:---------:|:---------------:|:------------:|:------------------:|:--------------:|:-----------------------:|:-------------:|:-------------:|:-------:|
| stella-large-zh-v2 | 0.65 | 1024 | 1024 | 65.13 | 69.05 | 49.16 | 82.68 | 66.41 | 70.14 | 58.66 |
| stella-base-zh-v2 | 0.2 | 768 | 1024 | 64.36 | 68.29 | 49.4 | 79.95 | 66.1 | 70.08 | 56.92 |
| stella-large-zh | 0.65 | 1024 | 1024 | 64.54 | 67.62 | 48.65 | 78.72 | 65.98 | 71.02 | 58.3 |
| stella-base-zh | 0.2 | 768 | 1024 | 64.16 | 67.77 | 48.7 | 76.09 | 66.95 | 71.07 | 56.54 |
#### MTEB leaderboard (English)
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Classification (12) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) |
|:-----------------:|:---------------:|:---------:|:---------------:|:------------:|:-------------------:|:---------------:|:-----------------------:|:-------------:|:--------------:|:--------:|:------------------:|
| stella-base-en-v2 | 0.2 | 768 | 512 | 62.61 | 75.28 | 44.9 | 86.45 | 58.77 | 50.1 | 83.02 | 32.52 |
#### Reproduce our results
**C-MTEB:**
```python
import torch
import numpy as np
from typing import List
from mteb import MTEB
from sentence_transformers import SentenceTransformer
class FastTextEncoder:
    def __init__(self, model_name):
        # fp16 inference on GPU; C-MTEB texts fit within 512 tokens
        self.model = SentenceTransformer(model_name).cuda().half().eval()
        self.model.max_seq_length = 512

    def encode(
            self,
            input_texts: List[str],
            *args,
            **kwargs
    ):
        # Deduplicate and sort by length (longest first) so each batch contains
        # texts of similar length, which speeds up encoding.
        new_sens = list(set(input_texts))
        new_sens.sort(key=lambda x: len(x), reverse=True)
        vecs = self.model.encode(
            new_sens, normalize_embeddings=True, convert_to_numpy=True, batch_size=256
        ).astype(np.float32)
        # Map the deduplicated vectors back to the original input order.
        sen2arrid = {sen: idx for idx, sen in enumerate(new_sens)}
        vecs = vecs[[sen2arrid[sen] for sen in input_texts]]
        torch.cuda.empty_cache()
        return vecs


if __name__ == '__main__':
    model_name = "infgrad/stella-base-zh-v2"
    output_folder = "zh_mteb_results/stella-base-zh-v2"
    task_names = [t.description["name"] for t in MTEB(task_langs=['zh', 'zh-CN']).tasks]
    model = FastTextEncoder(model_name)
    for task in task_names:
        MTEB(tasks=[task], task_langs=['zh', 'zh-CN']).run(model, output_folder=output_folder)
```
**MTEB:**
You can use the official script to reproduce our results: [scripts/run_mteb_english.py](https://github.com/embeddings-benchmark/mteb/blob/main/scripts/run_mteb_english.py)
#### Evaluation for long text

In practice we observed that the C-MTEB evaluation texts are almost all shorter than 512 characters.
Worse, for the texts that are longer than 512, the key information sits in the first half.
The following CMRC2018 example illustrates the problem:
```
question: 《无双大蛇z》是谁旗下ω-force开发的动作游戏?
passage:《无双大蛇z》是光荣旗下ω-force开发的动作游戏,于2009年3月12日登陆索尼playstation3,并于2009年11月27日推......
```
The passage is over 800 characters long, well beyond 512, yet for this question only the first 40 or so characters are needed for retrieval; the extra content is noise to the model and actually hurts performance.\
In short, existing datasets have two problems:\
1) too few texts longer than 512;\
2) even for texts longer than 512, only the first 512 characters matter for retrieval;\
which means **the long-text encoding ability of a model cannot be evaluated accurately.**
To address this, we collected relevant open-source data and filtered it with rules, ending up with six long-text test sets:

- CMRC2018: general encyclopedia
- CAIL: legal reading comprehension
- DRCD: Traditional Chinese encyclopedia, converted to Simplified Chinese
- Military: military-industry Q&A
- Squad: English reading comprehension, translated into Chinese
- Multifieldqa_zh: Tsinghua's evaluation data for long-text understanding with large models[9]
The processing rule keeps texts whose answer appears after position 512 (a small illustrative filter is sketched below); short test examples are undersampled, giving a long-to-short ratio of roughly 1:2, so the model has to handle both short and long texts.
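A small illustrative filter for that rule; the `answer_start` field name is an assumption about the raw MRC data format:

```python
def is_long_text_example(example: dict) -> bool:
    """Keep only (question, passage) pairs whose answer begins after character 512."""
    return example["answer_start"] >= 512  # character offset of the answer within the passage
```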
Apart from the Military dataset, download links for the other five test sets are available at: https://drive.google.com/file/d/1WC6EWaCbVgz-vPMDFH4TwAMkLyh5WNcN/view?usp=sharing
The evaluation metric is Recall@5 (a minimal sketch of the computation appears after the table). The results are as follows:
| Dataset | piccolo-base-zh | piccolo-large-zh | bge-base-zh | bge-large-zh | stella-base-zh | stella-large-zh |
|:---------------:|:---------------:|:----------------:|:-----------:|:------------:|:--------------:|:---------------:|
| CMRC2018 | 94.34 | 93.82 | 91.56 | 93.12 | 96.08 | 95.56 |
| CAIL | 28.04 | 33.64 | 31.22 | 33.94 | 34.62 | 37.18 |
| DRCD | 78.25 | 77.9 | 78.34 | 80.26 | 86.14 | 84.58 |
| Military | 76.61 | 73.06 | 75.65 | 75.81 | 83.71 | 80.48 |
| Squad | 91.21 | 86.61 | 87.87 | 90.38 | 93.31 | 91.21 |
| Multifieldqa_zh | 81.41 | 83.92 | 83.92 | 83.42 | 79.9 | 80.4 |
| **Average** | 74.98 | 74.83 | 74.76 | 76.15 | **78.96** | **78.24** |
**Note:** Because long-text evaluation data is scarce, the train splits were also used when building these test sets. If you evaluate your own model, check its training data to avoid data leakage.
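For reference, a minimal sketch of the Recall@5 computation assumed above (names are illustrative; it assumes one gold passage per query and L2-normalized embeddings):

```python
import numpy as np

def recall_at_k(query_embs, passage_embs, gold_ids, k=5):
    """query_embs: (n_q, d); passage_embs: (n_p, d); gold_ids[i] = index of query i's gold passage."""
    scores = query_embs @ passage_embs.T           # cosine scores for L2-normalized embeddings
    topk = np.argsort(-scores, axis=1)[:, :k]      # indices of the k highest-scoring passages per query
    hits = sum(int(gold in row) for gold, row in zip(gold_ids, topk))
    return hits / len(gold_ids)
```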
## Usage

#### stella Chinese models

stella-base-zh and stella-large-zh: these models are trained on top of piccolo, so **their usage is exactly the same as piccolo's**: for retrieval and reranking tasks, prepend `查询: ` to the query and `结果: ` to the passage (a short example follows below). For short-to-short text matching no prefix is needed.
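A minimal sketch of that prefix convention; the example sentences are made up, and the v2 models would skip the prefixes entirely:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('infgrad/stella-base-zh')
# Prepend the retrieval prefixes required by the non-v2 Chinese models.
query = "查询: " + "什么是文本向量模型?"
passages = ["结果: " + p for p in ["文本向量模型把句子映射为稠密向量。", "今天天气不错。"]]
q_emb = model.encode([query], normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)
print(q_emb @ p_emb.T)  # cosine similarities; higher means more relevant
```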
stella-base-zh-v2 and stella-large-zh-v2: these models are simple to use and **require no prefix text in any scenario**.

All stella Chinese models use mean pooling to produce the text embedding.
Usage with the sentence-transformers library:
```python
from sentence_transformers import SentenceTransformer
sentences = ["数据1", "数据2"]
model = SentenceTransformer('infgrad/stella-base-zh-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
Using the transformers library directly:
```python
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize
import torch

model = AutoModel.from_pretrained('infgrad/stella-base-zh-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-zh-v2')
sentences = ["数据1", "数据ABCDEFGH"]
batch_data = tokenizer(
    batch_text_or_text_pairs=sentences,
    padding="longest",
    return_tensors="pt",
    max_length=1024,
    truncation=True,
)
attention_mask = batch_data["attention_mask"]
with torch.no_grad():
    model_output = model(**batch_data)
# mean pooling over non-padding tokens
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors.numpy(), norm="l2", axis=1)
print(vectors.shape)  # 2,768
```
#### stella models for English
**Using Sentence-Transformers:**
```python
from sentence_transformers import SentenceTransformer
sentences = ["one car come", "one car go"]
model = SentenceTransformer('infgrad/stella-base-en-v2')
print(model.max_seq_length)
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
**Using HuggingFace Transformers:**
```python
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize
import torch

model = AutoModel.from_pretrained('infgrad/stella-base-en-v2')
tokenizer = AutoTokenizer.from_pretrained('infgrad/stella-base-en-v2')
sentences = ["one car come", "one car go"]
batch_data = tokenizer(
    batch_text_or_text_pairs=sentences,
    padding="longest",
    return_tensors="pt",
    max_length=512,
    truncation=True,
)
attention_mask = batch_data["attention_mask"]
with torch.no_grad():
    model_output = model(**batch_data)
# mean pooling over non-padding tokens
last_hidden = model_output.last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
vectors = normalize(vectors.numpy(), norm="l2", axis=1)
print(vectors.shape)  # 2,768
```
## Training Detail

**Hardware:** a single A100-80GB GPU

**Environment:** torch 1.13.*; transformers Trainer + DeepSpeed + gradient checkpointing

**Learning rate:** 1e-6

**batch_size:** 1024 for the base model and 768 for the large model, each with an extra 20% of hard negatives

**Data volume:** about 1 million examples for the first-version models, of which roughly 200K were constructed with an LLM (13B parameters). The v2 models were trained on about 20 million examples.

## ToDoList

**Evaluation stability:** during evaluation we found that the Clustering tasks can differ from the official results by a small margin of about ±0.0x, because the clustering code does not set a random seed. The gap is negligible and does not affect the conclusions.

**Higher-quality long-text training and test data:** most of the training data was constructed with a 13B model, so it certainly contains noise.
The test data is largely derived from MRC datasets, so the questions are all factoid-style and do not match the real-world distribution.

**OOD performance:** although many embedding models have appeared recently, for less common domains none of these models (stella, OpenAI, and Cohere included) beats BM25.
## Reference
1. https://www.scidb.cn/en/detail?dataSetId=c6a3fe684227415a9db8e21bac4a15ab
2. https://github.com/wangyuxinwhy/uniem
3. https://github.com/CLUEbenchmark/SimCLUE
4. https://arxiv.org/abs/1612.00796
5. https://kexue.fm/archives/8847
6. https://huggingface.co/sensenova/piccolo-base-zh
7. https://kexue.fm/archives/7947
8. https://github.com/FlagOpen/FlagEmbedding
9. https://github.com/THUDM/LongBench
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
rohitanurag/ClinicalGPT-Pubmed-Instruct-V1.0 | rohitanurag | question-answering | [
"peft",
"safetensors",
"mistral",
"medical",
"lifescience",
"drugdiscovery",
"question-answering",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | 1,729,593,954,000 | 2024-10-22T11:13:25 | 0 | 2 | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
library_name: peft
license: apache-2.0
pipeline_tag: question-answering
tags:
- medical
- lifescience
- drugdiscovery
---
# ClinicalGPT-Pubmed-Instruct-V1.0
## Overview
ClinicalGPT-Pubmed-Instruct-V1.0 is a specialized language model fine-tuned from the mistralai/Mistral-7B-Instruct-v0.2 base model. Trained primarily on 10 million PubMed abstracts and titles, it generates responses to life-science and medical questions with relevant citations from various scientific sources.
## Key Features
- Built on Mistral-7B-Instruct-v0.2 base model
- Primary training on 10M PubMed abstracts and titles
- Generates answers with scientific citations from multiple sources
- Specialized for medical and life science domains
## Applications
- **Life Science Research**: Generate accurate, referenced answers for biomedical and healthcare queries
- **Pharmaceutical Industry**: Support healthcare professionals with evidence-based responses
- **Medical Education**: Aid students and educators with scientifically-supported content from various academic sources
## System Requirements
### GPU Requirements
- **Minimum VRAM**: 16-18 GB for inference in BF16 (BFloat16) precision
- **Recommended GPUs**:
- NVIDIA A100 (20GB) - Ideal for BF16 precision
- Any GPU with 16+ GB VRAM
- Performance may vary based on available memory
### Software Prerequisites
- Python 3.x
- PyTorch
- Transformers library
### Basic Implementation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Set parameters
model_dir = "rohitanurag/ClinicalGPT-Pubmed-Instruct-V1.0"
max_new_tokens = 1500
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir).to(device)
# Define your question
question = "What is the role of the tumor microenvironment in cancer progression?"
prompt = f"""Please provide the answer to the question asked.
### Question: {question}
### Answer: """
# Tokenize input
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(device)
# Generate output
output_ids = model.generate(
inputs.input_ids,
attention_mask=inputs.attention_mask,
max_new_tokens=1000,
repetition_penalty=1.2,
pad_token_id=tokenizer.eos_token_id,
)
# Decode and print
generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(f"Generated Answer:\n{generated_text}")
```
## Sample Output
```
### Question: What is the role of the tumor microenvironment in cancer progression, and how does it influence the response to therapy?
### Answer:
The tumor microenvironment (TME) refers to the complex network of cells, extracellular matrix components, signaling molecules, and immune cells that surround a growing tumor. It plays an essential role in regulating various aspects of cancer development and progression...
### References:
1. Hanahan D, Weinberg RA. Hallmarks of Cancer: The Next Generation. Cell. 2011;144(5):646-74. doi:10.1016/j.cell.2011.03.019
2. Coussens LM, Pollard JW. Angiogenesis and Metastasis. Nature Reviews Cancer. 2006;6(1):57-68. doi:10.1038/nrc2210
3. Mantovani A, et al. Cancer's Educated Environment: How the Tumour Microenvironment Promotes Progression. Cell. 2017;168(6):988-1001.e15. doi:10.1016/j.cell.2017.02.011
4. Cheng YH, et al. Targeting the Tumor Microenvironment for Improved Therapy Response. Journal of Clinical Oncology. 2018;34(18_suppl):LBA10001. doi:10.1200/JCO.2018.34.18_suppl.LBA10001
5. Kang YS, et al. Role of the Tumor Microenvironment in Cancer Immunotherapy. Current Opinion in Pharmacology. 2018;30:101-108. doi:10.1016/j.ycoop.20
```
## Model Details
- **Base Model**: Mistral-7B-Instruct-v0.2
- **Primary Training Data**: 10 million PubMed abstracts and titles
- **Specialization**: Medical question-answering with scientific citations
- **Output**: Generates detailed answers with relevant academic references
## Future Development
ClinicalGPT-Pubmed-Instruct-V2.0 is under development, featuring:
- Training on 20 million new PubMed articles
- Inclusion of full-text articles from various academic sources
- Enhanced performance for life science tasks
- Expanded citation capabilities across multiple scientific databases
## Contributors
- Rohit Anurag – Principal Data Scientist
- Aneesh Paul – Data Scientist
## License
Licensed under the Apache License, Version 2.0. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 | [
"HALLMARKS OF CANCER"
] | BioNLP |
Nextcloud-AI/multilingual-e5-large-instruct | Nextcloud-AI | feature-extraction | [
"sentence-transformers",
"onnx",
"safetensors",
"xlm-roberta",
"feature-extraction",
"mteb",
"transformers",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2402.05672",
"arxiv:2401.00368",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,727,419,452,000 | 2024-09-26T06:33:15 | 329 | 5 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- sentence-transformers
- transformers
model-index:
- name: multilingual-e5-large-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.23880597014924
- type: ap
value: 39.07351965022687
- type: f1
value: 70.04836733862683
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.71306209850107
- type: ap
value: 79.01499914759529
- type: f1
value: 64.81951817560703
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.85307346326837
- type: ap
value: 22.447519885878737
- type: f1
value: 61.0162730745633
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.04925053533191
- type: ap
value: 23.44983217128922
- type: f1
value: 62.5723230907759
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.28742500000001
- type: ap
value: 94.8449918887462
- type: f1
value: 96.28680923610432
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 56.716
- type: f1
value: 55.76510398266401
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 52.99999999999999
- type: f1
value: 52.00829994765178
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.806000000000004
- type: f1
value: 48.082345914983634
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.507999999999996
- type: f1
value: 47.68752844642045
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.709999999999994
- type: f1
value: 47.05870376637181
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.662000000000006
- type: f1
value: 43.42371965372771
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.721
- type: map_at_10
value: 49.221
- type: map_at_100
value: 49.884
- type: map_at_1000
value: 49.888
- type: map_at_3
value: 44.31
- type: map_at_5
value: 47.276
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 49.5
- type: mrr_at_100
value: 50.163000000000004
- type: mrr_at_1000
value: 50.166
- type: mrr_at_3
value: 44.618
- type: mrr_at_5
value: 47.541
- type: ndcg_at_1
value: 31.721
- type: ndcg_at_10
value: 58.384
- type: ndcg_at_100
value: 61.111000000000004
- type: ndcg_at_1000
value: 61.187999999999995
- type: ndcg_at_3
value: 48.386
- type: ndcg_at_5
value: 53.708999999999996
- type: precision_at_1
value: 31.721
- type: precision_at_10
value: 8.741
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.609
- type: recall_at_1
value: 31.721
- type: recall_at_10
value: 87.411
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.044
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.40419580759799
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.48593255007969
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.889179122289995
- type: mrr
value: 77.61146286769556
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.15075203727929
- type: cos_sim_spearman
value: 86.9622224570873
- type: euclidean_pearson
value: 86.70473853624121
- type: euclidean_spearman
value: 86.9622224570873
- type: manhattan_pearson
value: 86.21089380980065
- type: manhattan_spearman
value: 86.75318154937008
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.65553235908142
- type: f1
value: 99.60681976339595
- type: precision
value: 99.58246346555325
- type: recall
value: 99.65553235908142
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26260180497468
- type: f1
value: 99.14520507740848
- type: precision
value: 99.08650671362535
- type: recall
value: 99.26260180497468
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.07412538967787
- type: f1
value: 97.86629719431936
- type: precision
value: 97.76238309664012
- type: recall
value: 98.07412538967787
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.42074776197998
- type: f1
value: 99.38564156573635
- type: precision
value: 99.36808846761454
- type: recall
value: 99.42074776197998
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.73376623376623
- type: f1
value: 85.68480707214599
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.935218072113855
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.276389017675264
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.764166666666668
- type: map_at_10
value: 37.298166666666674
- type: map_at_100
value: 38.530166666666666
- type: map_at_1000
value: 38.64416666666667
- type: map_at_3
value: 34.484833333333334
- type: map_at_5
value: 36.0385
- type: mrr_at_1
value: 32.93558333333333
- type: mrr_at_10
value: 41.589749999999995
- type: mrr_at_100
value: 42.425333333333334
- type: mrr_at_1000
value: 42.476333333333336
- type: mrr_at_3
value: 39.26825
- type: mrr_at_5
value: 40.567083333333336
- type: ndcg_at_1
value: 32.93558333333333
- type: ndcg_at_10
value: 42.706583333333334
- type: ndcg_at_100
value: 47.82483333333333
- type: ndcg_at_1000
value: 49.95733333333334
- type: ndcg_at_3
value: 38.064750000000004
- type: ndcg_at_5
value: 40.18158333333333
- type: precision_at_1
value: 32.93558333333333
- type: precision_at_10
value: 7.459833333333334
- type: precision_at_100
value: 1.1830833333333335
- type: precision_at_1000
value: 0.15608333333333332
- type: precision_at_3
value: 17.5235
- type: precision_at_5
value: 12.349833333333333
- type: recall_at_1
value: 27.764166666666668
- type: recall_at_10
value: 54.31775
- type: recall_at_100
value: 76.74350000000001
- type: recall_at_1000
value: 91.45208333333332
- type: recall_at_3
value: 41.23425
- type: recall_at_5
value: 46.73983333333334
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.969
- type: map_at_10
value: 21.584999999999997
- type: map_at_100
value: 23.3
- type: map_at_1000
value: 23.5
- type: map_at_3
value: 18.218999999999998
- type: map_at_5
value: 19.983
- type: mrr_at_1
value: 29.316
- type: mrr_at_10
value: 40.033
- type: mrr_at_100
value: 40.96
- type: mrr_at_1000
value: 41.001
- type: mrr_at_3
value: 37.123
- type: mrr_at_5
value: 38.757999999999996
- type: ndcg_at_1
value: 29.316
- type: ndcg_at_10
value: 29.858
- type: ndcg_at_100
value: 36.756
- type: ndcg_at_1000
value: 40.245999999999995
- type: ndcg_at_3
value: 24.822
- type: ndcg_at_5
value: 26.565
- type: precision_at_1
value: 29.316
- type: precision_at_10
value: 9.186
- type: precision_at_100
value: 1.6549999999999998
- type: precision_at_1000
value: 0.22999999999999998
- type: precision_at_3
value: 18.436
- type: precision_at_5
value: 13.876
- type: recall_at_1
value: 12.969
- type: recall_at_10
value: 35.142
- type: recall_at_100
value: 59.143
- type: recall_at_1000
value: 78.594
- type: recall_at_3
value: 22.604
- type: recall_at_5
value: 27.883000000000003
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.527999999999999
- type: map_at_10
value: 17.974999999999998
- type: map_at_100
value: 25.665
- type: map_at_1000
value: 27.406000000000002
- type: map_at_3
value: 13.017999999999999
- type: map_at_5
value: 15.137
- type: mrr_at_1
value: 62.5
- type: mrr_at_10
value: 71.891
- type: mrr_at_100
value: 72.294
- type: mrr_at_1000
value: 72.296
- type: mrr_at_3
value: 69.958
- type: mrr_at_5
value: 71.121
- type: ndcg_at_1
value: 50.875
- type: ndcg_at_10
value: 38.36
- type: ndcg_at_100
value: 44.235
- type: ndcg_at_1000
value: 52.154
- type: ndcg_at_3
value: 43.008
- type: ndcg_at_5
value: 40.083999999999996
- type: precision_at_1
value: 62.5
- type: precision_at_10
value: 30.0
- type: precision_at_100
value: 10.038
- type: precision_at_1000
value: 2.0869999999999997
- type: precision_at_3
value: 46.833000000000006
- type: precision_at_5
value: 38.800000000000004
- type: recall_at_1
value: 8.527999999999999
- type: recall_at_10
value: 23.828
- type: recall_at_100
value: 52.322
- type: recall_at_1000
value: 77.143
- type: recall_at_3
value: 14.136000000000001
- type: recall_at_5
value: 17.761
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.51
- type: f1
value: 47.632159862049896
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.734
- type: map_at_10
value: 72.442
- type: map_at_100
value: 72.735
- type: map_at_1000
value: 72.75
- type: map_at_3
value: 70.41199999999999
- type: map_at_5
value: 71.80499999999999
- type: mrr_at_1
value: 65.212
- type: mrr_at_10
value: 76.613
- type: mrr_at_100
value: 76.79899999999999
- type: mrr_at_1000
value: 76.801
- type: mrr_at_3
value: 74.8
- type: mrr_at_5
value: 76.12400000000001
- type: ndcg_at_1
value: 65.212
- type: ndcg_at_10
value: 77.988
- type: ndcg_at_100
value: 79.167
- type: ndcg_at_1000
value: 79.452
- type: ndcg_at_3
value: 74.362
- type: ndcg_at_5
value: 76.666
- type: precision_at_1
value: 65.212
- type: precision_at_10
value: 10.003
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 29.518
- type: precision_at_5
value: 19.016
- type: recall_at_1
value: 60.734
- type: recall_at_10
value: 90.824
- type: recall_at_100
value: 95.71600000000001
- type: recall_at_1000
value: 97.577
- type: recall_at_3
value: 81.243
- type: recall_at_5
value: 86.90299999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.845
- type: map_at_10
value: 39.281
- type: map_at_100
value: 41.422
- type: map_at_1000
value: 41.593
- type: map_at_3
value: 34.467
- type: map_at_5
value: 37.017
- type: mrr_at_1
value: 47.531
- type: mrr_at_10
value: 56.204
- type: mrr_at_100
value: 56.928999999999995
- type: mrr_at_1000
value: 56.962999999999994
- type: mrr_at_3
value: 54.115
- type: mrr_at_5
value: 55.373000000000005
- type: ndcg_at_1
value: 47.531
- type: ndcg_at_10
value: 47.711999999999996
- type: ndcg_at_100
value: 54.510999999999996
- type: ndcg_at_1000
value: 57.103
- type: ndcg_at_3
value: 44.145
- type: ndcg_at_5
value: 45.032
- type: precision_at_1
value: 47.531
- type: precision_at_10
value: 13.194
- type: precision_at_100
value: 2.045
- type: precision_at_1000
value: 0.249
- type: precision_at_3
value: 29.424
- type: precision_at_5
value: 21.451
- type: recall_at_1
value: 23.845
- type: recall_at_10
value: 54.967
- type: recall_at_100
value: 79.11399999999999
- type: recall_at_1000
value: 94.56700000000001
- type: recall_at_3
value: 40.256
- type: recall_at_5
value: 46.215
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.819
- type: map_at_10
value: 60.889
- type: map_at_100
value: 61.717999999999996
- type: map_at_1000
value: 61.778
- type: map_at_3
value: 57.254000000000005
- type: map_at_5
value: 59.541
- type: mrr_at_1
value: 75.638
- type: mrr_at_10
value: 82.173
- type: mrr_at_100
value: 82.362
- type: mrr_at_1000
value: 82.37
- type: mrr_at_3
value: 81.089
- type: mrr_at_5
value: 81.827
- type: ndcg_at_1
value: 75.638
- type: ndcg_at_10
value: 69.317
- type: ndcg_at_100
value: 72.221
- type: ndcg_at_1000
value: 73.382
- type: ndcg_at_3
value: 64.14
- type: ndcg_at_5
value: 67.07600000000001
- type: precision_at_1
value: 75.638
- type: precision_at_10
value: 14.704999999999998
- type: precision_at_100
value: 1.698
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 41.394999999999996
- type: precision_at_5
value: 27.162999999999997
- type: recall_at_1
value: 37.819
- type: recall_at_10
value: 73.52499999999999
- type: recall_at_100
value: 84.875
- type: recall_at_1000
value: 92.559
- type: recall_at_3
value: 62.092999999999996
- type: recall_at_5
value: 67.907
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.60079999999999
- type: ap
value: 92.67396345347356
- type: f1
value: 94.5988098167121
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.285
- type: map_at_10
value: 33.436
- type: map_at_100
value: 34.63
- type: map_at_1000
value: 34.681
- type: map_at_3
value: 29.412
- type: map_at_5
value: 31.715
- type: mrr_at_1
value: 21.848
- type: mrr_at_10
value: 33.979
- type: mrr_at_100
value: 35.118
- type: mrr_at_1000
value: 35.162
- type: mrr_at_3
value: 30.036
- type: mrr_at_5
value: 32.298
- type: ndcg_at_1
value: 21.862000000000002
- type: ndcg_at_10
value: 40.43
- type: ndcg_at_100
value: 46.17
- type: ndcg_at_1000
value: 47.412
- type: ndcg_at_3
value: 32.221
- type: ndcg_at_5
value: 36.332
- type: precision_at_1
value: 21.862000000000002
- type: precision_at_10
value: 6.491
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.744
- type: precision_at_5
value: 10.331999999999999
- type: recall_at_1
value: 21.285
- type: recall_at_10
value: 62.083
- type: recall_at_100
value: 88.576
- type: recall_at_1000
value: 98.006
- type: recall_at_3
value: 39.729
- type: recall_at_5
value: 49.608000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.92612859097127
- type: f1
value: 93.82370333372853
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.67681036911807
- type: f1
value: 92.14191382411472
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.26817878585723
- type: f1
value: 91.92824250337878
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.96554963983714
- type: f1
value: 90.02859329630792
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.02509860164935
- type: f1
value: 89.30665159182062
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.55515370705244
- type: f1
value: 87.94449232331907
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 82.4623803009576
- type: f1
value: 66.06738378772725
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.3716539870386
- type: f1
value: 60.37614033396853
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.34022681787857
- type: f1
value: 58.302008026952
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.72095208268087
- type: f1
value: 59.64524724009049
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.87020437432773
- type: f1
value: 57.80202694670567
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.73598553345387
- type: f1
value: 58.19628250675031
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.6630800268998
- type: f1
value: 65.00996668051691
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.7128446536651
- type: f1
value: 57.95860594874963
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.61129791526563
- type: f1
value: 59.75328290206483
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.00134498991257
- type: f1
value: 67.0230483991802
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.54604628946976
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.032952252858095
- type: f1
value: 58.715741857057104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.80901143241427
- type: f1
value: 68.33963989243877
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.47141896435777
- type: f1
value: 69.56765020308262
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.2373907195696
- type: f1
value: 69.04529836036467
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.05783456624076
- type: f1
value: 74.69430584708174
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.82111634162744
- type: f1
value: 70.77228952803762
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.25353059852051
- type: f1
value: 71.05310103416411
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.28648285137861
- type: f1
value: 69.08020473732226
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.31540013449899
- type: f1
value: 70.9426355465791
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.2151983860121
- type: f1
value: 67.52541755908858
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.58372562205784
- type: f1
value: 69.49769064229827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.9233355749832
- type: f1
value: 69.36311548259593
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.07330195023538
- type: f1
value: 64.99882022345572
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.62273032952253
- type: f1
value: 70.6394885471001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.77000672494957
- type: f1
value: 62.9368944815065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.453261600538
- type: f1
value: 70.85069934666681
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6906523201076
- type: f1
value: 72.03249740074217
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.03631472763953
- type: f1
value: 59.3165215571852
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.913920645595155
- type: f1
value: 57.367337711611285
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.42837928715535
- type: f1
value: 52.60527294970906
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33490248823135
- type: f1
value: 63.213340969404065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.58507061197041
- type: f1
value: 68.40256628040486
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.11230665770006
- type: f1
value: 66.44863577842305
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.70073974445192
- type: f1
value: 67.21291337273702
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.43913920645595
- type: f1
value: 64.09838087422806
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.80026899798251
- type: f1
value: 68.76986742962444
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.78816408876934
- type: f1
value: 62.18781873428972
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.6577000672495
- type: f1
value: 68.75171511133003
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.42501681237391
- type: f1
value: 71.18434963451544
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.64828513786146
- type: f1
value: 70.67741914007422
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.62811028917284
- type: f1
value: 71.36402039740959
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.88634835238736
- type: f1
value: 69.23701923480677
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.15938130464022
- type: f1
value: 71.87792218993388
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.96301277740416
- type: f1
value: 67.29584200202983
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.49562878278412
- type: f1
value: 66.91716685679431
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6805648957633
- type: f1
value: 72.02723592594374
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.00605245460659
- type: f1
value: 60.16716669482932
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.90988567585742
- type: f1
value: 63.99405488777784
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.62273032952253
- type: f1
value: 65.17213906909481
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.50907868190988
- type: f1
value: 69.15165697194853
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.30733019502352
- type: f1
value: 66.69024007380474
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.24277067921989
- type: f1
value: 68.80515408492947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.49831876260929
- type: f1
value: 64.83778567111116
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.28782784129119
- type: f1
value: 69.3294186700733
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.315400134499
- type: f1
value: 71.22674385243207
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.37794216543377
- type: f1
value: 68.96962492838232
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.33557498318764
- type: f1
value: 72.28949738478356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.84398117014123
- type: f1
value: 64.71026362091463
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.76462676529925
- type: f1
value: 69.8229667407667
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.02420981842636
- type: f1
value: 71.76576384895898
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.7572293207801
- type: f1
value: 72.76840765295256
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.02286482851379
- type: f1
value: 66.17237947327872
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.60928043039678
- type: f1
value: 77.27094731234773
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.68325487558843
- type: f1
value: 77.97530399082261
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.13315400134498
- type: f1
value: 75.97558584796424
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.47410894418292
- type: f1
value: 80.52244841473792
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.9670477471419
- type: f1
value: 77.37318805793146
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.09683927370544
- type: f1
value: 77.69773737430847
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.20847343644922
- type: f1
value: 75.17071738727348
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.07464694014796
- type: f1
value: 77.16136207698571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.53396099529255
- type: f1
value: 73.58296404484122
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.75319435104237
- type: f1
value: 75.24674707850833
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.0948217888366
- type: f1
value: 76.47559490205028
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.07599193006052
- type: f1
value: 70.76028043093511
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.10490921318089
- type: f1
value: 77.01215275283272
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.25756556825824
- type: f1
value: 70.20605314648762
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 77.3899269057439
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.35440484196369
- type: f1
value: 79.58964690002772
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.42299932750504
- type: f1
value: 68.07844356925413
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.15669132481507
- type: f1
value: 65.89383352608513
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.11432414256894
- type: f1
value: 57.69910594559806
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.24747814391392
- type: f1
value: 70.42455553830918
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46267652992603
- type: f1
value: 76.8854559308316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.24815063887021
- type: f1
value: 72.77805034658074
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11566913248151
- type: f1
value: 73.86147988001356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.0168123739072
- type: f1
value: 69.38515920054571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.41156691324814
- type: f1
value: 73.43474953408237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.39609952925353
- type: f1
value: 67.29731681109291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.20914593140552
- type: f1
value: 77.07066497935367
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.52387357094821
- type: f1
value: 78.5259569473291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.6913248150639
- type: f1
value: 76.91201656350455
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.1217215870881
- type: f1
value: 77.41179937912504
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.25891055817083
- type: f1
value: 75.8089244542887
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.70679219905851
- type: f1
value: 78.21459594517711
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.83523873570948
- type: f1
value: 74.86847028401978
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.71755211835911
- type: f1
value: 74.0214326485662
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.06523201075991
- type: f1
value: 79.10545620325138
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.91862811028918
- type: f1
value: 66.50386121217983
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.93140551445865
- type: f1
value: 70.755435928495
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.40753194351042
- type: f1
value: 71.61816115782923
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.1815736381977
- type: f1
value: 75.08016717887205
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.86482851378614
- type: f1
value: 72.39521180006291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46940147948891
- type: f1
value: 76.70044085362349
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195024
- type: f1
value: 71.5721825332298
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.7511768661735
- type: f1
value: 75.17918654541515
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69535978480162
- type: f1
value: 78.90019070153316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.45729657027572
- type: f1
value: 76.19578371794672
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.92715354123554
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 35.53536244162518
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.08507884504006
- type: mrr
value: 34.32436977159129
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.935
- type: map_at_10
value: 13.297
- type: map_at_100
value: 16.907
- type: map_at_1000
value: 18.391
- type: map_at_3
value: 9.626999999999999
- type: map_at_5
value: 11.190999999999999
- type: mrr_at_1
value: 46.129999999999995
- type: mrr_at_10
value: 54.346000000000004
- type: mrr_at_100
value: 55.067
- type: mrr_at_1000
value: 55.1
- type: mrr_at_3
value: 51.961
- type: mrr_at_5
value: 53.246
- type: ndcg_at_1
value: 44.118
- type: ndcg_at_10
value: 35.534
- type: ndcg_at_100
value: 32.946999999999996
- type: ndcg_at_1000
value: 41.599000000000004
- type: ndcg_at_3
value: 40.25
- type: ndcg_at_5
value: 37.978
- type: precision_at_1
value: 46.129999999999995
- type: precision_at_10
value: 26.842
- type: precision_at_100
value: 8.427
- type: precision_at_1000
value: 2.128
- type: precision_at_3
value: 37.977
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.935
- type: recall_at_10
value: 17.211000000000002
- type: recall_at_100
value: 34.33
- type: recall_at_1000
value: 65.551
- type: recall_at_3
value: 10.483
- type: recall_at_5
value: 13.078999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.231
- type: map_at_10
value: 50.202000000000005
- type: map_at_100
value: 51.154999999999994
- type: map_at_1000
value: 51.181
- type: map_at_3
value: 45.774
- type: map_at_5
value: 48.522
- type: mrr_at_1
value: 39.687
- type: mrr_at_10
value: 52.88
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.58500000000001
- type: mrr_at_3
value: 49.228
- type: mrr_at_5
value: 51.525
- type: ndcg_at_1
value: 39.687
- type: ndcg_at_10
value: 57.754000000000005
- type: ndcg_at_100
value: 61.597
- type: ndcg_at_1000
value: 62.18900000000001
- type: ndcg_at_3
value: 49.55
- type: ndcg_at_5
value: 54.11899999999999
- type: precision_at_1
value: 39.687
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 22.229
- type: precision_at_5
value: 15.939
- type: recall_at_1
value: 35.231
- type: recall_at_10
value: 78.083
- type: recall_at_100
value: 94.42099999999999
- type: recall_at_1000
value: 98.81
- type: recall_at_3
value: 57.047000000000004
- type: recall_at_5
value: 67.637
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.241
- type: map_at_10
value: 85.462
- type: map_at_100
value: 86.083
- type: map_at_1000
value: 86.09700000000001
- type: map_at_3
value: 82.49499999999999
- type: map_at_5
value: 84.392
- type: mrr_at_1
value: 82.09
- type: mrr_at_10
value: 88.301
- type: mrr_at_100
value: 88.383
- type: mrr_at_1000
value: 88.384
- type: mrr_at_3
value: 87.37
- type: mrr_at_5
value: 88.035
- type: ndcg_at_1
value: 82.12
- type: ndcg_at_10
value: 89.149
- type: ndcg_at_100
value: 90.235
- type: ndcg_at_1000
value: 90.307
- type: ndcg_at_3
value: 86.37599999999999
- type: ndcg_at_5
value: 87.964
- type: precision_at_1
value: 82.12
- type: precision_at_10
value: 13.56
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.88
- type: precision_at_5
value: 24.92
- type: recall_at_1
value: 71.241
- type: recall_at_10
value: 96.128
- type: recall_at_100
value: 99.696
- type: recall_at_1000
value: 99.994
- type: recall_at_3
value: 88.181
- type: recall_at_5
value: 92.694
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.59757799655151
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.27391998854624
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.243
- type: map_at_10
value: 10.965
- type: map_at_100
value: 12.934999999999999
- type: map_at_1000
value: 13.256
- type: map_at_3
value: 7.907
- type: map_at_5
value: 9.435
- type: mrr_at_1
value: 20.9
- type: mrr_at_10
value: 31.849
- type: mrr_at_100
value: 32.964
- type: mrr_at_1000
value: 33.024
- type: mrr_at_3
value: 28.517
- type: mrr_at_5
value: 30.381999999999998
- type: ndcg_at_1
value: 20.9
- type: ndcg_at_10
value: 18.723
- type: ndcg_at_100
value: 26.384999999999998
- type: ndcg_at_1000
value: 32.114
- type: ndcg_at_3
value: 17.753
- type: ndcg_at_5
value: 15.558
- type: precision_at_1
value: 20.9
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 2.078
- type: precision_at_1000
value: 0.345
- type: precision_at_3
value: 16.900000000000002
- type: precision_at_5
value: 13.88
- type: recall_at_1
value: 4.243
- type: recall_at_10
value: 19.885
- type: recall_at_100
value: 42.17
- type: recall_at_1000
value: 70.12
- type: recall_at_3
value: 10.288
- type: recall_at_5
value: 14.072000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.84209174935282
- type: cos_sim_spearman
value: 81.73248048438833
- type: euclidean_pearson
value: 83.02810070308149
- type: euclidean_spearman
value: 81.73248295679514
- type: manhattan_pearson
value: 82.95368060376002
- type: manhattan_spearman
value: 81.60277910998718
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 88.52628804556943
- type: cos_sim_spearman
value: 82.5713913555672
- type: euclidean_pearson
value: 85.8796774746988
- type: euclidean_spearman
value: 82.57137506803424
- type: manhattan_pearson
value: 85.79671002960058
- type: manhattan_spearman
value: 82.49445981618027
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 86.23682503505542
- type: cos_sim_spearman
value: 87.15008956711806
- type: euclidean_pearson
value: 86.79805401524959
- type: euclidean_spearman
value: 87.15008956711806
- type: manhattan_pearson
value: 86.65298502699244
- type: manhattan_spearman
value: 86.97677821948562
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.63370304677802
- type: cos_sim_spearman
value: 84.97105553540318
- type: euclidean_pearson
value: 85.28896108687721
- type: euclidean_spearman
value: 84.97105553540318
- type: manhattan_pearson
value: 85.09663190337331
- type: manhattan_spearman
value: 84.79126831644619
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 90.2614838800733
- type: cos_sim_spearman
value: 91.0509162991835
- type: euclidean_pearson
value: 90.33098317533373
- type: euclidean_spearman
value: 91.05091625871644
- type: manhattan_pearson
value: 90.26250435151107
- type: manhattan_spearman
value: 90.97999594417519
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.80480973335091
- type: cos_sim_spearman
value: 87.313695492969
- type: euclidean_pearson
value: 86.49267251576939
- type: euclidean_spearman
value: 87.313695492969
- type: manhattan_pearson
value: 86.44019901831935
- type: manhattan_spearman
value: 87.24205395460392
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.05662789380672
- type: cos_sim_spearman
value: 90.02759424426651
- type: euclidean_pearson
value: 90.4042483422981
- type: euclidean_spearman
value: 90.02759424426651
- type: manhattan_pearson
value: 90.51446975000226
- type: manhattan_spearman
value: 90.08832889933616
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.5975528273532
- type: cos_sim_spearman
value: 67.62969861411354
- type: euclidean_pearson
value: 69.224275734323
- type: euclidean_spearman
value: 67.62969861411354
- type: manhattan_pearson
value: 69.3761447059927
- type: manhattan_spearman
value: 67.90921005611467
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.11244327231684
- type: cos_sim_spearman
value: 88.37902438979035
- type: euclidean_pearson
value: 87.86054279847336
- type: euclidean_spearman
value: 88.37902438979035
- type: manhattan_pearson
value: 87.77257757320378
- type: manhattan_spearman
value: 88.25208966098123
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.87174608143563
- type: mrr
value: 96.12836872640794
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.258
- type: map_at_100
value: 67.757
- type: map_at_1000
value: 67.78800000000001
- type: map_at_3
value: 64.602
- type: map_at_5
value: 65.64
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.441
- type: mrr_at_100
value: 68.825
- type: mrr_at_1000
value: 68.853
- type: mrr_at_3
value: 66.444
- type: mrr_at_5
value: 67.26100000000001
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.852
- type: ndcg_at_100
value: 73.9
- type: ndcg_at_1000
value: 74.628
- type: ndcg_at_3
value: 67.093
- type: ndcg_at_5
value: 68.58
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.111
- type: precision_at_5
value: 16.733
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.967
- type: recall_at_100
value: 93.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.589
- type: recall_at_5
value: 75.483
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.66633663366336
- type: cos_sim_ap
value: 91.17685358899108
- type: cos_sim_f1
value: 82.16818642350559
- type: cos_sim_precision
value: 83.26488706365504
- type: cos_sim_recall
value: 81.10000000000001
- type: dot_accuracy
value: 99.66633663366336
- type: dot_ap
value: 91.17663411119032
- type: dot_f1
value: 82.16818642350559
- type: dot_precision
value: 83.26488706365504
- type: dot_recall
value: 81.10000000000001
- type: euclidean_accuracy
value: 99.66633663366336
- type: euclidean_ap
value: 91.17685189882275
- type: euclidean_f1
value: 82.16818642350559
- type: euclidean_precision
value: 83.26488706365504
- type: euclidean_recall
value: 81.10000000000001
- type: manhattan_accuracy
value: 99.66633663366336
- type: manhattan_ap
value: 91.2241619496737
- type: manhattan_f1
value: 82.20472440944883
- type: manhattan_precision
value: 86.51933701657458
- type: manhattan_recall
value: 78.3
- type: max_accuracy
value: 99.66633663366336
- type: max_ap
value: 91.2241619496737
- type: max_f1
value: 82.20472440944883
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.85101268897951
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 42.461184054706905
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.44542568873886
- type: mrr
value: 52.33656151854681
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.75982974997539
- type: cos_sim_spearman
value: 30.385405026539914
- type: dot_pearson
value: 30.75982433546523
- type: dot_spearman
value: 30.385405026539914
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22799999999999998
- type: map_at_10
value: 2.064
- type: map_at_100
value: 13.056000000000001
- type: map_at_1000
value: 31.747999999999998
- type: map_at_3
value: 0.67
- type: map_at_5
value: 1.097
- type: mrr_at_1
value: 90.0
- type: mrr_at_10
value: 94.667
- type: mrr_at_100
value: 94.667
- type: mrr_at_1000
value: 94.667
- type: mrr_at_3
value: 94.667
- type: mrr_at_5
value: 94.667
- type: ndcg_at_1
value: 86.0
- type: ndcg_at_10
value: 82.0
- type: ndcg_at_100
value: 64.307
- type: ndcg_at_1000
value: 57.023999999999994
- type: ndcg_at_3
value: 85.816
- type: ndcg_at_5
value: 84.904
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 85.8
- type: precision_at_100
value: 66.46
- type: precision_at_1000
value: 25.202
- type: precision_at_3
value: 90.0
- type: precision_at_5
value: 89.2
- type: recall_at_1
value: 0.22799999999999998
- type: recall_at_10
value: 2.235
- type: recall_at_100
value: 16.185
- type: recall_at_1000
value: 53.620999999999995
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.172
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.75
- type: precision
value: 96.45
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.54913294797689
- type: f1
value: 82.46628131021194
- type: precision
value: 81.1175337186898
- type: recall
value: 85.54913294797689
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.21951219512195
- type: f1
value: 77.33333333333334
- type: precision
value: 75.54878048780488
- type: recall
value: 81.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.26666666666665
- type: precision
value: 98.1
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.5
- type: f1
value: 99.33333333333333
- type: precision
value: 99.25
- type: recall
value: 99.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.2
- type: precision
value: 96.89999999999999
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.18333333333334
- type: precision
value: 96.88333333333333
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.61194029850746
- type: f1
value: 72.81094527363183
- type: precision
value: 70.83333333333333
- type: recall
value: 77.61194029850746
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.91666666666667
- type: precision
value: 91.08333333333334
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.29268292682927
- type: f1
value: 85.27642276422765
- type: precision
value: 84.01277584204414
- type: recall
value: 88.29268292682927
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.0
- type: precision
value: 94.46666666666668
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.681652490887
- type: f1
value: 91.90765492102065
- type: precision
value: 91.05913325232888
- type: recall
value: 93.681652490887
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.17391304347827
- type: f1
value: 89.97101449275361
- type: precision
value: 88.96811594202899
- type: recall
value: 92.17391304347827
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.43478260869566
- type: f1
value: 87.72173913043478
- type: precision
value: 86.42028985507245
- type: recall
value: 90.43478260869566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.03
- type: precision
value: 86.95
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.4
- type: f1
value: 91.45666666666666
- type: precision
value: 90.525
- type: recall
value: 93.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.9059107358263
- type: f1
value: 78.32557872364869
- type: precision
value: 76.78260286824823
- type: recall
value: 81.9059107358263
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.58333333333333
- type: precision
value: 91.73333333333332
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.10000000000001
- type: f1
value: 74.50500000000001
- type: precision
value: 72.58928571428571
- type: recall
value: 79.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.55
- type: precision
value: 95.05
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.0952380952381
- type: f1
value: 77.98458049886621
- type: precision
value: 76.1968253968254
- type: recall
value: 82.0952380952381
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.9
- type: f1
value: 84.99190476190476
- type: precision
value: 83.65
- type: recall
value: 87.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.56666666666666
- type: precision
value: 94.01666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.2
- type: precision
value: 98.0
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.38333333333334
- type: precision
value: 93.78333333333335
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.4
- type: f1
value: 84.10380952380952
- type: precision
value: 82.67
- type: recall
value: 87.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.33333333333334
- type: precision
value: 93.78333333333333
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.4
- type: f1
value: 86.82000000000001
- type: precision
value: 85.64500000000001
- type: recall
value: 89.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.1
- type: f1
value: 93.56666666666668
- type: precision
value: 92.81666666666666
- type: recall
value: 95.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.9
- type: f1
value: 98.6
- type: precision
value: 98.45
- type: recall
value: 98.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.01347708894879
- type: f1
value: 93.51752021563343
- type: precision
value: 92.82794249775381
- type: recall
value: 95.01347708894879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.00854700854701
- type: f1
value: 96.08262108262107
- type: precision
value: 95.65527065527067
- type: recall
value: 97.00854700854701
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5
- type: f1
value: 95.39999999999999
- type: precision
value: 94.88333333333333
- type: recall
value: 96.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5909090909091
- type: f1
value: 95.49242424242425
- type: precision
value: 94.9621212121212
- type: recall
value: 96.5909090909091
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.90566037735849
- type: f1
value: 81.85883997204752
- type: precision
value: 80.54507337526205
- type: recall
value: 84.90566037735849
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5
- type: f1
value: 96.75
- type: precision
value: 96.38333333333333
- type: recall
value: 97.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7704280155642
- type: f1
value: 82.99610894941635
- type: precision
value: 81.32295719844358
- type: recall
value: 86.7704280155642
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.52136752136752
- type: f1
value: 61.89662189662191
- type: precision
value: 59.68660968660969
- type: recall
value: 67.52136752136752
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 86.32
- type: precision
value: 85.015
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.0
- type: f1
value: 94.78333333333333
- type: precision
value: 94.18333333333334
- type: recall
value: 96.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.8785046728972
- type: f1
value: 80.54517133956385
- type: precision
value: 79.154984423676
- type: recall
value: 83.8785046728972
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.01333333333334
- type: precision
value: 91.28333333333333
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.1
- type: f1
value: 96.26666666666667
- type: precision
value: 95.85000000000001
- type: recall
value: 97.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.3
- type: f1
value: 80.67833333333333
- type: precision
value: 79.03928571428571
- type: recall
value: 84.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.3
- type: f1
value: 96.48333333333332
- type: precision
value: 96.08333333333331
- type: recall
value: 97.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.66666666666667
- type: precision
value: 94.16666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.36666666666667
- type: precision
value: 95.96666666666668
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.80666666666667
- type: precision
value: 92.12833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.22333333333334
- type: precision
value: 95.875
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.33333333333333
- type: f1
value: 70.78174603174602
- type: precision
value: 69.28333333333332
- type: recall
value: 74.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.6
- type: f1
value: 32.938348952090365
- type: precision
value: 31.2811038961039
- type: recall
value: 37.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.5
- type: f1
value: 89.13333333333333
- type: precision
value: 88.03333333333333
- type: recall
value: 91.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.14285714285714
- type: f1
value: 77.67857142857143
- type: precision
value: 75.59523809523809
- type: recall
value: 82.14285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.0450054884742
- type: f1
value: 63.070409283362075
- type: precision
value: 60.58992781824835
- type: recall
value: 69.0450054884742
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.1
- type: f1
value: 57.848333333333336
- type: precision
value: 55.69500000000001
- type: recall
value: 63.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.01666666666667
- type: precision
value: 94.5
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.90666666666667
- type: precision
value: 94.425
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.61333333333333
- type: precision
value: 83.27
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.4
- type: f1
value: 71.90746031746032
- type: precision
value: 70.07027777777778
- type: recall
value: 76.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.26666666666667
- type: precision
value: 96.95
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.8
- type: f1
value: 74.39555555555555
- type: precision
value: 72.59416666666667
- type: recall
value: 78.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.78999999999999
- type: precision
value: 93.125
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.75
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.25666666666666
- type: precision
value: 93.64166666666668
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.934306569343065
- type: f1
value: 51.461591936044485
- type: precision
value: 49.37434827945776
- type: recall
value: 56.934306569343065
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.200000000000003
- type: f1
value: 16.91799284049284
- type: precision
value: 15.791855158730158
- type: recall
value: 20.200000000000003
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.2
- type: f1
value: 95.3
- type: precision
value: 94.85
- type: recall
value: 96.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.3
- type: f1
value: 95.11666666666667
- type: precision
value: 94.53333333333333
- type: recall
value: 96.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.88095238095238
- type: f1
value: 87.14285714285714
- type: precision
value: 85.96230158730161
- type: recall
value: 89.88095238095238
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 24.099999999999998
- type: f1
value: 19.630969083349783
- type: precision
value: 18.275094905094907
- type: recall
value: 24.099999999999998
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.4368530020704
- type: f1
value: 79.45183870649709
- type: precision
value: 77.7432712215321
- type: recall
value: 83.4368530020704
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.53333333333333
- type: precision
value: 93.91666666666666
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.8
- type: f1
value: 98.48333333333332
- type: precision
value: 98.33333333333334
- type: recall
value: 98.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.5
- type: f1
value: 14.979285714285714
- type: precision
value: 14.23235060690943
- type: recall
value: 17.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.93939393939394
- type: f1
value: 91.991341991342
- type: precision
value: 91.05339105339105
- type: recall
value: 93.93939393939394
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.31297709923665
- type: f1
value: 86.76844783715012
- type: precision
value: 85.63613231552164
- type: recall
value: 89.31297709923665
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.12663755458514
- type: f1
value: 98.93255701115964
- type: precision
value: 98.83551673944687
- type: recall
value: 99.12663755458514
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.0
- type: f1
value: 89.77999999999999
- type: precision
value: 88.78333333333333
- type: recall
value: 92.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.89265536723164
- type: f1
value: 95.85687382297553
- type: precision
value: 95.33898305084746
- type: recall
value: 96.89265536723164
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.6
- type: f1
value: 11.820611790170615
- type: precision
value: 11.022616224355355
- type: recall
value: 14.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.93333333333334
- type: precision
value: 94.48666666666666
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.72333333333334
- type: precision
value: 83.44166666666666
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.47333333333333
- type: precision
value: 92.875
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.71666666666665
- type: precision
value: 95.28333333333335
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.8
- type: f1
value: 14.511074040901628
- type: precision
value: 13.503791000666002
- type: recall
value: 17.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.10187667560321
- type: f1
value: 92.46648793565683
- type: precision
value: 91.71134941912423
- type: recall
value: 94.10187667560321
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.11666666666666
- type: precision
value: 95.68333333333334
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.72727272727273
- type: f1
value: 66.58949745906267
- type: precision
value: 63.86693017127799
- type: recall
value: 72.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.14084507042254
- type: f1
value: 88.26291079812206
- type: precision
value: 87.32394366197182
- type: recall
value: 90.14084507042254
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.67065868263472
- type: f1
value: 58.2876627696987
- type: precision
value: 55.79255774165953
- type: recall
value: 64.67065868263472
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.41666666666667
- type: precision
value: 93.85
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.172413793103445
- type: f1
value: 49.63992493549144
- type: precision
value: 47.71405113769646
- type: recall
value: 55.172413793103445
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.4417616811983
- type: precision
value: 71.91607981220658
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.61538461538461
- type: f1
value: 80.91452991452994
- type: precision
value: 79.33760683760683
- type: recall
value: 84.61538461538461
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2
- type: f1
value: 97.6
- type: precision
value: 97.3
- type: recall
value: 98.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.5741127348643
- type: f1
value: 72.00417536534445
- type: precision
value: 70.53467872883321
- type: recall
value: 75.5741127348643
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 62.2
- type: f1
value: 55.577460317460314
- type: precision
value: 52.98583333333333
- type: recall
value: 62.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.18241042345277
- type: f1
value: 90.6468124709167
- type: precision
value: 89.95656894679696
- type: recall
value: 92.18241042345277
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.13333333333333
- type: precision
value: 94.66666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.85000000000001
- type: precision
value: 95.39999999999999
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.1259842519685
- type: f1
value: 89.76377952755905
- type: precision
value: 88.71391076115485
- type: recall
value: 92.1259842519685
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.49
- type: precision
value: 91.725
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.5623268698061
- type: f1
value: 73.27364463791058
- type: precision
value: 71.51947852086357
- type: recall
value: 77.5623268698061
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.56666666666666
- type: precision
value: 96.16666666666667
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34615384615384
- type: f1
value: 61.092032967032964
- type: precision
value: 59.27197802197802
- type: recall
value: 66.34615384615384
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.41190476190476
- type: precision
value: 92.7
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.10000000000001
- type: precision
value: 90.13333333333333
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.97333333333334
- type: precision
value: 91.14166666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.21698113207547
- type: f1
value: 90.3796046720575
- type: precision
value: 89.56367924528303
- type: recall
value: 92.21698113207547
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.6
- type: f1
value: 96.91666666666667
- type: precision
value: 96.6
- type: recall
value: 97.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.44525547445255
- type: f1
value: 96.71532846715328
- type: precision
value: 96.35036496350365
- type: recall
value: 97.44525547445255
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.34000000000002
- type: precision
value: 91.49166666666667
- type: recall
value: 94.1
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.2910000000000004
- type: map_at_10
value: 10.373000000000001
- type: map_at_100
value: 15.612
- type: map_at_1000
value: 17.06
- type: map_at_3
value: 6.119
- type: map_at_5
value: 7.917000000000001
- type: mrr_at_1
value: 44.897999999999996
- type: mrr_at_10
value: 56.054
- type: mrr_at_100
value: 56.82000000000001
- type: mrr_at_1000
value: 56.82000000000001
- type: mrr_at_3
value: 52.381
- type: mrr_at_5
value: 53.81
- type: ndcg_at_1
value: 42.857
- type: ndcg_at_10
value: 27.249000000000002
- type: ndcg_at_100
value: 36.529
- type: ndcg_at_1000
value: 48.136
- type: ndcg_at_3
value: 33.938
- type: ndcg_at_5
value: 29.951
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 22.653000000000002
- type: precision_at_100
value: 7.000000000000001
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 32.653
- type: precision_at_5
value: 27.755000000000003
- type: recall_at_1
value: 3.2910000000000004
- type: recall_at_10
value: 16.16
- type: recall_at_100
value: 43.908
- type: recall_at_1000
value: 79.823
- type: recall_at_3
value: 7.156
- type: recall_at_5
value: 10.204
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.05879999999999
- type: ap
value: 14.609748142799111
- type: f1
value: 54.878956295843096
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.61799660441426
- type: f1
value: 64.8698191961434
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.32860036611885
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.34714192048638
- type: cos_sim_ap
value: 80.26732975975634
- type: cos_sim_f1
value: 73.53415148134374
- type: cos_sim_precision
value: 69.34767360299276
- type: cos_sim_recall
value: 78.25857519788919
- type: dot_accuracy
value: 88.34714192048638
- type: dot_ap
value: 80.26733698491206
- type: dot_f1
value: 73.53415148134374
- type: dot_precision
value: 69.34767360299276
- type: dot_recall
value: 78.25857519788919
- type: euclidean_accuracy
value: 88.34714192048638
- type: euclidean_ap
value: 80.26734337771738
- type: euclidean_f1
value: 73.53415148134374
- type: euclidean_precision
value: 69.34767360299276
- type: euclidean_recall
value: 78.25857519788919
- type: manhattan_accuracy
value: 88.30541813196639
- type: manhattan_ap
value: 80.19415808104145
- type: manhattan_f1
value: 73.55143870713441
- type: manhattan_precision
value: 73.25307511122743
- type: manhattan_recall
value: 73.85224274406332
- type: max_accuracy
value: 88.34714192048638
- type: max_ap
value: 80.26734337771738
- type: max_f1
value: 73.55143870713441
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.81061047075717
- type: cos_sim_ap
value: 87.11747055081017
- type: cos_sim_f1
value: 80.04355498817256
- type: cos_sim_precision
value: 78.1165262000733
- type: cos_sim_recall
value: 82.06806282722513
- type: dot_accuracy
value: 89.81061047075717
- type: dot_ap
value: 87.11746902745236
- type: dot_f1
value: 80.04355498817256
- type: dot_precision
value: 78.1165262000733
- type: dot_recall
value: 82.06806282722513
- type: euclidean_accuracy
value: 89.81061047075717
- type: euclidean_ap
value: 87.11746919324248
- type: euclidean_f1
value: 80.04355498817256
- type: euclidean_precision
value: 78.1165262000733
- type: euclidean_recall
value: 82.06806282722513
- type: manhattan_accuracy
value: 89.79508673885202
- type: manhattan_ap
value: 87.11074390832218
- type: manhattan_f1
value: 80.13002540726349
- type: manhattan_precision
value: 77.83826945412311
- type: manhattan_recall
value: 82.56082537727133
- type: max_accuracy
value: 89.81061047075717
- type: max_ap
value: 87.11747055081017
- type: max_f1
value: 80.13002540726349
---
## Multilingual-E5-large-instruct
[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 24 layers and the embedding size is 1024.
## Usage
Below are examples showing how to encode queries and passages from the MS-MARCO passage ranking dataset.
### Transformers
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Zero out padding positions, then mean-pool the remaining token embeddings
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large-instruct')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-large-instruct')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# => [[91.92852783203125, 67.580322265625], [70.3814468383789, 92.1330795288086]]
```
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents
model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')
embeddings = model.encode(input_texts, convert_to_tensor=True, normalize_embeddings=True)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[91.92853546142578, 67.5802993774414], [70.38143157958984, 92.13307189941406]]
```
## Supported Languages
This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
and continually trained on a mixture of multilingual datasets.
It supports the 100 languages covered by xlm-roberta,
but low-resource languages may see performance degradation.
## Training Details
**Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
**First stage**: contrastive pre-training with 1 billion weakly supervised text pairs.
**Second stage**: fine-tuning on datasets from the [E5-mistral](https://arxiv.org/abs/2401.00368) paper.
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
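For a quick local check outside that repository, the `mteb` package can also run individual tasks directly. The snippet below is a minimal sketch: it does not apply the task-specific instructions used for the official numbers (and the task-selection API differs slightly across `mteb` versions), so scores will not exactly match the table above.

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Load the model as a Sentence Transformers encoder
model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')

# Evaluate on a single MTEB task as an example
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder="results/multilingual-e5-large-instruct")
print(results)
```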
## FAQ
**1. Do I need to add instructions to the query?**
Yes, this is how the model was trained; otherwise you will see a performance degradation.
The task definition should be a one-sentence instruction that describes the task.
This is a way to customize text embeddings for different scenarios through natural language instructions.
Please check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for instructions we used for evaluation.
On the other hand, there is no need to add instructions to the document side.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores distribute around 0.7 to 1.0?**
This is a known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
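For reference, here is a minimal sketch of an InfoNCE-style contrastive loss with such a low temperature (the helper below is illustrative, not the actual training code). Dividing cosine similarities by 0.01 means even small similarity gaps dominate the softmax, which is why the raw similarity values end up compressed into a narrow band:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query, positive, negatives, temperature=0.01):
    # Cosine similarity between each query and its positive / negative passages
    pos_sim = F.cosine_similarity(query, positive, dim=-1)                # (batch,)
    neg_sim = F.cosine_similarity(query.unsqueeze(1), negatives, dim=-1)  # (batch, n_neg)
    # Scale by the temperature before the softmax; the positive sits at index 0
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```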
## Citation
If you find our paper or models helpful, please consider citing us as follows:
```
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens.
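If your inputs are routinely longer than this, one common workaround (not specific to this model, and a quality trade-off) is to split a document into chunks that fit the 512-token window and pool the chunk embeddings. A minimal sketch, with an assumed chunk size of 480 tokens to leave room for special tokens:

```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')

def embed_long_text(text: str, chunk_tokens: int = 480):
    # Split with the model's own tokenizer so each chunk stays under the 512-token limit
    tokenizer = model.tokenizer
    ids = tokenizer(text, add_special_tokens=False)['input_ids']
    chunks = [tokenizer.decode(ids[i:i + chunk_tokens]) for i in range(0, len(ids), chunk_tokens)]
    # Embed each chunk, mean-pool, then re-normalize the pooled vector
    emb = model.encode(chunks, convert_to_tensor=True, normalize_embeddings=True)
    return F.normalize(emb.mean(dim=0), p=2, dim=0)
```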
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1480 | Lots-of-LoRAs | null | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 1,735,930,007,000 | 2025-01-03T18:46:52 | 0 | 0 | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1480
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task1480_gene_extraction_jnlpba_dataset
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
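In the absence of an official snippet, a minimal sketch of loading this adapter on top of the base model with the `peft` library (assuming the adapter follows the standard PEFT layout; the prompt below is only illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1480"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the task-specific LoRA adapter to the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Extract the gene or protein names from the following sentence: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```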
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task1480_gene_extraction_jnlpba_dataset sourced from https://github.com/allenai/natural-instructions
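For illustration, the task data can presumably be pulled straight from the Hugging Face Hub with the `datasets` library (the repo id is taken from the link above; split and column names are assumptions to verify):

```python
from datasets import load_dataset

ds = load_dataset("Lots-of-LoRAs/task1480_gene_extraction_jnlpba_dataset")
print(ds)  # inspect which splits and columns are available
```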
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
@misc{brüelgabrielsson2024compressserveservingthousands,
title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
year={2024},
eprint={2407.00066},
archivePrefix={arXiv},
primaryClass={cs.DC},
url={https://arxiv.org/abs/2407.00066},
}
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"JNLPBA"
] | TBD |
McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse | McGill-NLP | sentence-similarity | [
"peft",
"safetensors",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2404.05961",
"license:mit",
"model-index",
"region:us"
] | 1,714,445,132,000 | 2024-04-30T03:42:49 | 2,287 | 4 | ---
language:
- en
library_name: peft
license: mit
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
model-index:
- name: LLM2Vec-Meta-Llama-3-unsupervised
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.70149253731343
- type: ap
value: 40.824269118508354
- type: f1
value: 70.55918234479084
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 80.6812
- type: ap
value: 76.63327889516552
- type: f1
value: 80.5276613226382
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.002
- type: f1
value: 39.67277678335084
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 42.548
- type: map_at_100
value: 43.492999999999995
- type: map_at_1000
value: 43.5
- type: map_at_3
value: 37.376
- type: map_at_5
value: 40.359
- type: mrr_at_1
value: 27.24
- type: mrr_at_10
value: 42.945
- type: mrr_at_100
value: 43.89
- type: mrr_at_1000
value: 43.897000000000006
- type: mrr_at_3
value: 37.779
- type: mrr_at_5
value: 40.755
- type: ndcg_at_1
value: 26.173999999999996
- type: ndcg_at_10
value: 51.731
- type: ndcg_at_100
value: 55.684999999999995
- type: ndcg_at_1000
value: 55.86
- type: ndcg_at_3
value: 41.122
- type: ndcg_at_5
value: 46.491
- type: precision_at_1
value: 26.173999999999996
- type: precision_at_10
value: 8.108
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 17.330000000000002
- type: precision_at_5
value: 13.001
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 81.081
- type: recall_at_100
value: 98.222
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 51.991
- type: recall_at_5
value: 65.007
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 49.215974795578546
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.71067780141813
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 57.15639347603191
- type: mrr
value: 71.4509959108297
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 84.67361609277127
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.76623376623375
- type: f1
value: 84.70041172334481
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.39251163108548
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.30501371807517
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: cqadupstack/android
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.409
- type: map_at_10
value: 36.925000000000004
- type: map_at_100
value: 38.651
- type: map_at_1000
value: 38.798
- type: map_at_3
value: 33.437
- type: map_at_5
value: 35.506
- type: mrr_at_1
value: 33.763
- type: mrr_at_10
value: 43.442
- type: mrr_at_100
value: 44.339
- type: mrr_at_1000
value: 44.391000000000005
- type: mrr_at_3
value: 40.749
- type: mrr_at_5
value: 42.408
- type: ndcg_at_1
value: 33.763
- type: ndcg_at_10
value: 43.486999999999995
- type: ndcg_at_100
value: 49.71
- type: ndcg_at_1000
value: 51.81
- type: ndcg_at_3
value: 38.586
- type: ndcg_at_5
value: 41.074
- type: precision_at_1
value: 33.763
- type: precision_at_10
value: 8.798
- type: precision_at_100
value: 1.544
- type: precision_at_1000
value: 0.21
- type: precision_at_3
value: 19.361
- type: precision_at_5
value: 14.335
- type: recall_at_1
value: 26.409
- type: recall_at_10
value: 55.352999999999994
- type: recall_at_100
value: 81.66799999999999
- type: recall_at_1000
value: 95.376
- type: recall_at_3
value: 40.304
- type: recall_at_5
value: 47.782000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: cqadupstack/english
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.6
- type: map_at_10
value: 36.42
- type: map_at_100
value: 37.628
- type: map_at_1000
value: 37.767
- type: map_at_3
value: 33.553
- type: map_at_5
value: 35.118
- type: mrr_at_1
value: 34.394999999999996
- type: mrr_at_10
value: 42.586
- type: mrr_at_100
value: 43.251
- type: mrr_at_1000
value: 43.303000000000004
- type: mrr_at_3
value: 40.297
- type: mrr_at_5
value: 41.638
- type: ndcg_at_1
value: 34.394999999999996
- type: ndcg_at_10
value: 42.05
- type: ndcg_at_100
value: 46.371
- type: ndcg_at_1000
value: 48.76
- type: ndcg_at_3
value: 37.936
- type: ndcg_at_5
value: 39.827
- type: precision_at_1
value: 34.394999999999996
- type: precision_at_10
value: 8.268
- type: precision_at_100
value: 1.355
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 18.726000000000003
- type: precision_at_5
value: 13.541
- type: recall_at_1
value: 26.6
- type: recall_at_10
value: 51.529
- type: recall_at_100
value: 70.038
- type: recall_at_1000
value: 85.67
- type: recall_at_3
value: 39.448
- type: recall_at_5
value: 44.6
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: cqadupstack/gaming
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.863000000000003
- type: map_at_10
value: 43.733
- type: map_at_100
value: 45.005
- type: map_at_1000
value: 45.074
- type: map_at_3
value: 40.593
- type: map_at_5
value: 42.272
- type: mrr_at_1
value: 37.555
- type: mrr_at_10
value: 47.532999999999994
- type: mrr_at_100
value: 48.431999999999995
- type: mrr_at_1000
value: 48.47
- type: mrr_at_3
value: 44.901
- type: mrr_at_5
value: 46.274
- type: ndcg_at_1
value: 37.555
- type: ndcg_at_10
value: 49.789
- type: ndcg_at_100
value: 55.059999999999995
- type: ndcg_at_1000
value: 56.434
- type: ndcg_at_3
value: 44.238
- type: ndcg_at_5
value: 46.698
- type: precision_at_1
value: 37.555
- type: precision_at_10
value: 8.257
- type: precision_at_100
value: 1.189
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 20.23
- type: precision_at_5
value: 13.868
- type: recall_at_1
value: 31.863000000000003
- type: recall_at_10
value: 64.188
- type: recall_at_100
value: 87.02600000000001
- type: recall_at_1000
value: 96.761
- type: recall_at_3
value: 48.986000000000004
- type: recall_at_5
value: 55.177
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: cqadupstack/gis
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.964
- type: map_at_10
value: 22.746
- type: map_at_100
value: 23.704
- type: map_at_1000
value: 23.82
- type: map_at_3
value: 20.5
- type: map_at_5
value: 21.836
- type: mrr_at_1
value: 17.740000000000002
- type: mrr_at_10
value: 24.634
- type: mrr_at_100
value: 25.535999999999998
- type: mrr_at_1000
value: 25.628
- type: mrr_at_3
value: 22.429
- type: mrr_at_5
value: 23.791
- type: ndcg_at_1
value: 17.740000000000002
- type: ndcg_at_10
value: 26.838
- type: ndcg_at_100
value: 31.985000000000003
- type: ndcg_at_1000
value: 35.289
- type: ndcg_at_3
value: 22.384
- type: ndcg_at_5
value: 24.726
- type: precision_at_1
value: 17.740000000000002
- type: precision_at_10
value: 4.35
- type: precision_at_100
value: 0.753
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 9.754999999999999
- type: precision_at_5
value: 7.164
- type: recall_at_1
value: 15.964
- type: recall_at_10
value: 37.705
- type: recall_at_100
value: 61.94499999999999
- type: recall_at_1000
value: 87.646
- type: recall_at_3
value: 25.714
- type: recall_at_5
value: 31.402
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: cqadupstack/mathematica
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.221
- type: map_at_10
value: 14.735000000000001
- type: map_at_100
value: 15.778
- type: map_at_1000
value: 15.9
- type: map_at_3
value: 12.791
- type: map_at_5
value: 13.703999999999999
- type: mrr_at_1
value: 12.438
- type: mrr_at_10
value: 18.353
- type: mrr_at_100
value: 19.285
- type: mrr_at_1000
value: 19.375
- type: mrr_at_3
value: 16.439
- type: mrr_at_5
value: 17.352999999999998
- type: ndcg_at_1
value: 12.438
- type: ndcg_at_10
value: 18.703
- type: ndcg_at_100
value: 24.104999999999997
- type: ndcg_at_1000
value: 27.366
- type: ndcg_at_3
value: 15.055
- type: ndcg_at_5
value: 16.42
- type: precision_at_1
value: 12.438
- type: precision_at_10
value: 3.818
- type: precision_at_100
value: 0.77
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 7.753
- type: precision_at_5
value: 5.622
- type: recall_at_1
value: 9.221
- type: recall_at_10
value: 27.461999999999996
- type: recall_at_100
value: 51.909000000000006
- type: recall_at_1000
value: 75.56
- type: recall_at_3
value: 17.046
- type: recall_at_5
value: 20.766000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: cqadupstack/physics
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.828
- type: map_at_10
value: 33.166000000000004
- type: map_at_100
value: 34.618
- type: map_at_1000
value: 34.744
- type: map_at_3
value: 29.737000000000002
- type: map_at_5
value: 31.541000000000004
- type: mrr_at_1
value: 29.548000000000002
- type: mrr_at_10
value: 38.582
- type: mrr_at_100
value: 39.527
- type: mrr_at_1000
value: 39.577
- type: mrr_at_3
value: 35.884
- type: mrr_at_5
value: 37.413999999999994
- type: ndcg_at_1
value: 29.548000000000002
- type: ndcg_at_10
value: 39.397
- type: ndcg_at_100
value: 45.584
- type: ndcg_at_1000
value: 47.823
- type: ndcg_at_3
value: 33.717000000000006
- type: ndcg_at_5
value: 36.223
- type: precision_at_1
value: 29.548000000000002
- type: precision_at_10
value: 7.767
- type: precision_at_100
value: 1.2959999999999998
- type: precision_at_1000
value: 0.17099999999999999
- type: precision_at_3
value: 16.747
- type: precision_at_5
value: 12.203999999999999
- type: recall_at_1
value: 22.828
- type: recall_at_10
value: 52.583999999999996
- type: recall_at_100
value: 79.06400000000001
- type: recall_at_1000
value: 93.59100000000001
- type: recall_at_3
value: 36.671
- type: recall_at_5
value: 43.22
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: cqadupstack/programmers
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.366
- type: map_at_10
value: 30.214000000000002
- type: map_at_100
value: 31.647
- type: map_at_1000
value: 31.763
- type: map_at_3
value: 27.234
- type: map_at_5
value: 28.801
- type: mrr_at_1
value: 26.256
- type: mrr_at_10
value: 35.299
- type: mrr_at_100
value: 36.284
- type: mrr_at_1000
value: 36.342
- type: mrr_at_3
value: 32.572
- type: mrr_at_5
value: 34.050999999999995
- type: ndcg_at_1
value: 26.256
- type: ndcg_at_10
value: 35.899
- type: ndcg_at_100
value: 41.983
- type: ndcg_at_1000
value: 44.481
- type: ndcg_at_3
value: 30.665
- type: ndcg_at_5
value: 32.879999999999995
- type: precision_at_1
value: 26.256
- type: precision_at_10
value: 6.804
- type: precision_at_100
value: 1.187
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 14.84
- type: precision_at_5
value: 10.708
- type: recall_at_1
value: 21.366
- type: recall_at_10
value: 47.878
- type: recall_at_100
value: 73.245
- type: recall_at_1000
value: 90.623
- type: recall_at_3
value: 33.341
- type: recall_at_5
value: 39.198
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.477166666666665
- type: map_at_10
value: 27.431416666666664
- type: map_at_100
value: 28.656000000000002
- type: map_at_1000
value: 28.787583333333338
- type: map_at_3
value: 24.85175
- type: map_at_5
value: 26.270166666666668
- type: mrr_at_1
value: 24.06841666666667
- type: mrr_at_10
value: 31.620000000000005
- type: mrr_at_100
value: 32.52283333333333
- type: mrr_at_1000
value: 32.59441666666667
- type: mrr_at_3
value: 29.328666666666663
- type: mrr_at_5
value: 30.620416666666667
- type: ndcg_at_1
value: 24.06841666666667
- type: ndcg_at_10
value: 32.404583333333335
- type: ndcg_at_100
value: 37.779500000000006
- type: ndcg_at_1000
value: 40.511583333333334
- type: ndcg_at_3
value: 27.994166666666665
- type: ndcg_at_5
value: 30.021749999999997
- type: precision_at_1
value: 24.06841666666667
- type: precision_at_10
value: 6.03725
- type: precision_at_100
value: 1.0500833333333337
- type: precision_at_1000
value: 0.14875000000000002
- type: precision_at_3
value: 13.419583333333335
- type: precision_at_5
value: 9.700666666666665
- type: recall_at_1
value: 19.477166666666665
- type: recall_at_10
value: 42.99441666666667
- type: recall_at_100
value: 66.787
- type: recall_at_1000
value: 86.18825000000001
- type: recall_at_3
value: 30.46366666666667
- type: recall_at_5
value: 35.83141666666667
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: cqadupstack/stats
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.246
- type: map_at_10
value: 22.127
- type: map_at_100
value: 23.006
- type: map_at_1000
value: 23.125
- type: map_at_3
value: 20.308999999999997
- type: map_at_5
value: 21.139
- type: mrr_at_1
value: 19.631999999999998
- type: mrr_at_10
value: 24.884999999999998
- type: mrr_at_100
value: 25.704
- type: mrr_at_1000
value: 25.793
- type: mrr_at_3
value: 23.083000000000002
- type: mrr_at_5
value: 23.942
- type: ndcg_at_1
value: 19.631999999999998
- type: ndcg_at_10
value: 25.862000000000002
- type: ndcg_at_100
value: 30.436000000000003
- type: ndcg_at_1000
value: 33.638
- type: ndcg_at_3
value: 22.431
- type: ndcg_at_5
value: 23.677
- type: precision_at_1
value: 19.631999999999998
- type: precision_at_10
value: 4.417
- type: precision_at_100
value: 0.7270000000000001
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 10.327
- type: precision_at_5
value: 7.147
- type: recall_at_1
value: 16.246
- type: recall_at_10
value: 34.869
- type: recall_at_100
value: 56.221
- type: recall_at_1000
value: 80.449
- type: recall_at_3
value: 24.83
- type: recall_at_5
value: 28.142
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: cqadupstack/tex
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.798
- type: map_at_10
value: 14.695
- type: map_at_100
value: 15.590000000000002
- type: map_at_1000
value: 15.726999999999999
- type: map_at_3
value: 13.004999999999999
- type: map_at_5
value: 13.861
- type: mrr_at_1
value: 12.939
- type: mrr_at_10
value: 18.218
- type: mrr_at_100
value: 18.998
- type: mrr_at_1000
value: 19.093
- type: mrr_at_3
value: 16.454
- type: mrr_at_5
value: 17.354
- type: ndcg_at_1
value: 12.939
- type: ndcg_at_10
value: 18.278
- type: ndcg_at_100
value: 22.709
- type: ndcg_at_1000
value: 26.064
- type: ndcg_at_3
value: 15.204
- type: ndcg_at_5
value: 16.416
- type: precision_at_1
value: 12.939
- type: precision_at_10
value: 3.768
- type: precision_at_100
value: 0.724
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 7.707999999999999
- type: precision_at_5
value: 5.733
- type: recall_at_1
value: 9.798
- type: recall_at_10
value: 25.562
- type: recall_at_100
value: 45.678999999999995
- type: recall_at_1000
value: 69.963
- type: recall_at_3
value: 16.705000000000002
- type: recall_at_5
value: 19.969
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: cqadupstack/unix
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.1
- type: map_at_10
value: 27.034999999999997
- type: map_at_100
value: 28.396
- type: map_at_1000
value: 28.518
- type: map_at_3
value: 24.363
- type: map_at_5
value: 25.826999999999998
- type: mrr_at_1
value: 23.694000000000003
- type: mrr_at_10
value: 31.724999999999998
- type: mrr_at_100
value: 32.743
- type: mrr_at_1000
value: 32.82
- type: mrr_at_3
value: 29.275000000000002
- type: mrr_at_5
value: 30.684
- type: ndcg_at_1
value: 23.694000000000003
- type: ndcg_at_10
value: 32.366
- type: ndcg_at_100
value: 38.241
- type: ndcg_at_1000
value: 40.973
- type: ndcg_at_3
value: 27.661
- type: ndcg_at_5
value: 29.782999999999998
- type: precision_at_1
value: 23.694000000000003
- type: precision_at_10
value: 5.951
- type: precision_at_100
value: 1.0070000000000001
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 13.34
- type: precision_at_5
value: 9.533999999999999
- type: recall_at_1
value: 19.1
- type: recall_at_10
value: 44.032
- type: recall_at_100
value: 69.186
- type: recall_at_1000
value: 88.562
- type: recall_at_3
value: 30.712
- type: recall_at_5
value: 36.372
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: cqadupstack/webmasters
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.671
- type: map_at_10
value: 28.583
- type: map_at_100
value: 30.098999999999997
- type: map_at_1000
value: 30.364
- type: map_at_3
value: 25.825
- type: map_at_5
value: 27.500999999999998
- type: mrr_at_1
value: 25.889
- type: mrr_at_10
value: 33.617999999999995
- type: mrr_at_100
value: 34.687
- type: mrr_at_1000
value: 34.774
- type: mrr_at_3
value: 31.191999999999997
- type: mrr_at_5
value: 32.675
- type: ndcg_at_1
value: 25.889
- type: ndcg_at_10
value: 34.056999999999995
- type: ndcg_at_100
value: 40.142
- type: ndcg_at_1000
value: 43.614000000000004
- type: ndcg_at_3
value: 29.688
- type: ndcg_at_5
value: 32.057
- type: precision_at_1
value: 25.889
- type: precision_at_10
value: 6.7
- type: precision_at_100
value: 1.417
- type: precision_at_1000
value: 0.241
- type: precision_at_3
value: 14.360999999999999
- type: precision_at_5
value: 10.711
- type: recall_at_1
value: 20.671
- type: recall_at_10
value: 43.97
- type: recall_at_100
value: 71.83699999999999
- type: recall_at_1000
value: 94.42399999999999
- type: recall_at_3
value: 31.0
- type: recall_at_5
value: 37.489
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: cqadupstack/wordpress
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.66
- type: map_at_10
value: 18.798000000000002
- type: map_at_100
value: 19.75
- type: map_at_1000
value: 19.851
- type: map_at_3
value: 16.874
- type: map_at_5
value: 18.136
- type: mrr_at_1
value: 14.972
- type: mrr_at_10
value: 20.565
- type: mrr_at_100
value: 21.488
- type: mrr_at_1000
value: 21.567
- type: mrr_at_3
value: 18.669
- type: mrr_at_5
value: 19.861
- type: ndcg_at_1
value: 14.972
- type: ndcg_at_10
value: 22.128999999999998
- type: ndcg_at_100
value: 27.028000000000002
- type: ndcg_at_1000
value: 29.887000000000004
- type: ndcg_at_3
value: 18.365000000000002
- type: ndcg_at_5
value: 20.48
- type: precision_at_1
value: 14.972
- type: precision_at_10
value: 3.549
- type: precision_at_100
value: 0.632
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 7.887
- type: precision_at_5
value: 5.840999999999999
- type: recall_at_1
value: 13.66
- type: recall_at_10
value: 30.801000000000002
- type: recall_at_100
value: 53.626
- type: recall_at_1000
value: 75.634
- type: recall_at_3
value: 20.807000000000002
- type: recall_at_5
value: 25.86
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.622
- type: map_at_10
value: 16.042
- type: map_at_100
value: 18.023
- type: map_at_1000
value: 18.228
- type: map_at_3
value: 12.995999999999999
- type: map_at_5
value: 14.424000000000001
- type: mrr_at_1
value: 18.892999999999997
- type: mrr_at_10
value: 30.575000000000003
- type: mrr_at_100
value: 31.814999999999998
- type: mrr_at_1000
value: 31.856
- type: mrr_at_3
value: 26.851000000000003
- type: mrr_at_5
value: 29.021
- type: ndcg_at_1
value: 18.892999999999997
- type: ndcg_at_10
value: 23.575
- type: ndcg_at_100
value: 31.713
- type: ndcg_at_1000
value: 35.465
- type: ndcg_at_3
value: 18.167
- type: ndcg_at_5
value: 20.071
- type: precision_at_1
value: 18.892999999999997
- type: precision_at_10
value: 7.883
- type: precision_at_100
value: 1.652
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 13.898
- type: precision_at_5
value: 11.14
- type: recall_at_1
value: 8.622
- type: recall_at_10
value: 30.044999999999998
- type: recall_at_100
value: 58.072
- type: recall_at_1000
value: 79.226
- type: recall_at_3
value: 17.21
- type: recall_at_5
value: 22.249
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.845
- type: map_at_10
value: 12.352
- type: map_at_100
value: 17.423
- type: map_at_1000
value: 18.529
- type: map_at_3
value: 8.505
- type: map_at_5
value: 10.213
- type: mrr_at_1
value: 41.75
- type: mrr_at_10
value: 54.6
- type: mrr_at_100
value: 55.345
- type: mrr_at_1000
value: 55.374
- type: mrr_at_3
value: 52.37500000000001
- type: mrr_at_5
value: 53.87499999999999
- type: ndcg_at_1
value: 31.25
- type: ndcg_at_10
value: 26.779999999999998
- type: ndcg_at_100
value: 31.929000000000002
- type: ndcg_at_1000
value: 39.290000000000006
- type: ndcg_at_3
value: 28.746
- type: ndcg_at_5
value: 27.334999999999997
- type: precision_at_1
value: 41.75
- type: precision_at_10
value: 22.55
- type: precision_at_100
value: 7.242
- type: precision_at_1000
value: 1.439
- type: precision_at_3
value: 33.833
- type: precision_at_5
value: 28.65
- type: recall_at_1
value: 4.845
- type: recall_at_10
value: 18.664
- type: recall_at_100
value: 41.085
- type: recall_at_1000
value: 65.242
- type: recall_at_3
value: 10.572
- type: recall_at_5
value: 13.961000000000002
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.08
- type: f1
value: 42.843345856303756
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.743
- type: map_at_10
value: 46.521
- type: map_at_100
value: 47.235
- type: map_at_1000
value: 47.272
- type: map_at_3
value: 43.252
- type: map_at_5
value: 45.267
- type: mrr_at_1
value: 36.484
- type: mrr_at_10
value: 49.406
- type: mrr_at_100
value: 50.03300000000001
- type: mrr_at_1000
value: 50.058
- type: mrr_at_3
value: 46.195
- type: mrr_at_5
value: 48.193999999999996
- type: ndcg_at_1
value: 36.484
- type: ndcg_at_10
value: 53.42
- type: ndcg_at_100
value: 56.69499999999999
- type: ndcg_at_1000
value: 57.623999999999995
- type: ndcg_at_3
value: 47.010999999999996
- type: ndcg_at_5
value: 50.524
- type: precision_at_1
value: 36.484
- type: precision_at_10
value: 7.925
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.107
- type: precision_at_3
value: 19.967
- type: precision_at_5
value: 13.87
- type: recall_at_1
value: 33.743
- type: recall_at_10
value: 71.988
- type: recall_at_100
value: 86.60799999999999
- type: recall_at_1000
value: 93.54
- type: recall_at_3
value: 54.855
- type: recall_at_5
value: 63.341
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.003
- type: map_at_10
value: 21.766
- type: map_at_100
value: 23.618
- type: map_at_1000
value: 23.832
- type: map_at_3
value: 18.282999999999998
- type: map_at_5
value: 20.267
- type: mrr_at_1
value: 26.851999999999997
- type: mrr_at_10
value: 34.658
- type: mrr_at_100
value: 35.729
- type: mrr_at_1000
value: 35.785
- type: mrr_at_3
value: 31.686999999999998
- type: mrr_at_5
value: 33.315
- type: ndcg_at_1
value: 26.851999999999997
- type: ndcg_at_10
value: 28.563
- type: ndcg_at_100
value: 36.374
- type: ndcg_at_1000
value: 40.306999999999995
- type: ndcg_at_3
value: 24.224
- type: ndcg_at_5
value: 25.939
- type: precision_at_1
value: 26.851999999999997
- type: precision_at_10
value: 8.193999999999999
- type: precision_at_100
value: 1.616
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 16.255
- type: precision_at_5
value: 12.469
- type: recall_at_1
value: 13.003
- type: recall_at_10
value: 35.689
- type: recall_at_100
value: 65.762
- type: recall_at_1000
value: 89.546
- type: recall_at_3
value: 21.820999999999998
- type: recall_at_5
value: 28.097
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.541
- type: map_at_10
value: 43.088
- type: map_at_100
value: 44.252
- type: map_at_1000
value: 44.345
- type: map_at_3
value: 39.79
- type: map_at_5
value: 41.687000000000005
- type: mrr_at_1
value: 59.082
- type: mrr_at_10
value: 67.27300000000001
- type: mrr_at_100
value: 67.708
- type: mrr_at_1000
value: 67.731
- type: mrr_at_3
value: 65.526
- type: mrr_at_5
value: 66.589
- type: ndcg_at_1
value: 59.082
- type: ndcg_at_10
value: 52.372
- type: ndcg_at_100
value: 56.725
- type: ndcg_at_1000
value: 58.665
- type: ndcg_at_3
value: 47.129
- type: ndcg_at_5
value: 49.808
- type: precision_at_1
value: 59.082
- type: precision_at_10
value: 11.275
- type: precision_at_100
value: 1.469
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 29.773
- type: precision_at_5
value: 19.980999999999998
- type: recall_at_1
value: 29.541
- type: recall_at_10
value: 56.374
- type: recall_at_100
value: 73.42999999999999
- type: recall_at_1000
value: 86.28
- type: recall_at_3
value: 44.659
- type: recall_at_5
value: 49.952999999999996
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 75.1904
- type: ap
value: 69.80555086826531
- type: f1
value: 74.93725389065787
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 7.085
- type: map_at_10
value: 13.344000000000001
- type: map_at_100
value: 14.501
- type: map_at_1000
value: 14.605
- type: map_at_3
value: 10.758
- type: map_at_5
value: 12.162
- type: mrr_at_1
value: 7.278
- type: mrr_at_10
value: 13.607
- type: mrr_at_100
value: 14.761
- type: mrr_at_1000
value: 14.860000000000001
- type: mrr_at_3
value: 11.003
- type: mrr_at_5
value: 12.421
- type: ndcg_at_1
value: 7.278
- type: ndcg_at_10
value: 17.473
- type: ndcg_at_100
value: 23.721
- type: ndcg_at_1000
value: 26.69
- type: ndcg_at_3
value: 12.078
- type: ndcg_at_5
value: 14.62
- type: precision_at_1
value: 7.278
- type: precision_at_10
value: 3.175
- type: precision_at_100
value: 0.639
- type: precision_at_1000
value: 0.09
- type: precision_at_3
value: 5.382
- type: precision_at_5
value: 4.519
- type: recall_at_1
value: 7.085
- type: recall_at_10
value: 30.549
- type: recall_at_100
value: 60.919999999999995
- type: recall_at_1000
value: 84.372
- type: recall_at_3
value: 15.675
- type: recall_at_5
value: 21.818
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.46876424988601
- type: f1
value: 94.23159241922738
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 81.0875512995896
- type: f1
value: 61.674961674414
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.01344989912575
- type: f1
value: 71.7942527839921
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.15601882985877
- type: f1
value: 78.82502954601195
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.468806971345227
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.874332804382256
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.099340785595842
- type: mrr
value: 31.077367694660257
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.9050000000000002
- type: map_at_10
value: 8.931000000000001
- type: map_at_100
value: 11.246
- type: map_at_1000
value: 12.579
- type: map_at_3
value: 6.544
- type: map_at_5
value: 7.854
- type: mrr_at_1
value: 33.745999999999995
- type: mrr_at_10
value: 44.734
- type: mrr_at_100
value: 45.486
- type: mrr_at_1000
value: 45.534
- type: mrr_at_3
value: 42.157
- type: mrr_at_5
value: 43.813
- type: ndcg_at_1
value: 31.734
- type: ndcg_at_10
value: 26.284999999999997
- type: ndcg_at_100
value: 25.211
- type: ndcg_at_1000
value: 34.974
- type: ndcg_at_3
value: 29.918
- type: ndcg_at_5
value: 29.066
- type: precision_at_1
value: 33.745999999999995
- type: precision_at_10
value: 19.628
- type: precision_at_100
value: 6.476999999999999
- type: precision_at_1000
value: 1.976
- type: precision_at_3
value: 28.793000000000003
- type: precision_at_5
value: 25.759
- type: recall_at_1
value: 3.9050000000000002
- type: recall_at_10
value: 13.375
- type: recall_at_100
value: 28.453
- type: recall_at_1000
value: 61.67399999999999
- type: recall_at_3
value: 7.774
- type: recall_at_5
value: 10.754
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.33
- type: map_at_10
value: 30.44
- type: map_at_100
value: 31.848
- type: map_at_1000
value: 31.906000000000002
- type: map_at_3
value: 26.143
- type: map_at_5
value: 28.583
- type: mrr_at_1
value: 21.031
- type: mrr_at_10
value: 33.028
- type: mrr_at_100
value: 34.166000000000004
- type: mrr_at_1000
value: 34.208
- type: mrr_at_3
value: 29.089
- type: mrr_at_5
value: 31.362000000000002
- type: ndcg_at_1
value: 21.031
- type: ndcg_at_10
value: 37.65
- type: ndcg_at_100
value: 43.945
- type: ndcg_at_1000
value: 45.338
- type: ndcg_at_3
value: 29.256999999999998
- type: ndcg_at_5
value: 33.453
- type: precision_at_1
value: 21.031
- type: precision_at_10
value: 6.8309999999999995
- type: precision_at_100
value: 1.035
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 13.818
- type: precision_at_5
value: 10.649000000000001
- type: recall_at_1
value: 18.33
- type: recall_at_10
value: 57.330999999999996
- type: recall_at_100
value: 85.284
- type: recall_at_1000
value: 95.676
- type: recall_at_3
value: 35.356
- type: recall_at_5
value: 45.073
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.373
- type: map_at_10
value: 80.233
- type: map_at_100
value: 80.973
- type: map_at_1000
value: 80.99499999999999
- type: map_at_3
value: 77.127
- type: map_at_5
value: 79.056
- type: mrr_at_1
value: 76.55
- type: mrr_at_10
value: 83.813
- type: mrr_at_100
value: 83.96900000000001
- type: mrr_at_1000
value: 83.97200000000001
- type: mrr_at_3
value: 82.547
- type: mrr_at_5
value: 83.38600000000001
- type: ndcg_at_1
value: 76.53999999999999
- type: ndcg_at_10
value: 84.638
- type: ndcg_at_100
value: 86.28099999999999
- type: ndcg_at_1000
value: 86.459
- type: ndcg_at_3
value: 81.19
- type: ndcg_at_5
value: 83.057
- type: precision_at_1
value: 76.53999999999999
- type: precision_at_10
value: 12.928999999999998
- type: precision_at_100
value: 1.514
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.503
- type: precision_at_5
value: 23.512
- type: recall_at_1
value: 66.373
- type: recall_at_10
value: 93.273
- type: recall_at_100
value: 99.031
- type: recall_at_1000
value: 99.91799999999999
- type: recall_at_3
value: 83.55799999999999
- type: recall_at_5
value: 88.644
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 43.67174666339103
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.66838659211271
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.318
- type: map_at_10
value: 5.938000000000001
- type: map_at_100
value: 7.582
- type: map_at_1000
value: 7.936
- type: map_at_3
value: 4.208
- type: map_at_5
value: 5.098
- type: mrr_at_1
value: 11.4
- type: mrr_at_10
value: 17.655
- type: mrr_at_100
value: 19.088
- type: mrr_at_1000
value: 19.203
- type: mrr_at_3
value: 15.25
- type: mrr_at_5
value: 16.535
- type: ndcg_at_1
value: 11.4
- type: ndcg_at_10
value: 10.388
- type: ndcg_at_100
value: 18.165
- type: ndcg_at_1000
value: 24.842
- type: ndcg_at_3
value: 9.414
- type: ndcg_at_5
value: 8.453
- type: precision_at_1
value: 11.4
- type: precision_at_10
value: 5.54
- type: precision_at_100
value: 1.71
- type: precision_at_1000
value: 0.33
- type: precision_at_3
value: 8.866999999999999
- type: precision_at_5
value: 7.580000000000001
- type: recall_at_1
value: 2.318
- type: recall_at_10
value: 11.267000000000001
- type: recall_at_100
value: 34.743
- type: recall_at_1000
value: 67.07300000000001
- type: recall_at_3
value: 5.408
- type: recall_at_5
value: 7.713
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 72.15850185456762
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 61.59518395985063
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 79.71131323749228
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 72.10974664733891
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 82.17899407125657
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 79.41138579273438
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 85.44343473477939
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 63.90264271389905
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 77.44151296326804
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 76.27597486396654
- type: mrr
value: 93.28127119793788
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 49.594
- type: map_at_10
value: 60.951
- type: map_at_100
value: 61.68599999999999
- type: map_at_1000
value: 61.712
- type: map_at_3
value: 57.946
- type: map_at_5
value: 59.89
- type: mrr_at_1
value: 52.666999999999994
- type: mrr_at_10
value: 62.724000000000004
- type: mrr_at_100
value: 63.269
- type: mrr_at_1000
value: 63.291
- type: mrr_at_3
value: 60.167
- type: mrr_at_5
value: 61.95
- type: ndcg_at_1
value: 52.666999999999994
- type: ndcg_at_10
value: 66.35600000000001
- type: ndcg_at_100
value: 69.463
- type: ndcg_at_1000
value: 70.111
- type: ndcg_at_3
value: 60.901
- type: ndcg_at_5
value: 64.054
- type: precision_at_1
value: 52.666999999999994
- type: precision_at_10
value: 9.0
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 24.221999999999998
- type: precision_at_5
value: 16.333000000000002
- type: recall_at_1
value: 49.594
- type: recall_at_10
value: 81.256
- type: recall_at_100
value: 94.989
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 66.706
- type: recall_at_5
value: 74.411
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.65049504950495
- type: cos_sim_ap
value: 88.1421623503371
- type: cos_sim_f1
value: 81.44072036018008
- type: cos_sim_precision
value: 81.48148148148148
- type: cos_sim_recall
value: 81.39999999999999
- type: dot_accuracy
value: 99.37623762376238
- type: dot_ap
value: 69.87152032240303
- type: dot_f1
value: 65.64885496183206
- type: dot_precision
value: 72.18225419664267
- type: dot_recall
value: 60.199999999999996
- type: euclidean_accuracy
value: 99.63069306930693
- type: euclidean_ap
value: 86.13858297902517
- type: euclidean_f1
value: 79.87679671457904
- type: euclidean_precision
value: 82.0675105485232
- type: euclidean_recall
value: 77.8
- type: manhattan_accuracy
value: 99.63168316831683
- type: manhattan_ap
value: 86.31976532265482
- type: manhattan_f1
value: 80.10204081632654
- type: manhattan_precision
value: 81.77083333333334
- type: manhattan_recall
value: 78.5
- type: max_accuracy
value: 99.65049504950495
- type: max_ap
value: 88.1421623503371
- type: max_f1
value: 81.44072036018008
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.19604139959692
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.3569584557381
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 48.82174503355024
- type: mrr
value: 49.610933388506915
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.805895993742798
- type: cos_sim_spearman
value: 31.445431226826738
- type: dot_pearson
value: 24.441585432516867
- type: dot_spearman
value: 25.468117334810188
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.2
- type: map_at_10
value: 1.431
- type: map_at_100
value: 7.138999999999999
- type: map_at_1000
value: 17.933
- type: map_at_3
value: 0.551
- type: map_at_5
value: 0.7979999999999999
- type: mrr_at_1
value: 76.0
- type: mrr_at_10
value: 85.167
- type: mrr_at_100
value: 85.21300000000001
- type: mrr_at_1000
value: 85.21300000000001
- type: mrr_at_3
value: 84.667
- type: mrr_at_5
value: 85.167
- type: ndcg_at_1
value: 72.0
- type: ndcg_at_10
value: 63.343
- type: ndcg_at_100
value: 45.739999999999995
- type: ndcg_at_1000
value: 41.875
- type: ndcg_at_3
value: 68.162
- type: ndcg_at_5
value: 65.666
- type: precision_at_1
value: 76.0
- type: precision_at_10
value: 66.4
- type: precision_at_100
value: 46.800000000000004
- type: precision_at_1000
value: 18.996
- type: precision_at_3
value: 72.667
- type: precision_at_5
value: 68.4
- type: recall_at_1
value: 0.2
- type: recall_at_10
value: 1.712
- type: recall_at_100
value: 10.896
- type: recall_at_1000
value: 40.115
- type: recall_at_3
value: 0.594
- type: recall_at_5
value: 0.889
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.0619999999999998
- type: map_at_10
value: 5.611
- type: map_at_100
value: 8.841000000000001
- type: map_at_1000
value: 10.154
- type: map_at_3
value: 2.7720000000000002
- type: map_at_5
value: 4.181
- type: mrr_at_1
value: 14.285999999999998
- type: mrr_at_10
value: 26.249
- type: mrr_at_100
value: 28.046
- type: mrr_at_1000
value: 28.083000000000002
- type: mrr_at_3
value: 21.769
- type: mrr_at_5
value: 24.524
- type: ndcg_at_1
value: 11.224
- type: ndcg_at_10
value: 12.817
- type: ndcg_at_100
value: 23.183999999999997
- type: ndcg_at_1000
value: 35.099000000000004
- type: ndcg_at_3
value: 11.215
- type: ndcg_at_5
value: 12.016
- type: precision_at_1
value: 14.285999999999998
- type: precision_at_10
value: 12.653
- type: precision_at_100
value: 5.306
- type: precision_at_1000
value: 1.294
- type: precision_at_3
value: 13.605
- type: precision_at_5
value: 13.877999999999998
- type: recall_at_1
value: 1.0619999999999998
- type: recall_at_10
value: 10.377
- type: recall_at_100
value: 34.77
- type: recall_at_1000
value: 70.875
- type: recall_at_3
value: 3.688
- type: recall_at_5
value: 6.2509999999999994
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.8488
- type: ap
value: 15.590122317097372
- type: f1
value: 55.86108396102662
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.61460101867573
- type: f1
value: 57.8678726826158
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 32.01459876897588
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.1032365738809
- type: cos_sim_ap
value: 66.60137415520323
- type: cos_sim_f1
value: 62.12845010615712
- type: cos_sim_precision
value: 62.493326214628944
- type: cos_sim_recall
value: 61.76781002638523
- type: dot_accuracy
value: 81.85015199380103
- type: dot_ap
value: 58.854644211365084
- type: dot_f1
value: 56.15180082185158
- type: dot_precision
value: 51.806422836752894
- type: dot_recall
value: 61.2928759894459
- type: euclidean_accuracy
value: 83.6681170650295
- type: euclidean_ap
value: 64.93555585305603
- type: euclidean_f1
value: 61.02775195857125
- type: euclidean_precision
value: 61.42742582197273
- type: euclidean_recall
value: 60.633245382585756
- type: manhattan_accuracy
value: 83.73368301841808
- type: manhattan_ap
value: 65.45422483039611
- type: manhattan_f1
value: 61.58552806597499
- type: manhattan_precision
value: 62.09763948497854
- type: manhattan_recall
value: 61.08179419525066
- type: max_accuracy
value: 84.1032365738809
- type: max_ap
value: 66.60137415520323
- type: max_f1
value: 62.12845010615712
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 86.36628245430201
- type: cos_sim_ap
value: 79.29963896460292
- type: cos_sim_f1
value: 72.63895990066467
- type: cos_sim_precision
value: 69.09128803668196
- type: cos_sim_recall
value: 76.57068062827224
- type: dot_accuracy
value: 84.65091007878294
- type: dot_ap
value: 75.04883449222972
- type: dot_f1
value: 69.18569117382708
- type: dot_precision
value: 64.89512376070682
- type: dot_recall
value: 74.08376963350786
- type: euclidean_accuracy
value: 85.88116583226608
- type: euclidean_ap
value: 78.42687640324908
- type: euclidean_f1
value: 71.74350111107192
- type: euclidean_precision
value: 66.19800820152314
- type: euclidean_recall
value: 78.3030489682784
- type: manhattan_accuracy
value: 86.27508052935926
- type: manhattan_ap
value: 79.29581298930101
- type: manhattan_f1
value: 72.51838235294117
- type: manhattan_precision
value: 67.03921568627452
- type: manhattan_recall
value: 78.97289805974745
- type: max_accuracy
value: 86.36628245430201
- type: max_ap
value: 79.29963896460292
- type: max_f1
value: 72.63895990066467
---
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec import LLM2Vec
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading the base Llama-3 model, along with custom code that enables bidirectional attention in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp",
)
model = model.merge_and_unload() # This can take several minutes on cpu
# Loading unsupervised SimCSE model. This loads the trained LoRA weights on top of MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + SimCSE (LoRA).
model = PeftModel.from_pretrained(
model, "McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-unsup-simcse"
)
# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)
# Encoding queries using instructions
instruction = (
"Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
[instruction, "how much protein should a female eat"],
[instruction, "summit define"],
]
q_reps = l2v.encode(queries)
# Encoding documents. Instructions are not required for documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)
# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))
print(cos_sim)
"""
tensor([[0.6522, 0.1891],
[0.1162, 0.3457]])
"""
```
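To turn the similarity matrix into a simple retrieval result, each query can be matched to its highest-scoring document. A minimal follow-up sketch, reusing `cos_sim` from the snippet above (the loop variables are illustrative):
```python
# Rank documents per query by cosine similarity (continues from the code above)
best_idx = cos_sim.argmax(dim=1)
for q_i, d_i in enumerate(best_idx.tolist()):
    print(f"Query {q_i} -> document {d_i} (score: {cos_sim[q_i, d_i].item():.4f})")
```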
## Questions
If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`). | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
ntc-ai/SDXL-LoRA-slider.magical-energy-swirling-around | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,704,279,767,000 | 2024-01-03T11:02:50 | 5 | 1 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/evaluate/magical energy swirling around.../magical energy swirling
around_17_3.0.png
widget:
- text: magical energy swirling around
output:
url: images/magical energy swirling around_17_3.0.png
- text: magical energy swirling around
output:
url: images/magical energy swirling around_19_3.0.png
- text: magical energy swirling around
output:
url: images/magical energy swirling around_20_3.0.png
- text: magical energy swirling around
output:
url: images/magical energy swirling around_21_3.0.png
- text: magical energy swirling around
output:
url: images/magical energy swirling around_22_3.0.png
inference: false
instance_prompt: magical energy swirling around
---
# ntcai.xyz slider - magical energy swirling around (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/magical energy swirling around_17_-3.0.png" width=256 height=256 /> | <img src="images/magical energy swirling around_17_0.0.png" width=256 height=256 /> | <img src="images/magical energy swirling around_17_3.0.png" width=256 height=256 /> |
| <img src="images/magical energy swirling around_19_-3.0.png" width=256 height=256 /> | <img src="images/magical energy swirling around_19_0.0.png" width=256 height=256 /> | <img src="images/magical energy swirling around_19_3.0.png" width=256 height=256 /> |
| <img src="images/magical energy swirling around_20_-3.0.png" width=256 height=256 /> | <img src="images/magical energy swirling around_20_0.0.png" width=256 height=256 /> | <img src="images/magical energy swirling around_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
magical energy swirling around
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.magical-energy-swirling-around', weight_name='magical energy swirling around.safetensors', adapter_name="magical energy swirling around")
# Activate the LoRA
pipe.set_adapters(["magical energy swirling around"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, magical energy swirling around"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
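The comparison table above spans strengths from -3 to 3; a negative adapter weight reverses the slider's effect rather than enhancing it. A minimal follow-up sketch, continuing from the pipeline above (the strength value is illustrative):
```python
# Continuing from the pipeline above: negative weights suppress the concept,
# matching the -3 column of the comparison table
pipe.set_adapters(["magical energy swirling around"], adapter_weights=[-3.0])
image_neg = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height,
                 guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image_neg.save('result_negative.png')
```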
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 830+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-8bits | RichardErkhov | null | [
"safetensors",
"gemma",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,729,276,401,000 | 2024-10-18T18:35:47 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vi-gemma-2b-RAG - bnb 8bits
- Model creator: https://huggingface.co/ricepaper/
- Original model: https://huggingface.co/ricepaper/vi-gemma-2b-RAG/
Original model description:
---
base_model: unsloth/gemma-1.1-2b-it-bnb-4bit
language:
- en
- vi
license: apache-2.0
tags:
- text-generation-inference
- retrieval-augmented-generation
- transformers
- unsloth
- gemma
- trl
- sft
---
## Model Card: vi-gemma-2b-RAG
### (English below)
### Tiếng Việt (Vietnamese)
**Mô tả mô hình:**
vi-gemma-2b-RAG là một mô hình ngôn ngữ lớn được tinh chỉnh từ mô hình cơ sở [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) sử dụng kỹ thuật LoRA. Mô hình được huấn luyện trên tập dữ liệu tiếng Việt với mục tiêu cải thiện khả năng xử lý ngôn ngữ tiếng Việt và nâng cao hiệu suất cho các tác vụ truy xuất thông tin mở (Retrieval Augmented Generation - RAG).
**Mục đích sử dụng:**
Mô hình vi-gemma-2b-RAG phù hợp cho các tác vụ sau:
* Trả lời câu hỏi dựa trên ngữ cảnh tiếng Việt.
* Tóm tắt văn bản tiếng Việt.
* Dịch máy tiếng Việt.
* Và các tác vụ tạo văn bản tiếng Việt khác.
**Giới hạn:**
Mặc dù đã được tinh chỉnh cho tiếng Việt, vi-gemma-2b-RAG vẫn có thể gặp phải một số hạn chế:
* Có thể tạo ra thông tin sai lệch hoặc không chính xác.
* Có thể thể hiện thành kiến hoặc quan điểm không phù hợp.
* Hiệu suất có thể bị ảnh hưởng bởi chất lượng của dữ liệu đầu vào.
**Cách sử dụng:**
Dưới đây chúng tôi chia sẻ một số đoạn mã về cách bắt đầu nhanh chóng để sử dụng mô hình. Trước tiên, hãy đảm bảo đã cài đặt `pip install -U transformers`, sau đó sao chép đoạn mã từ phần có liên quan đến usecase của bạn.
Chúng tôi khuyến nghị sử dụng `torch.bfloat16` làm mặc định.
```python
# pip install transformers torch accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Khởi tạo tokenizer và model từ checkpoint đã lưu
tokenizer = AutoTokenizer.from_pretrained("himmeow/vi-gemma-2b-RAG")
model = AutoModelForCausalLM.from_pretrained(
"himmeow/vi-gemma-2b-RAG",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Sử dụng GPU nếu có
if torch.cuda.is_available():
model.to("cuda")
# Định dạng prompt cho model
prompt = """
### Instruction and Input:
Dựa vào ngữ cảnh/tài liệu sau:
{}
Hãy trả lời câu hỏi: {}
### Response:
{}
"""
# Chuẩn bị dữ liệu đầu vào
input_data = """
Short Tandem Repeats (STRs) là các trình tự DNA lặp lại ngắn (2- 6 nucleotides) xuất hiện phổ biến trong hệ gen của con người. Các trình tự này có tính đa hình rất cao trong tự nhiên, điều này khiến các STRs trở thành những markers di truyền rất quan trọng trong nghiên cứu bản đồ gen người và chuẩn đoán bệnh lý di truyền cũng như xác định danh tính trong lĩnh vực pháp y.
Các STRs trở nên phổ biến tại các phòng xét nghiệm pháp y bởi vì việc nhân bản và phân tích STRs chỉ cần lượng DNA rất thấp ngay cả khi ở dạng bị phân hủy việc đinh danh vẫn có thể được thực hiện thành công. Hơn nữa việc phát hiện và đánh giá sự nhiễm DNA mẫu trong các mẫu vật có thể được giải quyết nhanh với kết quả phân tích STRs. Ở Hoa Kỳ hiện nay, từ bộ 13 markers nay đã tăng lên 20 markers chính đang được sử dụng để tạo ra một cơ sở dữ liệu DNA trên toàn đất nước được gọi là The FBI Combined DNA Index System (Expaned CODIS).
CODIS và các cơ sử dữ liệu DNA tương tự đang được sử dụng thực sự thành công trong việc liên kết các hồ sơ DNA từ các tội phạm và các bằng chứng hiện trường vụ án. Kết quả định danh STRs cũng được sử dụng để hỗ trợ hàng trăm nghìn trường hợp xét nghiệm huyết thống cha con mỗi năm'
"""
query = "Hãy cho tôi biết một số tính chất của STRs được dùng để làm gì?"
# Định dạng input text
input_text = prompt.format(input_data, query," ")
# Mã hóa input text thành input ids
input_ids = tokenizer(input_text, return_tensors="pt")
# Sử dụng GPU cho input ids nếu có
if torch.cuda.is_available():
input_ids = input_ids.to("cuda")
# Tạo văn bản bằng model
outputs = model.generate(
**input_ids,
max_new_tokens=500,
no_repeat_ngram_size=5, # Ngăn chặn lặp lại các cụm từ 5 gram
# do_sample=True, # Kích hoạt chế độ tạo văn bản dựa trên lấy mẫu. Trong chế độ này, model sẽ chọn ngẫu nhiên token tiếp theo dựa trên xác suất được tính từ phân phối xác suất của các token.
# temperature=0.7, # Giảm temperature để kiểm soát tính ngẫu nhiên
# early_stopping=True, # Dừng tạo văn bản khi tìm thấy kết thúc phù hợp
)
# Giải mã và in kết quả
print(tokenizer.decode(outputs[0]))
'''
<bos>
### Instruction and Input:
Dựa vào ngữ cảnh/tài liệu sau:
Short Tandem Repeats (STRs) là các trình tự DNA lặp lại ngắn (2- 6 nucleotides) xuất hiện phổ biến trong hệ gen của con người. Các trình tự này có tính đa hình rất cao trong tự nhiên, điều này khiến các STRs trở thành những markers di truyền rất quan trọng trong nghiên cứu bản đồ gen người và chuẩn đoán bệnh lý di truyền cũng như xác định danh tính trong lĩnh vực pháp y.
Các STRs trở nên phổ biến tại các phòng xét nghiệm pháp y bởi vì việc nhân bản và phân tích STRs chỉ cần lượng DNA rất thấp ngay cả khi ở dạng bị phân hủy việc đinh danh vẫn có thể được thực hiện thành công. Hơn nữa việc phát hiện và đánh giá sự nhiễm DNA mẫu trong các mẫu vật có thể được giải quyết nhanh với kết quả phân tích STRs. Ở Hoa Kỳ hiện nay, từ bộ 13 markers nay đã tăng lên 20 markers chính đang được sử dụng để tạo ra một cơ sở dữ liệu DNA trên toàn đất nước được gọi là The FBI Combined DNA Index System (Expaned CODIS).
CODIS và các cơ sử dữ liệu DNA tương tự đang được sử dụng thực sự thành công trong việc liên kết các hồ sơ DNA từ các tội phạm và các bằng chứng hiện trường vụ án. Kết quả định danh STRs cũng được sử dụng để hỗ trợ hàng trăm nghìn trường hợp xét nghiệm huyết thống cha con mỗi năm'
Hãy trả lời câu hỏi: Hãy cho tôi biết một số tính chất của STRs được dùng để làm gì?
### Response:
STRs được sử dụng để xác định danh tính, chuẩn đoán bệnh lý và xác định bệnh lý di truyền.
<eos>
'''
```
**Huấn luyện:**
* **Mô hình cơ sở:** google/gemma-1.1-2b-it
* **Tập dữ liệu:** lamhieu/mabrycodes_dialogue_vi
* **Phương pháp tinh chỉnh:** LoRA, PEFT với Unsloth
## Model Card: vi-gemma-2b-RAG
### English
**Model Description:**
vi-gemma-2b-RAG is a large language model fine-tuned from the base model [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) using LoRA. The model is trained on a Vietnamese dataset to improve its Vietnamese language processing capabilities and enhance its performance for Retrieval Augmented Generation (RAG) tasks.
**Intended Use:**
The vi-gemma-2b-RAG model is suitable for tasks such as:
* Vietnamese question answering.
* Vietnamese text summarization.
* Vietnamese machine translation.
* And other Vietnamese text generation tasks.
**Limitations:**
While fine-tuned for Vietnamese, vi-gemma-2b-RAG may still have some limitations:
* It may generate incorrect or misleading information.
* It may exhibit biases or inappropriate opinions.
* Its performance may be affected by the quality of the input data.
**How to Use:**
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
We recommend `torch.bfloat16` as the default dtype.
```python
# pip install transformers torch accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Initialize the tokenizer and model from the saved checkpoint
tokenizer = AutoTokenizer.from_pretrained("himmeow/vi-gemma-2b-RAG")
model = AutoModelForCausalLM.from_pretrained(
"himmeow/vi-gemma-2b-RAG",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Use GPU if available
if torch.cuda.is_available():
model.to("cuda")
# Define the prompt format for the model
prompt = """
### Instruction and Input:
Based on the following context/document:
{}
Please answer the question: {}
### Response:
{}
"""
# Prepare the input data
input_data = """
Short Tandem Repeats (STRs) are short (2-6 nucleotides) repeating DNA sequences that are widespread in the human genome. These sequences are highly polymorphic in nature, which makes STRs very important genetic markers in human gene mapping and diagnosis of hereditary diseases as well as identification in the field of forensics.
STRs have become popular in forensic laboratories because the replication and analysis of STRs requires very small amounts of DNA, even in decomposed form, identification can still be performed successfully. Furthermore, the detection and assessment of sample DNA contamination in specimens can be quickly resolved with STR analysis results. In the United States today, the set of 13 markers has now been increased to 20 main markers being used to create a nationwide DNA database called The FBI Combined DNA Index System (Expanded CODIS).
CODIS and similar DNA databases are being used very successfully in linking DNA records from criminals and crime scene evidence. STR identification results are also used to support hundreds of thousands of paternity test cases each year.'
"""
query = "Tell me what are some properties of STRs used for?"
# Format the input text
input_text = prompt.format(input_data, query," ")
# Encode the input text into input ids
input_ids = tokenizer(input_text, return_tensors="pt")
# Use GPU for input ids if available
if torch.cuda.is_available():
input_ids = input_ids.to("cuda")
# Generate text using the model
outputs = model.generate(
**input_ids,
max_new_tokens=500, # Limit the number of tokens generated
no_repeat_ngram_size=5, # Prevent repetition of 5-gram phrases
# do_sample=True,
# temperature=0.7, # Adjust the randomness of the generated text
# early_stopping=True, # Stop generating text when a suitable ending is found
)
# Decode and print the results
print(tokenizer.decode(outputs[0]))
```
**Training:**
* **Base Model:** google/gemma-1.1-2b-it
* **Dataset:** lamhieu/mabrycodes_dialogue_vi
* **Fine-tuning Method:** LoRA, PEFT and Unsloth
**Example usage repository:** https://github.com/Martincrux/Vietnamese-RAG-system-building-with-vi-gemma-2b-RAG-and-halong_embedding
# Uploaded model
- **Developed by:** [hiieu](https://huggingface.co/hiieu), [himmeow the coder](https://huggingface.co/himmeow), [cuctrinh](https://www.linkedin.com/in/trinh-cuc-5722832b6)
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-1.1-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| [
"CHIA"
] | Non_BioNLP |
RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,728,145,378,000 | 2024-10-05T19:07:51 | 48 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3-mini-4k-instruct-LLaMAfied - GGUF
- Model creator: https://huggingface.co/vonjack/
- Original model: https://huggingface.co/vonjack/Phi-3-mini-4k-instruct-LLaMAfied/
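Any of the GGUF files listed below can be run with a GGUF-compatible runtime such as llama.cpp. A minimal sketch using llama-cpp-python — the file name, context length, and prompt are illustrative assumptions, not part of this release:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Assumed local path to one of the downloaded quantizations
llm = Llama(model_path="Phi-3-mini-4k-instruct-LLaMAfied.Q4_K_M.gguf", n_ctx=4096)

output = llm("Explain in one sentence what GGUF quantization trades off.", max_tokens=64)
print(output["choices"][0]["text"])
```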
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q2_K.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q2_K.gguf) | Q2_K | 1.35GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.IQ3_XS.gguf) | IQ3_XS | 1.49GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.IQ3_S.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.IQ3_M.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.IQ3_M.gguf) | IQ3_M | 1.65GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q3_K.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q3_K.gguf) | Q3_K | 1.75GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q3_K_M.gguf) | Q3_K_M | 1.75GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q3_K_L.gguf) | Q3_K_L | 1.9GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q4_0.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q4_K.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q4_K.gguf) | Q4_K | 2.16GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q4_K_M.gguf) | Q4_K_M | 2.16GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q4_1.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q5_0.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q5_K.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q5_K.gguf) | Q5_K | 2.53GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q5_K_M.gguf) | Q5_K_M | 2.53GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q5_1.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q6_K.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi-3-mini-4k-instruct-LLaMAfied.Q8_0.gguf](https://huggingface.co/RichardErkhov/vonjack_-_Phi-3-mini-4k-instruct-LLaMAfied-gguf/blob/main/Phi-3-mini-4k-instruct-LLaMAfied.Q8_0.gguf) | Q8_0 | 3.78GB |
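The card itself does not include usage instructions. As a minimal sketch, one of the files above can typically be loaded with `llama-cpp-python` (the file name is taken from the table; context size and sampling settings are illustrative):

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above has been downloaded locally
llm = Llama(model_path="Phi-3-mini-4k-instruct-LLaMAfied.Q4_K_M.gguf", n_ctx=4096)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```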
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, Mini version, in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) that each can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
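As a small sketch (not part of the original card), this format can usually be produced programmatically with the tokenizer's chat template rather than by hand; whether the system turn is preserved depends on the model's template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
# add_generation_prompt=True appends the trailing <|assistant|> tag so the model continues from it
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```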
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" (see the sketch below)
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
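A minimal example of the eager-attention fallback mentioned above (the model id is the original Microsoft checkpoint; dtype and other settings are illustrative):

```python
from transformers import AutoModelForCausalLM

# Fall back to standard attention kernels on GPUs without flash attention support (e.g. V100)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="eager",
)
```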
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"MEDQA"
] | Non_BioNLP |
tomaarsen/static-bert-uncased-gooaq | tomaarsen | sentence-similarity | [
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:3012496",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/gooaq",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,727,351,276,000 | 2024-10-18T10:35:51 | 0 | 4 | ---
datasets:
- sentence-transformers/gooaq
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:3012496
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: how to sign legal documents as power of attorney?
sentences:
- 'After the principal''s name, write “by” and then sign your own name. Under or
after the signature line, indicate your status as POA by including any of the
following identifiers: as POA, as Agent, as Attorney in Fact or as Power of Attorney.'
- '[''From the Home screen, swipe left to Apps.'', ''Tap Transfer my Data.'', ''Tap
Menu (...).'', ''Tap Export to SD card.'']'
- Ginger Dank Nugs (Grape) - 350mg. Feast your eyes on these unique and striking
gourmet chocolates; Coco Nugs created by Ginger Dank. Crafted to resemble perfect
nugs of cannabis, each of the 10 buds contains 35mg of THC. ... This is a perfect
product for both cannabis and chocolate lovers, who appreciate a little twist.
- source_sentence: how to delete vdom in fortigate?
sentences:
- Go to System -> VDOM -> VDOM2 and select 'Delete'. This VDOM is now successfully
removed from the configuration.
- 'Both combination birth control pills and progestin-only pills may cause headaches
as a side effect. Additional side effects of birth control pills may include:
breast tenderness. nausea.'
- White cheese tends to show imperfections more readily and as consumers got more
used to yellow-orange cheese, it became an expected option. Today, many cheddars
are yellow. While most cheesemakers use annatto, some use an artificial coloring
agent instead, according to Sachs.
- source_sentence: where are earthquakes most likely to occur on earth?
sentences:
- Zelle in the Bank of the America app is a fast, safe, and easy way to send and
receive money with family and friends who have a bank account in the U.S., all
with no fees. Money moves in minutes directly between accounts that are already
enrolled with Zelle.
- It takes about 3 days for a spacecraft to reach the Moon. During that time a spacecraft
travels at least 240,000 miles (386,400 kilometers) which is the distance between
Earth and the Moon.
- Most earthquakes occur along the edge of the oceanic and continental plates. The
earth's crust (the outer layer of the planet) is made up of several pieces, called
plates. The plates under the oceans are called oceanic plates and the rest are
continental plates.
- source_sentence: fix iphone is disabled connect to itunes without itunes?
sentences:
- To fix a disabled iPhone or iPad without iTunes, you have to erase your device.
Click on the "Erase iPhone" option and confirm your selection. Wait for a while
as the "Find My iPhone" feature will remotely erase your iOS device. Needless
to say, it will also disable its lock.
- How Māui brought fire to the world. One evening, after eating a hearty meal, Māui
lay beside his fire staring into the flames. ... In the middle of the night, while
everyone was sleeping, Māui went from village to village and extinguished all
the fires until not a single fire burned in the world.
- Angry Orchard makes a variety of year-round craft cider styles, including Angry
Orchard Crisp Apple, a fruit-forward hard cider that balances the sweetness of
culinary apples with dryness and bright acidity of bittersweet apples for a complex,
refreshing taste.
- source_sentence: how to reverse a video on tiktok that's not yours?
sentences:
- '[''Tap "Effects" at the bottom of your screen — it\''s an icon that looks like
a clock. Open the Effects menu. ... '', ''At the end of the new list that appears,
tap "Time." Select "Time" at the end. ... '', ''Select "Reverse" — you\''ll then
see a preview of your new, reversed video appear on the screen.'']'
- Franchise Facts Poke Bar has a franchise fee of up to $30,000, with a total initial
investment range of $157,800 to $438,000. The initial cost of a franchise includes
several fees -- Unlock this franchise to better understand the costs such as training
and territory fees.
- Relative age is the age of a rock layer (or the fossils it contains) compared
to other layers. It can be determined by looking at the position of rock layers.
Absolute age is the numeric age of a layer of rocks or fossils. Absolute age can
be determined by using radiometric dating.
co2_eq_emissions:
emissions: 6.448001991119035
energy_consumed: 0.0165885485310573
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.109
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: Static Embeddings with BERT uncased tokenizer finetuned on GooAQ pairs
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: gooaq 1024 dev
type: gooaq-1024-dev
metrics:
- type: cosine_accuracy@1
value: 0.6309
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8409
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8986
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9444
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6309
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28029999999999994
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17972000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09444000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6309
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8409
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8986
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9444
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7932643237589305
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7440336111111036
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7465739001132767
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: gooaq 512 dev
type: gooaq-512-dev
metrics:
- type: cosine_accuracy@1
value: 0.6271
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8366
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8946
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9431
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6271
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27886666666666665
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17892000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09431000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6271
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8366
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8946
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9431
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7904860196985286
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7408453174603101
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7434337897783787
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: gooaq 256 dev
type: gooaq-256-dev
metrics:
- type: cosine_accuracy@1
value: 0.6192
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8235
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8866
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9364
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6192
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27449999999999997
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17732000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09364000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6192
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8235
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8866
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9364
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7821476540310974
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7321259126984055
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7348893313013708
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: gooaq 128 dev
type: gooaq-128-dev
metrics:
- type: cosine_accuracy@1
value: 0.5942
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.804
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8721
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9249
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5942
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.268
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17442000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09249
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5942
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.804
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8721
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9249
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7627845665665897
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7103426587301529
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7133975871277517
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: gooaq 64 dev
type: gooaq-64-dev
metrics:
- type: cosine_accuracy@1
value: 0.556
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7553
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8267
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8945
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.556
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.25176666666666664
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16534000000000001
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08945
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.556
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7553
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8267
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8945
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7246435400765202
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6701957142857087
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6743443703166442
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: gooaq 32 dev
type: gooaq-32-dev
metrics:
- type: cosine_accuracy@1
value: 0.4628
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6619
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7415
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8241
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4628
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2206333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1483
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08241
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4628
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6619
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7415
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8241
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6387155548290799
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5797731349206319
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5857231820662888
name: Cosine Map@100
---
# Static Embeddings with BERT uncased tokenizer finetuned on GooAQ pairs
This is a [sentence-transformers](https://www.SBERT.net) model trained on the [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
This model was trained using the [train_script.py](train_script.py) code.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** inf tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): StaticEmbedding(
(embedding): EmbeddingBag(30522, 1024, mode='mean')
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/static-bert-uncased-gooaq")
# Run inference
sentences = [
"how to reverse a video on tiktok that's not yours?",
'[\'Tap "Effects" at the bottom of your screen — it\\\'s an icon that looks like a clock. Open the Effects menu. ... \', \'At the end of the new list that appears, tap "Time." Select "Time" at the end. ... \', \'Select "Reverse" — you\\\'ll then see a preview of your new, reversed video appear on the screen.\']',
'Relative age is the age of a rock layer (or the fossils it contains) compared to other layers. It can be determined by looking at the position of rock layers. Absolute age is the numeric age of a layer of rocks or fossils. Absolute age can be determined by using radiometric dating.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `gooaq-1024-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6309 |
| cosine_accuracy@3 | 0.8409 |
| cosine_accuracy@5 | 0.8986 |
| cosine_accuracy@10 | 0.9444 |
| cosine_precision@1 | 0.6309 |
| cosine_precision@3 | 0.2803 |
| cosine_precision@5 | 0.1797 |
| cosine_precision@10 | 0.0944 |
| cosine_recall@1 | 0.6309 |
| cosine_recall@3 | 0.8409 |
| cosine_recall@5 | 0.8986 |
| cosine_recall@10 | 0.9444 |
| cosine_ndcg@10 | 0.7933 |
| cosine_mrr@10 | 0.744 |
| **cosine_map@100** | **0.7466** |
#### Information Retrieval
* Dataset: `gooaq-512-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6271 |
| cosine_accuracy@3 | 0.8366 |
| cosine_accuracy@5 | 0.8946 |
| cosine_accuracy@10 | 0.9431 |
| cosine_precision@1 | 0.6271 |
| cosine_precision@3 | 0.2789 |
| cosine_precision@5 | 0.1789 |
| cosine_precision@10 | 0.0943 |
| cosine_recall@1 | 0.6271 |
| cosine_recall@3 | 0.8366 |
| cosine_recall@5 | 0.8946 |
| cosine_recall@10 | 0.9431 |
| cosine_ndcg@10 | 0.7905 |
| cosine_mrr@10 | 0.7408 |
| **cosine_map@100** | **0.7434** |
#### Information Retrieval
* Dataset: `gooaq-256-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6192 |
| cosine_accuracy@3 | 0.8235 |
| cosine_accuracy@5 | 0.8866 |
| cosine_accuracy@10 | 0.9364 |
| cosine_precision@1 | 0.6192 |
| cosine_precision@3 | 0.2745 |
| cosine_precision@5 | 0.1773 |
| cosine_precision@10 | 0.0936 |
| cosine_recall@1 | 0.6192 |
| cosine_recall@3 | 0.8235 |
| cosine_recall@5 | 0.8866 |
| cosine_recall@10 | 0.9364 |
| cosine_ndcg@10 | 0.7821 |
| cosine_mrr@10 | 0.7321 |
| **cosine_map@100** | **0.7349** |
#### Information Retrieval
* Dataset: `gooaq-128-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.5942 |
| cosine_accuracy@3 | 0.804 |
| cosine_accuracy@5 | 0.8721 |
| cosine_accuracy@10 | 0.9249 |
| cosine_precision@1 | 0.5942 |
| cosine_precision@3 | 0.268 |
| cosine_precision@5 | 0.1744 |
| cosine_precision@10 | 0.0925 |
| cosine_recall@1 | 0.5942 |
| cosine_recall@3 | 0.804 |
| cosine_recall@5 | 0.8721 |
| cosine_recall@10 | 0.9249 |
| cosine_ndcg@10 | 0.7628 |
| cosine_mrr@10 | 0.7103 |
| **cosine_map@100** | **0.7134** |
#### Information Retrieval
* Dataset: `gooaq-64-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.556 |
| cosine_accuracy@3 | 0.7553 |
| cosine_accuracy@5 | 0.8267 |
| cosine_accuracy@10 | 0.8945 |
| cosine_precision@1 | 0.556 |
| cosine_precision@3 | 0.2518 |
| cosine_precision@5 | 0.1653 |
| cosine_precision@10 | 0.0895 |
| cosine_recall@1 | 0.556 |
| cosine_recall@3 | 0.7553 |
| cosine_recall@5 | 0.8267 |
| cosine_recall@10 | 0.8945 |
| cosine_ndcg@10 | 0.7246 |
| cosine_mrr@10 | 0.6702 |
| **cosine_map@100** | **0.6743** |
#### Information Retrieval
* Dataset: `gooaq-32-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.4628 |
| cosine_accuracy@3 | 0.6619 |
| cosine_accuracy@5 | 0.7415 |
| cosine_accuracy@10 | 0.8241 |
| cosine_precision@1 | 0.4628 |
| cosine_precision@3 | 0.2206 |
| cosine_precision@5 | 0.1483 |
| cosine_precision@10 | 0.0824 |
| cosine_recall@1 | 0.4628 |
| cosine_recall@3 | 0.6619 |
| cosine_recall@5 | 0.7415 |
| cosine_recall@10 | 0.8241 |
| cosine_ndcg@10 | 0.6387 |
| cosine_mrr@10 | 0.5798 |
| **cosine_map@100** | **0.5857** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### gooaq
* Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
* Size: 3,012,496 training samples
* Columns: <code>question</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 18 characters</li><li>mean: 43.23 characters</li><li>max: 96 characters</li></ul> | <ul><li>min: 55 characters</li><li>mean: 253.36 characters</li><li>max: 371 characters</li></ul> |
* Samples:
| question | answer |
|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what is the difference between broilers and layers?</code> | <code>An egg laying poultry is called egger or layer whereas broilers are reared for obtaining meat. So a layer should be able to produce more number of large sized eggs, without growing too much. On the other hand, a broiler should yield more meat and hence should be able to grow well.</code> |
| <code>what is the difference between chronological order and spatial order?</code> | <code>As a writer, you should always remember that unlike chronological order and the other organizational methods for data, spatial order does not take into account the time. Spatial order is primarily focused on the location. All it does is take into account the location of objects and not the time.</code> |
| <code>is kamagra same as viagra?</code> | <code>Kamagra is thought to contain the same active ingredient as Viagra, sildenafil citrate. In theory, it should work in much the same way as Viagra, taking about 45 minutes to take effect, and lasting for around 4-6 hours. However, this will vary from person to person.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
512,
256,
128,
64,
32
],
"matryoshka_weights": [
1,
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
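As a brief sketch (not generated by the training script itself), this loss configuration corresponds to composing the two losses in Sentence Transformers roughly as follows:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("tomaarsen/static-bert-uncased-gooaq")

# Wrap the in-batch negatives loss so it is applied at each truncated embedding size
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[1024, 512, 256, 128, 64, 32])
```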
### Evaluation Dataset
#### gooaq
* Dataset: [gooaq](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
* Size: 3,012,496 evaluation samples
* Columns: <code>question</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | question | answer |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 18 characters</li><li>mean: 43.17 characters</li><li>max: 98 characters</li></ul> | <ul><li>min: 51 characters</li><li>mean: 254.12 characters</li><li>max: 360 characters</li></ul> |
* Samples:
| question | answer |
|:-----------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>how do i program my directv remote with my tv?</code> | <code>['Press MENU on your remote.', 'Select Settings & Help > Settings > Remote Control > Program Remote.', 'Choose the device (TV, audio, DVD) you wish to program. ... ', 'Follow the on-screen prompts to complete programming.']</code> |
| <code>are rodrigues fruit bats nocturnal?</code> | <code>Before its numbers were threatened by habitat destruction, storms, and hunting, some of those groups could number 500 or more members. Sunrise, sunset. Rodrigues fruit bats are most active at dawn, at dusk, and at night.</code> |
| <code>why does your heart rate increase during exercise bbc bitesize?</code> | <code>During exercise there is an increase in physical activity and muscle cells respire more than they do when the body is at rest. The heart rate increases during exercise. The rate and depth of breathing increases - this makes sure that more oxygen is absorbed into the blood, and more carbon dioxide is removed from it.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
512,
256,
128,
64,
32
],
"matryoshka_weights": [
1,
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `learning_rate`: 0.2
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.2
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | gooaq-1024-dev_cosine_map@100 | gooaq-512-dev_cosine_map@100 | gooaq-256-dev_cosine_map@100 | gooaq-128-dev_cosine_map@100 | gooaq-64-dev_cosine_map@100 | gooaq-32-dev_cosine_map@100 |
|:------:|:----:|:-------------:|:---------------:|:-----------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:---------------------------:|
| 0 | 0 | - | - | 0.2095 | 0.2010 | 0.1735 | 0.1381 | 0.0750 | 0.0331 |
| 0.0007 | 1 | 34.953 | - | - | - | - | - | - | - |
| 0.0682 | 100 | 16.2504 | - | - | - | - | - | - | - |
| 0.1363 | 200 | 5.9502 | - | - | - | - | - | - | - |
| 0.1704 | 250 | - | 1.6781 | 0.6791 | 0.6729 | 0.6619 | 0.6409 | 0.5904 | 0.4934 |
| 0.2045 | 300 | 4.8411 | - | - | - | - | - | - | - |
| 0.2727 | 400 | 4.336 | - | - | - | - | - | - | - |
| 0.3408 | 500 | 4.0484 | 1.3935 | 0.7104 | 0.7055 | 0.6968 | 0.6756 | 0.6322 | 0.5358 |
| 0.4090 | 600 | 3.8378 | - | - | - | - | - | - | - |
| 0.4772 | 700 | 3.6765 | - | - | - | - | - | - | - |
| 0.5112 | 750 | - | 1.2549 | 0.7246 | 0.7216 | 0.7133 | 0.6943 | 0.6482 | 0.5582 |
| 0.5453 | 800 | 3.5439 | - | - | - | - | - | - | - |
| 0.6135 | 900 | 3.4284 | - | - | - | - | - | - | - |
| 0.6817 | 1000 | 3.3576 | 1.1656 | 0.7359 | 0.7338 | 0.7252 | 0.7040 | 0.6604 | 0.5715 |
| 0.7498 | 1100 | 3.2456 | - | - | - | - | - | - | - |
| 0.8180 | 1200 | 3.2014 | - | - | - | - | - | - | - |
| 0.8521 | 1250 | - | 1.1133 | 0.7438 | 0.7398 | 0.7310 | 0.7099 | 0.6704 | 0.5796 |
| 0.8862 | 1300 | 3.1536 | - | - | - | - | - | - | - |
| 0.9543 | 1400 | 3.0696 | - | - | - | - | - | - | - |
| 1.0 | 1467 | - | - | 0.7466 | 0.7434 | 0.7349 | 0.7134 | 0.6743 | 0.5857 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.017 kWh
- **Carbon Emitted**: 0.006 kg of CO2
- **Hours Used**: 0.109 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.2.0.dev0
- Transformers: 4.43.4
- PyTorch: 2.5.0.dev20240807+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"CRAFT"
] | Non_BioNLP |
niancheng/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF | niancheng | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,721,030,190,000 | 2024-07-15T07:56:38 | 26 | 0 | ---
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.511868162026175
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 45.007803189284004
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.20754608934859
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 38.818037697335505
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 39.386760057101945
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.89687154075537
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82153952668092
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.094465801879295
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.65446577183913
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 46.30749237193961
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.581627240203474
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 45.21317724305628
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 42.49825170976724
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 68.23769904483508
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 62.50294403136556
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 44.594104491193555
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 70.55290063940157
- type: v_measure
value: 55.41500719337263
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 28.301882091023288
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 45.26992995191701
- type: v_measure
value: 42.773174876871145
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 71.04138999801822
- type: v_measure
value: 71.7056263158008
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
---
# niancheng/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-1.5B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo niancheng/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo niancheng/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo niancheng/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo niancheng/gte-Qwen2-1.5B-instruct-Q4_K_M-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_k_m.gguf -c 2048
```
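Since the underlying model is a text-embedding model, the GGUF file can also be queried for embeddings programmatically rather than prompted for generation. Below is a minimal sketch using the `llama-cpp-python` bindings; these bindings are not part of this repo, and the local file path assumes you have already downloaded `gte-qwen2-1.5b-instruct-q4_k_m.gguf`.
```python
from llama_cpp import Llama

# Assumes the GGUF file has already been downloaded next to this script
llm = Llama(
    model_path="gte-qwen2-1.5b-instruct-q4_k_m.gguf",
    embedding=True,  # run the model in embedding mode instead of text generation
)

result = llm.create_embedding("What is the capital of France?")
vector = result["data"][0]["embedding"]
print(len(vector))  # embedding dimensionality
```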
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
AIDA-UPM/MARTINI_enrich_BERTopic_realx22report | AIDA-UPM | text-classification | [
"bertopic",
"text-classification",
"region:us"
] | 1,736,792,911,000 | 2025-01-13T18:28:33 | 5 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# MARTINI_enrich_BERTopic_realx22report
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_realx22report")
topic_model.get_topic_info()
```
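The loaded model can also assign topics to unseen documents. The sketch below is a minimal example: the input text is invented, and it assumes the embedding model used during training is available to the loaded pipeline, since BERTopic needs it to embed new documents before calling `transform`.
```python
from bertopic import BERTopic

topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_realx22report")

# Hypothetical new document to route into the existing topics
docs = ["Ballot recount auditors released their Maricopa findings today."]

# transform() returns one topic id per document plus topic probabilities
topics, probs = topic_model.transform(docs)

print(topics[0])                         # assigned topic id
print(topic_model.get_topic(topics[0]))  # top keywords for that topic
```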
## Topic overview
* Number of topics: 30
* Number of training documents: 3061
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | biden - fbi - vaccinated - impeachment - anyone | 21 | -1_biden_fbi_vaccinated_impeachment |
| 0 | patriotism - freedoms - tyranny - volunteer - never | 1651 | 0_patriotism_freedoms_tyranny_volunteer |
| 1 | ballots - maricopa - recount - auditors - leaked | 165 | 1_ballots_maricopa_recount_auditors |
| 2 | protests - unvaccinated - masks - austria - mandatory | 145 | 2_protests_unvaccinated_masks_austria |
| 3 | vaccinated - pfizer - vaers - antibodies - delisted | 91 | 3_vaccinated_pfizer_vaers_antibodies |
| 4 | twitter - musk - shadowbanning - censored - takeover | 79 | 4_twitter_musk_shadowbanning_censored |
| 5 | donetsk - putin - zelensky - bioweapons - nazis | 76 | 5_donetsk_putin_zelensky_bioweapons |
| 6 | fauci - pcr - false - test - swabs | 74 | 6_fauci_pcr_false_test |
| 7 | truthsocial - impersonating - verified - bots - deleted | 47 | 7_truthsocial_impersonating_verified_bots |
| 8 | biden - teleprompter - michelle - jimmy - reporters | 47 | 8_biden_teleprompter_michelle_jimmy |
| 9 | thesecretcurriculum - indoctrinating - teachers - hillsdale - antifa | 46 | 9_thesecretcurriculum_indoctrinating_teachers_hillsdale |
| 10 | trump - indicted - courthouse - guantanamo - supreme | 44 | 10_trump_indicted_courthouse_guantanamo |
| 11 | mueller - clinton - dossier - colluded - indictments | 42 | 11_mueller_clinton_dossier_colluded |
| 12 | truckers - trudeau - beltway - blockade - protesters | 42 | 12_truckers_trudeau_beltway_blockade |
| 13 | traffickers - immigration - blinken - border - texas | 41 | 13_traffickers_immigration_blinken_border |
| 14 | vaccine - mandatory - eeoc - defendingtherepublic - injunction | 39 | 14_vaccine_mandatory_eeoc_defendingtherepublic |
| 15 | senators - filibuster - voted - trillion - taxpayers | 37 | 15_senators_filibuster_voted_trillion |
| 16 | taliban - kabul - bombing - evacuees - stormypatriotjoe | 35 | 16_taliban_kabul_bombing_evacuees |
| 17 | taiwan - norad - cyberattack - stratotankers - zhangjiakou | 32 | 17_taiwan_norad_cyberattack_stratotankers |
| 18 | ghislaine - unsealed - docket - incriminating - john | 32 | 18_ghislaine_unsealed_docket_incriminating |
| 19 | fauci - darpa - nanoscientist - deepfakes - unredacted | 32 | 19_fauci_darpa_nanoscientist_deepfakes |
| 20 | desantis - florida - governor - mandates - brandon | 32 | 20_desantis_florida_governor_mandates |
| 21 | facebook - censorship - misinformation - campaigns - announced | 32 | 21_facebook_censorship_misinformation_campaigns |
| 22 | firearms - illegal - amendment - disarming - hr3015 | 30 | 22_firearms_illegal_amendment_disarming |
| 23 | cnn - newscast - reporter - warnermedia - allegations | 29 | 23_cnn_newscast_reporter_warnermedia |
| 24 | fbi - insurrectionists - january - carlson - congressman | 28 | 24_fbi_insurrectionists_january_carlson |
| 25 | shootings - killed - suspect - deputies - george | 27 | 25_shootings_killed_suspect_deputies |
| 26 | bidenlaptopemails - disinformation - rudy - investigators - huawei | 23 | 26_bidenlaptopemails_disinformation_rudy_investigators |
| 27 | trump - inauguration - accusations - andrews - 45th | 21 | 27_trump_inauguration_accusations_andrews |
| 28 | nyt - defamation - depositions - constitutionally - unverifiable | 21 | 28_nyt_defamation_depositions_constitutionally |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
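As a rough reconstruction, these settings map onto the BERTopic constructor as sketched below. This is inferred from the list above, not the exact training script, and the embedding model used for training is not recorded in this card.
```python
from bertopic import BERTopic

# Hypothetical reconstruction of the configuration listed above
topic_model = BERTopic(
    # embedding_model=...,  # whichever embedding model was used for training
    calculate_probabilities=True,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=None,
    top_n_words=10,
    verbose=False,
    zeroshot_min_similarity=0.7,
    zeroshot_topic_list=None,
)
```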
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.40
* UMAP: 0.5.7
* Pandas: 2.2.3
* Scikit-Learn: 1.5.2
* Sentence-transformers: 3.3.1
* Transformers: 4.46.3
* Numba: 0.60.0
* Plotly: 5.24.1
* Python: 3.10.12
| [
"PCR"
] | Non_BioNLP |
Amir13/xlm-roberta-base-ncbi_disease | Amir13 | token-classification | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:Amir13/ncbi-persian",
"arxiv:2302.09611",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,676,379,557,000 | 2023-03-16T21:05:07 | 15 | 0 | ---
datasets: Amir13/ncbi-persian
license: mit
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-ncbi_disease
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-ncbi_disease
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [ncbi-persian](https://huggingface.co/datasets/Amir13/ncbi-persian) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0915
- Precision: 0.8273
- Recall: 0.8763
- F1: 0.8511
- Accuracy: 0.9866
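Since this is a token-classification (NER) checkpoint for disease mentions in Persian text, a minimal inference sketch with the `transformers` pipeline API might look like the following; the example sentence is invented, and `aggregation_strategy="simple"` is just one reasonable way to merge word pieces into entity spans.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Amir13/xlm-roberta-base-ncbi_disease",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

# Hypothetical Persian sentence containing a disease mention
text = "بیمار مبتلا به دیابت نوع دو است."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```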
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a sketch mapping them onto `TrainingArguments` follows the list:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
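As a rough reconstruction, these settings correspond to the `transformers.TrainingArguments` below; the output directory name is an assumption, and the Adam betas/epsilon listed above are the library defaults.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-ncbi_disease",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```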
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 169 | 0.0682 | 0.7049 | 0.7763 | 0.7389 | 0.9784 |
| No log | 2.0 | 338 | 0.0575 | 0.7558 | 0.8592 | 0.8042 | 0.9832 |
| 0.0889 | 3.0 | 507 | 0.0558 | 0.8092 | 0.8592 | 0.8334 | 0.9859 |
| 0.0889 | 4.0 | 676 | 0.0595 | 0.8316 | 0.8579 | 0.8446 | 0.9858 |
| 0.0889 | 5.0 | 845 | 0.0665 | 0.7998 | 0.8566 | 0.8272 | 0.9850 |
| 0.0191 | 6.0 | 1014 | 0.0796 | 0.8229 | 0.85 | 0.8362 | 0.9862 |
| 0.0191 | 7.0 | 1183 | 0.0783 | 0.8193 | 0.8474 | 0.8331 | 0.9860 |
| 0.0191 | 8.0 | 1352 | 0.0792 | 0.8257 | 0.8539 | 0.8396 | 0.9864 |
| 0.0079 | 9.0 | 1521 | 0.0847 | 0.8154 | 0.8658 | 0.8398 | 0.9851 |
| 0.0079 | 10.0 | 1690 | 0.0855 | 0.8160 | 0.875 | 0.8444 | 0.9857 |
| 0.0079 | 11.0 | 1859 | 0.0868 | 0.8081 | 0.8645 | 0.8353 | 0.9864 |
| 0.0037 | 12.0 | 2028 | 0.0912 | 0.8036 | 0.8776 | 0.8390 | 0.9853 |
| 0.0037 | 13.0 | 2197 | 0.0907 | 0.8323 | 0.8684 | 0.8500 | 0.9868 |
| 0.0037 | 14.0 | 2366 | 0.0899 | 0.8192 | 0.8763 | 0.8468 | 0.9865 |
| 0.0023 | 15.0 | 2535 | 0.0915 | 0.8273 | 0.8763 | 0.8511 | 0.9866 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
### Citation
If you use the datasets and models in this repository, please cite the following paper.
```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.09611,
doi = {10.48550/ARXIV.2302.09611},
url = {https://arxiv.org/abs/2302.09611},
author = {Sartipi, Amir and Fatemi, Afsaneh},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
publisher = {arXiv},
year = {2023},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| [
"NCBI DISEASE"
] | BioNLP |
twadada/lma_noabtt | twadada | null | [
"mteb",
"model-index",
"region:us"
] | 1,726,129,059,000 | 2024-09-12T08:17:52 | 0 | 0 | ---
tags:
- mteb
model-index:
- name: llama3_STSprompt_noabtt_new
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: None
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.53731343283582
- type: ap
value: 35.09550935769008
- type: f1
value: 66.39064790565796
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: None
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 68.46487499999999
- type: ap
value: 63.194645384561895
- type: f1
value: 68.14318967696387
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: None
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 36.722
- type: f1
value: 36.0815270010246
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: None
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 17.994
- type: map_at_10
value: 29.559
- type: map_at_100
value: 30.679000000000002
- type: map_at_1000
value: 30.73
- type: map_at_3
value: 25.319999999999997
- type: map_at_5
value: 27.71
- type: mrr_at_1
value: 18.421000000000003
- type: mrr_at_10
value: 29.73
- type: mrr_at_100
value: 30.85
- type: mrr_at_1000
value: 30.901
- type: mrr_at_3
value: 25.45
- type: mrr_at_5
value: 27.883000000000003
- type: ndcg_at_1
value: 17.994
- type: ndcg_at_10
value: 36.642
- type: ndcg_at_100
value: 42.247
- type: ndcg_at_1000
value: 43.579
- type: ndcg_at_3
value: 27.865000000000002
- type: ndcg_at_5
value: 32.171
- type: precision_at_1
value: 17.994
- type: precision_at_10
value: 5.953
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 11.759
- type: precision_at_5
value: 9.147
- type: recall_at_1
value: 17.994
- type: recall_at_10
value: 59.531
- type: recall_at_100
value: 86.06
- type: recall_at_1000
value: 96.515
- type: recall_at_3
value: 35.276999999999994
- type: recall_at_5
value: 45.733000000000004
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: None
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 31.66949406382408
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: None
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 24.14545126769505
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: None
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 52.74601918537465
- type: mrr
value: 66.62885283383898
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: None
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 72.6753460041215
- type: cos_sim_spearman
value: 71.97935094171385
- type: euclidean_pearson
value: 72.35700410190371
- type: euclidean_spearman
value: 71.97935094171385
- type: manhattan_pearson
value: 73.65568445281664
- type: manhattan_spearman
value: 73.25166908897245
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: None
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 70.62337662337663
- type: f1
value: 69.8634895921265
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: None
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 30.04921398276571
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: None
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 22.3114828809719
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: None
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 20.685000000000002
- type: map_at_10
value: 28.016000000000002
- type: map_at_100
value: 29.028
- type: map_at_1000
value: 29.176999999999996
- type: map_at_3
value: 25.873
- type: map_at_5
value: 27.05
- type: mrr_at_1
value: 26.466
- type: mrr_at_10
value: 33.684999999999995
- type: mrr_at_100
value: 34.449999999999996
- type: mrr_at_1000
value: 34.522999999999996
- type: mrr_at_3
value: 31.879
- type: mrr_at_5
value: 32.836999999999996
- type: ndcg_at_1
value: 26.466
- type: ndcg_at_10
value: 32.897
- type: ndcg_at_100
value: 37.433
- type: ndcg_at_1000
value: 40.393
- type: ndcg_at_3
value: 29.793999999999997
- type: ndcg_at_5
value: 31.051000000000002
- type: precision_at_1
value: 26.466
- type: precision_at_10
value: 6.209
- type: precision_at_100
value: 1.057
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 14.591999999999999
- type: precision_at_5
value: 10.272
- type: recall_at_1
value: 20.685000000000002
- type: recall_at_10
value: 41.284
- type: recall_at_100
value: 61.91
- type: recall_at_1000
value: 81.846
- type: recall_at_3
value: 31.144
- type: recall_at_5
value: 35.33
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: None
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 18.683
- type: map_at_10
value: 24.818
- type: map_at_100
value: 25.702
- type: map_at_1000
value: 25.819
- type: map_at_3
value: 22.671
- type: map_at_5
value: 23.846
- type: mrr_at_1
value: 23.885
- type: mrr_at_10
value: 29.454
- type: mrr_at_100
value: 30.201
- type: mrr_at_1000
value: 30.270000000000003
- type: mrr_at_3
value: 27.537
- type: mrr_at_5
value: 28.594
- type: ndcg_at_1
value: 23.885
- type: ndcg_at_10
value: 28.961
- type: ndcg_at_100
value: 33.128
- type: ndcg_at_1000
value: 35.919000000000004
- type: ndcg_at_3
value: 25.455
- type: ndcg_at_5
value: 27.038
- type: precision_at_1
value: 23.885
- type: precision_at_10
value: 5.459
- type: precision_at_100
value: 0.9440000000000001
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 12.081
- type: precision_at_5
value: 8.738999999999999
- type: recall_at_1
value: 18.683
- type: recall_at_10
value: 36.616
- type: recall_at_100
value: 55.126
- type: recall_at_1000
value: 74.32600000000001
- type: recall_at_3
value: 26.540000000000003
- type: recall_at_5
value: 30.812
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: None
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 25.025
- type: map_at_10
value: 32.903999999999996
- type: map_at_100
value: 33.857
- type: map_at_1000
value: 33.945
- type: map_at_3
value: 30.711
- type: map_at_5
value: 31.972
- type: mrr_at_1
value: 29.279
- type: mrr_at_10
value: 36.281
- type: mrr_at_100
value: 37.119
- type: mrr_at_1000
value: 37.181999999999995
- type: mrr_at_3
value: 34.316
- type: mrr_at_5
value: 35.437999999999995
- type: ndcg_at_1
value: 29.279
- type: ndcg_at_10
value: 37.333
- type: ndcg_at_100
value: 41.948
- type: ndcg_at_1000
value: 44.249
- type: ndcg_at_3
value: 33.311
- type: ndcg_at_5
value: 35.251
- type: precision_at_1
value: 29.279
- type: precision_at_10
value: 5.956
- type: precision_at_100
value: 0.8959999999999999
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 14.774999999999999
- type: precision_at_5
value: 10.194
- type: recall_at_1
value: 25.025
- type: recall_at_10
value: 47.583
- type: recall_at_100
value: 68.42
- type: recall_at_1000
value: 85.47
- type: recall_at_3
value: 36.595
- type: recall_at_5
value: 41.461999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: None
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 11.356
- type: map_at_10
value: 15.417
- type: map_at_100
value: 15.988
- type: map_at_1000
value: 16.088
- type: map_at_3
value: 13.941999999999998
- type: map_at_5
value: 14.651
- type: mrr_at_1
value: 12.316
- type: mrr_at_10
value: 16.451999999999998
- type: mrr_at_100
value: 17.061999999999998
- type: mrr_at_1000
value: 17.158
- type: mrr_at_3
value: 14.953
- type: mrr_at_5
value: 15.733
- type: ndcg_at_1
value: 12.316
- type: ndcg_at_10
value: 18.109
- type: ndcg_at_100
value: 21.537
- type: ndcg_at_1000
value: 24.512999999999998
- type: ndcg_at_3
value: 15.058
- type: ndcg_at_5
value: 16.31
- type: precision_at_1
value: 12.316
- type: precision_at_10
value: 2.915
- type: precision_at_100
value: 0.494
- type: precision_at_1000
value: 0.078
- type: precision_at_3
value: 6.29
- type: precision_at_5
value: 4.542
- type: recall_at_1
value: 11.356
- type: recall_at_10
value: 25.55
- type: recall_at_100
value: 42.369
- type: recall_at_1000
value: 65.642
- type: recall_at_3
value: 17.269000000000002
- type: recall_at_5
value: 20.139000000000003
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: None
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 5.926
- type: map_at_10
value: 9.023
- type: map_at_100
value: 9.705
- type: map_at_1000
value: 9.814
- type: map_at_3
value: 8.043
- type: map_at_5
value: 8.609
- type: mrr_at_1
value: 7.960000000000001
- type: mrr_at_10
value: 11.472999999999999
- type: mrr_at_100
value: 12.225
- type: mrr_at_1000
value: 12.316
- type: mrr_at_3
value: 10.365
- type: mrr_at_5
value: 11.049000000000001
- type: ndcg_at_1
value: 7.960000000000001
- type: ndcg_at_10
value: 11.266
- type: ndcg_at_100
value: 15.092
- type: ndcg_at_1000
value: 18.159
- type: ndcg_at_3
value: 9.343
- type: ndcg_at_5
value: 10.302
- type: precision_at_1
value: 7.960000000000001
- type: precision_at_10
value: 2.1270000000000002
- type: precision_at_100
value: 0.48
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 4.601999999999999
- type: precision_at_5
value: 3.383
- type: recall_at_1
value: 5.926
- type: recall_at_10
value: 15.873999999999999
- type: recall_at_100
value: 33.274
- type: recall_at_1000
value: 55.799
- type: recall_at_3
value: 10.571
- type: recall_at_5
value: 12.986
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: None
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 17.639
- type: map_at_10
value: 22.948
- type: map_at_100
value: 24.104
- type: map_at_1000
value: 24.226
- type: map_at_3
value: 21.128
- type: map_at_5
value: 22.218
- type: mrr_at_1
value: 21.559
- type: mrr_at_10
value: 27.443
- type: mrr_at_100
value: 28.363
- type: mrr_at_1000
value: 28.438000000000002
- type: mrr_at_3
value: 25.409
- type: mrr_at_5
value: 26.558999999999997
- type: ndcg_at_1
value: 21.559
- type: ndcg_at_10
value: 26.863999999999997
- type: ndcg_at_100
value: 32.417
- type: ndcg_at_1000
value: 35.3
- type: ndcg_at_3
value: 23.658
- type: ndcg_at_5
value: 25.240000000000002
- type: precision_at_1
value: 21.559
- type: precision_at_10
value: 4.859999999999999
- type: precision_at_100
value: 0.9159999999999999
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 11.004
- type: precision_at_5
value: 7.931000000000001
- type: recall_at_1
value: 17.639
- type: recall_at_10
value: 34.245
- type: recall_at_100
value: 58.754
- type: recall_at_1000
value: 79.14099999999999
- type: recall_at_3
value: 25.072
- type: recall_at_5
value: 29.334
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: None
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 11.109
- type: map_at_10
value: 16.184
- type: map_at_100
value: 17.127
- type: map_at_1000
value: 17.247
- type: map_at_3
value: 14.389
- type: map_at_5
value: 15.323999999999998
- type: mrr_at_1
value: 13.699
- type: mrr_at_10
value: 19.358
- type: mrr_at_100
value: 20.233999999999998
- type: mrr_at_1000
value: 20.318
- type: mrr_at_3
value: 17.561
- type: mrr_at_5
value: 18.48
- type: ndcg_at_1
value: 13.699
- type: ndcg_at_10
value: 19.779
- type: ndcg_at_100
value: 24.352999999999998
- type: ndcg_at_1000
value: 27.633999999999997
- type: ndcg_at_3
value: 16.414
- type: ndcg_at_5
value: 17.802
- type: precision_at_1
value: 13.699
- type: precision_at_10
value: 3.8699999999999997
- type: precision_at_100
value: 0.729
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 8.067
- type: precision_at_5
value: 5.913
- type: recall_at_1
value: 11.109
- type: recall_at_10
value: 27.606
- type: recall_at_100
value: 47.333999999999996
- type: recall_at_1000
value: 71.466
- type: recall_at_3
value: 18.209
- type: recall_at_5
value: 21.785
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 13.838916666666668
- type: map_at_10
value: 18.685583333333337
- type: map_at_100
value: 19.5005
- type: map_at_1000
value: 19.614833333333333
- type: map_at_3
value: 17.090333333333334
- type: map_at_5
value: 17.965833333333332
- type: mrr_at_1
value: 16.78108333333334
- type: mrr_at_10
value: 21.741
- type: mrr_at_100
value: 22.48608333333333
- type: mrr_at_1000
value: 22.568333333333335
- type: mrr_at_3
value: 20.163833333333333
- type: mrr_at_5
value: 21.01383333333333
- type: ndcg_at_1
value: 16.78108333333334
- type: ndcg_at_10
value: 21.979666666666667
- type: ndcg_at_100
value: 26.148500000000002
- type: ndcg_at_1000
value: 29.099749999999997
- type: ndcg_at_3
value: 19.107250000000004
- type: ndcg_at_5
value: 20.390916666666666
- type: precision_at_1
value: 16.78108333333334
- type: precision_at_10
value: 3.9185833333333338
- type: precision_at_100
value: 0.7166666666666665
- type: precision_at_1000
value: 0.11350000000000002
- type: precision_at_3
value: 8.856166666666667
- type: precision_at_5
value: 6.328749999999999
- type: recall_at_1
value: 13.838916666666668
- type: recall_at_10
value: 28.876583333333333
- type: recall_at_100
value: 48.09441666666666
- type: recall_at_1000
value: 69.68258333333333
- type: recall_at_3
value: 20.706999999999997
- type: recall_at_5
value: 24.09425
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: None
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 9.577
- type: map_at_10
value: 13.633999999999999
- type: map_at_100
value: 14.248
- type: map_at_1000
value: 14.322
- type: map_at_3
value: 12.423
- type: map_at_5
value: 13.083
- type: mrr_at_1
value: 11.503
- type: mrr_at_10
value: 15.67
- type: mrr_at_100
value: 16.292
- type: mrr_at_1000
value: 16.356
- type: mrr_at_3
value: 14.494000000000002
- type: mrr_at_5
value: 15.161
- type: ndcg_at_1
value: 11.503
- type: ndcg_at_10
value: 16.195999999999998
- type: ndcg_at_100
value: 19.733999999999998
- type: ndcg_at_1000
value: 21.956
- type: ndcg_at_3
value: 13.947000000000001
- type: ndcg_at_5
value: 14.976999999999999
- type: precision_at_1
value: 11.503
- type: precision_at_10
value: 2.868
- type: precision_at_100
value: 0.512
- type: precision_at_1000
value: 0.076
- type: precision_at_3
value: 6.544
- type: precision_at_5
value: 4.601
- type: recall_at_1
value: 9.577
- type: recall_at_10
value: 22.055
- type: recall_at_100
value: 39.104
- type: recall_at_1000
value: 56.165
- type: recall_at_3
value: 15.719
- type: recall_at_5
value: 18.453
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: None
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 7.266
- type: map_at_10
value: 10.41
- type: map_at_100
value: 10.979
- type: map_at_1000
value: 11.084
- type: map_at_3
value: 9.379
- type: map_at_5
value: 9.937999999999999
- type: mrr_at_1
value: 9.188
- type: mrr_at_10
value: 12.662
- type: mrr_at_100
value: 13.257
- type: mrr_at_1000
value: 13.345
- type: mrr_at_3
value: 11.557
- type: mrr_at_5
value: 12.124
- type: ndcg_at_1
value: 9.188
- type: ndcg_at_10
value: 12.681999999999999
- type: ndcg_at_100
value: 15.937999999999999
- type: ndcg_at_1000
value: 19.088
- type: ndcg_at_3
value: 10.772
- type: ndcg_at_5
value: 11.591999999999999
- type: precision_at_1
value: 9.188
- type: precision_at_10
value: 2.3810000000000002
- type: precision_at_100
value: 0.481
- type: precision_at_1000
value: 0.089
- type: precision_at_3
value: 5.196
- type: precision_at_5
value: 3.758
- type: recall_at_1
value: 7.266
- type: recall_at_10
value: 17.433
- type: recall_at_100
value: 32.822
- type: recall_at_1000
value: 56.45099999999999
- type: recall_at_3
value: 11.928999999999998
- type: recall_at_5
value: 14.152999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: None
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 13.674
- type: map_at_10
value: 17.061999999999998
- type: map_at_100
value: 17.845
- type: map_at_1000
value: 17.944
- type: map_at_3
value: 15.856
- type: map_at_5
value: 16.478
- type: mrr_at_1
value: 16.418
- type: mrr_at_10
value: 19.999
- type: mrr_at_100
value: 20.798
- type: mrr_at_1000
value: 20.884
- type: mrr_at_3
value: 18.657
- type: mrr_at_5
value: 19.31
- type: ndcg_at_1
value: 16.418
- type: ndcg_at_10
value: 19.64
- type: ndcg_at_100
value: 23.926
- type: ndcg_at_1000
value: 26.889999999999997
- type: ndcg_at_3
value: 17.283
- type: ndcg_at_5
value: 18.232
- type: precision_at_1
value: 16.418
- type: precision_at_10
value: 3.125
- type: precision_at_100
value: 0.59
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 7.494000000000001
- type: precision_at_5
value: 5.149
- type: recall_at_1
value: 13.674
- type: recall_at_10
value: 24.917
- type: recall_at_100
value: 44.851
- type: recall_at_1000
value: 67.123
- type: recall_at_3
value: 18.275
- type: recall_at_5
value: 20.757
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: None
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 15.392
- type: map_at_10
value: 20.598
- type: map_at_100
value: 21.512
- type: map_at_1000
value: 21.698999999999998
- type: map_at_3
value: 18.884
- type: map_at_5
value: 19.911
- type: mrr_at_1
value: 18.379
- type: mrr_at_10
value: 23.854
- type: mrr_at_100
value: 24.581
- type: mrr_at_1000
value: 24.685000000000002
- type: mrr_at_3
value: 22.299
- type: mrr_at_5
value: 23.119
- type: ndcg_at_1
value: 18.379
- type: ndcg_at_10
value: 24.285999999999998
- type: ndcg_at_100
value: 28.53
- type: ndcg_at_1000
value: 32.124
- type: ndcg_at_3
value: 21.504
- type: ndcg_at_5
value: 22.853
- type: precision_at_1
value: 18.379
- type: precision_at_10
value: 4.684
- type: precision_at_100
value: 1.002
- type: precision_at_1000
value: 0.181
- type: precision_at_3
value: 10.145
- type: precision_at_5
value: 7.470000000000001
- type: recall_at_1
value: 15.392
- type: recall_at_10
value: 30.941000000000003
- type: recall_at_100
value: 51.361000000000004
- type: recall_at_1000
value: 75.82900000000001
- type: recall_at_3
value: 22.823
- type: recall_at_5
value: 26.495
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: None
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 9.735000000000001
- type: map_at_10
value: 13.213
- type: map_at_100
value: 13.911000000000001
- type: map_at_1000
value: 14.013
- type: map_at_3
value: 11.785
- type: map_at_5
value: 12.509999999999998
- type: mrr_at_1
value: 10.721
- type: mrr_at_10
value: 14.560999999999998
- type: mrr_at_100
value: 15.251000000000001
- type: mrr_at_1000
value: 15.345
- type: mrr_at_3
value: 12.939
- type: mrr_at_5
value: 13.761999999999999
- type: ndcg_at_1
value: 10.721
- type: ndcg_at_10
value: 15.742999999999999
- type: ndcg_at_100
value: 19.746
- type: ndcg_at_1000
value: 22.972
- type: ndcg_at_3
value: 12.748000000000001
- type: ndcg_at_5
value: 14.043
- type: precision_at_1
value: 10.721
- type: precision_at_10
value: 2.569
- type: precision_at_100
value: 0.49899999999999994
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 5.484
- type: precision_at_5
value: 3.993
- type: recall_at_1
value: 9.735000000000001
- type: recall_at_10
value: 22.415
- type: recall_at_100
value: 41.808
- type: recall_at_1000
value: 66.93299999999999
- type: recall_at_3
value: 14.338000000000001
- type: recall_at_5
value: 17.424999999999997
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: None
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 6.368
- type: map_at_10
value: 11.819
- type: map_at_100
value: 13.311
- type: map_at_1000
value: 13.517000000000001
- type: map_at_3
value: 9.451
- type: map_at_5
value: 10.68
- type: mrr_at_1
value: 14.463000000000001
- type: mrr_at_10
value: 23.502000000000002
- type: mrr_at_100
value: 24.635
- type: mrr_at_1000
value: 24.693
- type: mrr_at_3
value: 20.25
- type: mrr_at_5
value: 22.015
- type: ndcg_at_1
value: 14.463000000000001
- type: ndcg_at_10
value: 17.832
- type: ndcg_at_100
value: 24.514
- type: ndcg_at_1000
value: 28.395
- type: ndcg_at_3
value: 13.378
- type: ndcg_at_5
value: 15.078
- type: precision_at_1
value: 14.463000000000001
- type: precision_at_10
value: 6.065
- type: precision_at_100
value: 1.319
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 10.25
- type: precision_at_5
value: 8.534
- type: recall_at_1
value: 6.368
- type: recall_at_10
value: 23.093
- type: recall_at_100
value: 46.664
- type: recall_at_1000
value: 68.657
- type: recall_at_3
value: 12.711
- type: recall_at_5
value: 16.858
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: None
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 3.75
- type: map_at_10
value: 8.94
- type: map_at_100
value: 12.631999999999998
- type: map_at_1000
value: 13.541
- type: map_at_3
value: 6.140000000000001
- type: map_at_5
value: 7.432999999999999
- type: mrr_at_1
value: 39.5
- type: mrr_at_10
value: 48.983
- type: mrr_at_100
value: 49.75
- type: mrr_at_1000
value: 49.782
- type: mrr_at_3
value: 46.125
- type: mrr_at_5
value: 47.825
- type: ndcg_at_1
value: 28.249999999999996
- type: ndcg_at_10
value: 22.241
- type: ndcg_at_100
value: 25.387999999999998
- type: ndcg_at_1000
value: 32.11
- type: ndcg_at_3
value: 24.495
- type: ndcg_at_5
value: 23.402
- type: precision_at_1
value: 39.5
- type: precision_at_10
value: 19.675
- type: precision_at_100
value: 6.3
- type: precision_at_1000
value: 1.3259999999999998
- type: precision_at_3
value: 29.666999999999998
- type: precision_at_5
value: 25.45
- type: recall_at_1
value: 3.75
- type: recall_at_10
value: 13.766
- type: recall_at_100
value: 31.915
- type: recall_at_1000
value: 54.85
- type: recall_at_3
value: 7.167
- type: recall_at_5
value: 9.728
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: None
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 44.135000000000005
- type: f1
value: 40.132852463847094
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: None
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 16.445999999999998
- type: map_at_10
value: 24.275
- type: map_at_100
value: 25.205
- type: map_at_1000
value: 25.278
- type: map_at_3
value: 21.813
- type: map_at_5
value: 23.254
- type: mrr_at_1
value: 17.567
- type: mrr_at_10
value: 25.733
- type: mrr_at_100
value: 26.663999999999998
- type: mrr_at_1000
value: 26.727
- type: mrr_at_3
value: 23.169999999999998
- type: mrr_at_5
value: 24.666
- type: ndcg_at_1
value: 17.567
- type: ndcg_at_10
value: 28.937
- type: ndcg_at_100
value: 33.757
- type: ndcg_at_1000
value: 35.792
- type: ndcg_at_3
value: 23.91
- type: ndcg_at_5
value: 26.485999999999997
- type: precision_at_1
value: 17.567
- type: precision_at_10
value: 4.58
- type: precision_at_100
value: 0.718
- type: precision_at_1000
value: 0.091
- type: precision_at_3
value: 10.241
- type: precision_at_5
value: 7.521999999999999
- type: recall_at_1
value: 16.445999999999998
- type: recall_at_10
value: 42.152
- type: recall_at_100
value: 64.795
- type: recall_at_1000
value: 80.54
- type: recall_at_3
value: 28.608
- type: recall_at_5
value: 34.771
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: None
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 6.658
- type: map_at_10
value: 10.437000000000001
- type: map_at_100
value: 11.411
- type: map_at_1000
value: 11.581
- type: map_at_3
value: 8.876000000000001
- type: map_at_5
value: 9.806
- type: mrr_at_1
value: 13.117
- type: mrr_at_10
value: 18.447
- type: mrr_at_100
value: 19.363
- type: mrr_at_1000
value: 19.461000000000002
- type: mrr_at_3
value: 16.512
- type: mrr_at_5
value: 17.607999999999997
- type: ndcg_at_1
value: 13.117
- type: ndcg_at_10
value: 14.277999999999999
- type: ndcg_at_100
value: 19.259999999999998
- type: ndcg_at_1000
value: 23.27
- type: ndcg_at_3
value: 11.965
- type: ndcg_at_5
value: 13.020000000000001
- type: precision_at_1
value: 13.117
- type: precision_at_10
value: 3.92
- type: precision_at_100
value: 0.8920000000000001
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 7.819
- type: precision_at_5
value: 6.019
- type: recall_at_1
value: 6.658
- type: recall_at_10
value: 17.913999999999998
- type: recall_at_100
value: 37.687
- type: recall_at_1000
value: 62.647
- type: recall_at_3
value: 10.908
- type: recall_at_5
value: 14.381
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: None
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 18.15
- type: map_at_10
value: 25.289
- type: map_at_100
value: 26.174999999999997
- type: map_at_1000
value: 26.282
- type: map_at_3
value: 23.31
- type: map_at_5
value: 24.367
- type: mrr_at_1
value: 36.3
- type: mrr_at_10
value: 43.134
- type: mrr_at_100
value: 43.854
- type: mrr_at_1000
value: 43.908
- type: mrr_at_3
value: 41.276
- type: mrr_at_5
value: 42.315000000000005
- type: ndcg_at_1
value: 36.3
- type: ndcg_at_10
value: 32.193
- type: ndcg_at_100
value: 36.301
- type: ndcg_at_1000
value: 38.853
- type: ndcg_at_3
value: 28.477000000000004
- type: ndcg_at_5
value: 30.223
- type: precision_at_1
value: 36.3
- type: precision_at_10
value: 7.051
- type: precision_at_100
value: 1.0330000000000001
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 17.889
- type: precision_at_5
value: 12.119
- type: recall_at_1
value: 18.15
- type: recall_at_10
value: 35.253
- type: recall_at_100
value: 51.668000000000006
- type: recall_at_1000
value: 68.717
- type: recall_at_3
value: 26.833000000000002
- type: recall_at_5
value: 30.297
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: None
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 66.5428
- type: ap
value: 61.12502572883321
- type: f1
value: 66.3624123025287
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: None
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 5.946
- type: map_at_10
value: 10.17
- type: map_at_100
value: 11.058
- type: map_at_1000
value: 11.164
- type: map_at_3
value: 8.414000000000001
- type: map_at_5
value: 9.316
- type: mrr_at_1
value: 6.132
- type: mrr_at_10
value: 10.459999999999999
- type: mrr_at_100
value: 11.354000000000001
- type: mrr_at_1000
value: 11.456
- type: mrr_at_3
value: 8.674999999999999
- type: mrr_at_5
value: 9.592
- type: ndcg_at_1
value: 6.089
- type: ndcg_at_10
value: 13.027
- type: ndcg_at_100
value: 17.9
- type: ndcg_at_1000
value: 21.053
- type: ndcg_at_3
value: 9.314
- type: ndcg_at_5
value: 10.943999999999999
- type: precision_at_1
value: 6.089
- type: precision_at_10
value: 2.291
- type: precision_at_100
value: 0.482
- type: precision_at_1000
value: 0.075
- type: precision_at_3
value: 4.031
- type: precision_at_5
value: 3.235
- type: recall_at_1
value: 5.946
- type: recall_at_10
value: 22.017999999999997
- type: recall_at_100
value: 45.811
- type: recall_at_1000
value: 71.039
- type: recall_at_3
value: 11.691
- type: recall_at_5
value: 15.618000000000002
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: None
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.25763793889648
- type: f1
value: 89.49997029740435
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: None
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 63.0095759233926
- type: f1
value: 44.42212537420128
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: None
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.641560188298584
- type: f1
value: 60.99041094723092
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: None
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.90450571620714
- type: f1
value: 69.64935741375153
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: None
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 26.23206350160216
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: None
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 24.265258673517774
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: None
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 28.796475009504274
- type: mrr
value: 29.653223337884594
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: None
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 4.811
- type: map_at_10
value: 8.58
- type: map_at_100
value: 10.663
- type: map_at_1000
value: 11.855
- type: map_at_3
value: 6.842
- type: map_at_5
value: 7.643
- type: mrr_at_1
value: 34.985
- type: mrr_at_10
value: 44.877
- type: mrr_at_100
value: 45.622
- type: mrr_at_1000
value: 45.684000000000005
- type: mrr_at_3
value: 42.466
- type: mrr_at_5
value: 43.782
- type: ndcg_at_1
value: 33.282000000000004
- type: ndcg_at_10
value: 25.583
- type: ndcg_at_100
value: 23.957
- type: ndcg_at_1000
value: 33.216
- type: ndcg_at_3
value: 29.86
- type: ndcg_at_5
value: 27.883000000000003
- type: precision_at_1
value: 34.985
- type: precision_at_10
value: 18.142
- type: precision_at_100
value: 6.245
- type: precision_at_1000
value: 1.894
- type: precision_at_3
value: 27.554000000000002
- type: precision_at_5
value: 23.034
- type: recall_at_1
value: 4.811
- type: recall_at_10
value: 12.264999999999999
- type: recall_at_100
value: 25.483
- type: recall_at_1000
value: 58.396
- type: recall_at_3
value: 7.888000000000001
- type: recall_at_5
value: 9.607000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: None
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 8.876000000000001
- type: map_at_10
value: 14.799999999999999
- type: map_at_100
value: 15.967999999999998
- type: map_at_1000
value: 16.070999999999998
- type: map_at_3
value: 12.422
- type: map_at_5
value: 13.628000000000002
- type: mrr_at_1
value: 9.994
- type: mrr_at_10
value: 16.250999999999998
- type: mrr_at_100
value: 17.341
- type: mrr_at_1000
value: 17.43
- type: mrr_at_3
value: 13.808000000000002
- type: mrr_at_5
value: 15.057
- type: ndcg_at_1
value: 9.994
- type: ndcg_at_10
value: 18.887
- type: ndcg_at_100
value: 24.878
- type: ndcg_at_1000
value: 27.744000000000003
- type: ndcg_at_3
value: 13.921
- type: ndcg_at_5
value: 16.083
- type: precision_at_1
value: 9.994
- type: precision_at_10
value: 3.54
- type: precision_at_100
value: 0.698
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 6.489000000000001
- type: precision_at_5
value: 5.116
- type: recall_at_1
value: 8.876000000000001
- type: recall_at_10
value: 30.272
- type: recall_at_100
value: 58.097
- type: recall_at_1000
value: 80.207
- type: recall_at_3
value: 16.903000000000002
- type: recall_at_5
value: 21.948999999999998
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: None
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 61.650000000000006
- type: map_at_10
value: 73.951
- type: map_at_100
value: 74.725
- type: map_at_1000
value: 74.76
- type: map_at_3
value: 71.13199999999999
- type: map_at_5
value: 72.785
- type: mrr_at_1
value: 71.07
- type: mrr_at_10
value: 78.533
- type: mrr_at_100
value: 78.805
- type: mrr_at_1000
value: 78.81099999999999
- type: mrr_at_3
value: 77.108
- type: mrr_at_5
value: 77.986
- type: ndcg_at_1
value: 71.09
- type: ndcg_at_10
value: 78.717
- type: ndcg_at_100
value: 81.01
- type: ndcg_at_1000
value: 81.448
- type: ndcg_at_3
value: 75.223
- type: ndcg_at_5
value: 76.913
- type: precision_at_1
value: 71.09
- type: precision_at_10
value: 11.873000000000001
- type: precision_at_100
value: 1.43
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 32.6
- type: precision_at_5
value: 21.46
- type: recall_at_1
value: 61.650000000000006
- type: recall_at_10
value: 87.759
- type: recall_at_100
value: 96.697
- type: recall_at_1000
value: 99.213
- type: recall_at_3
value: 77.627
- type: recall_at_5
value: 82.433
- type: map_at_1
value: 2.973
- type: map_at_10
value: 6.963
- type: map_at_100
value: 8.179
- type: map_at_1000
value: 8.391
- type: map_at_3
value: 5.1209999999999996
- type: map_at_5
value: 6.1
- type: mrr_at_1
value: 14.6
- type: mrr_at_10
value: 22.519
- type: mrr_at_100
value: 23.656
- type: mrr_at_1000
value: 23.752000000000002
- type: mrr_at_3
value: 19.933
- type: mrr_at_5
value: 21.418
- type: ndcg_at_1
value: 14.6
- type: ndcg_at_10
value: 12.357999999999999
- type: ndcg_at_100
value: 18.071
- type: ndcg_at_1000
value: 22.658
- type: ndcg_at_3
value: 11.78
- type: ndcg_at_5
value: 10.377
- type: precision_at_1
value: 14.6
- type: precision_at_10
value: 6.34
- type: precision_at_100
value: 1.462
- type: precision_at_1000
value: 0.257
- type: precision_at_3
value: 10.9
- type: precision_at_5
value: 9.08
- type: recall_at_1
value: 2.973
- type: recall_at_10
value: 12.876999999999999
- type: recall_at_100
value: 29.732999999999997
- type: recall_at_1000
value: 52.25300000000001
- type: recall_at_3
value: 6.638
- type: recall_at_5
value: 9.223
- type: map_at_1
value: 0.149
- type: map_at_10
value: 0.808
- type: map_at_100
value: 4.284000000000001
- type: map_at_1000
value: 11.209
- type: map_at_3
value: 0.334
- type: map_at_5
value: 0.461
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 67.533
- type: mrr_at_100
value: 68.00699999999999
- type: mrr_at_1000
value: 68.00699999999999
- type: mrr_at_3
value: 66.0
- type: mrr_at_5
value: 67.2
- type: ndcg_at_1
value: 50.0
- type: ndcg_at_10
value: 44.132
- type: ndcg_at_100
value: 33.656000000000006
- type: ndcg_at_1000
value: 31.062
- type: ndcg_at_3
value: 46.939
- type: ndcg_at_5
value: 44.299
- type: precision_at_1
value: 56.00000000000001
- type: precision_at_10
value: 47.0
- type: precision_at_100
value: 35.56
- type: precision_at_1000
value: 15.160000000000002
- type: precision_at_3
value: 51.333
- type: precision_at_5
value: 46.800000000000004
- type: recall_at_1
value: 0.149
- type: recall_at_10
value: 1.081
- type: recall_at_100
value: 7.614999999999999
- type: recall_at_1000
value: 30.381999999999998
- type: recall_at_3
value: 0.384
- type: recall_at_5
value: 0.5519999999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: None
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 31.5539671147009
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: None
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 39.054096619731716
- task:
type: STS
dataset:
name: MTEB SICK-R
type: None
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 74.16263342018726
- type: cos_sim_spearman
value: 65.47896874175562
- type: euclidean_pearson
value: 69.85793726549834
- type: euclidean_spearman
value: 65.4790013428057
- type: manhattan_pearson
value: 64.55568963713883
- type: manhattan_spearman
value: 61.568825078510024
- task:
type: STS
dataset:
name: MTEB STS12
type: None
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 66.8016903690634
- type: cos_sim_spearman
value: 59.14548944908871
- type: euclidean_pearson
value: 63.51433073183812
- type: euclidean_spearman
value: 59.1468815981049
- type: manhattan_pearson
value: 66.7777786631213
- type: manhattan_spearman
value: 62.983103811799964
- task:
type: STS
dataset:
name: MTEB STS13
type: None
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 72.28649661642542
- type: cos_sim_spearman
value: 74.15608835313905
- type: euclidean_pearson
value: 73.82320584523798
- type: euclidean_spearman
value: 74.15612619641261
- type: manhattan_pearson
value: 75.8258411016566
- type: manhattan_spearman
value: 76.30803708387923
- task:
type: STS
dataset:
name: MTEB STS14
type: None
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 73.43505716930555
- type: cos_sim_spearman
value: 70.62852856991695
- type: euclidean_pearson
value: 72.89910614146251
- type: euclidean_spearman
value: 70.62851888356735
- type: manhattan_pearson
value: 71.9214204068933
- type: manhattan_spearman
value: 70.57115043790483
- task:
type: STS
dataset:
name: MTEB STS15
type: None
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 76.28414453948841
- type: cos_sim_spearman
value: 77.64236034865235
- type: euclidean_pearson
value: 77.82436104070885
- type: euclidean_spearman
value: 77.64235891658193
- type: manhattan_pearson
value: 78.02479262236683
- type: manhattan_spearman
value: 78.41394470741825
- task:
type: STS
dataset:
name: MTEB STS16
type: None
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 74.80262034437214
- type: cos_sim_spearman
value: 75.61749265386518
- type: euclidean_pearson
value: 75.24995074046139
- type: euclidean_spearman
value: 75.61748558266487
- type: manhattan_pearson
value: 72.8680664793595
- type: manhattan_spearman
value: 72.64465541572571
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: None
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.77228924820331
- type: cos_sim_spearman
value: 84.45133365648942
- type: euclidean_pearson
value: 83.65508313764323
- type: euclidean_spearman
value: 84.45220756360993
- type: manhattan_pearson
value: 79.31968289045513
- type: manhattan_spearman
value: 79.95192251749855
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: None
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 52.345708620762984
- type: cos_sim_spearman
value: 57.64880367305184
- type: euclidean_pearson
value: 56.52639850051479
- type: euclidean_spearman
value: 57.64880367305184
- type: manhattan_pearson
value: 59.969265121930434
- type: manhattan_spearman
value: 59.099517496575984
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: None
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 75.5029813815189
- type: cos_sim_spearman
value: 73.8791301142112
- type: euclidean_pearson
value: 75.44794524705392
- type: euclidean_spearman
value: 73.87914858448825
- type: manhattan_pearson
value: 71.2741911732711
- type: manhattan_spearman
value: 69.5920899359239
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: None
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 69.98974452315024
- type: mrr
value: 90.37764250999545
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: None
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 32.667
- type: map_at_10
value: 41.303
- type: map_at_100
value: 42.292
- type: map_at_1000
value: 42.372
- type: map_at_3
value: 39.013999999999996
- type: map_at_5
value: 40.469
- type: mrr_at_1
value: 34.666999999999994
- type: mrr_at_10
value: 43.187
- type: mrr_at_100
value: 44.028
- type: mrr_at_1000
value: 44.092
- type: mrr_at_3
value: 41.111
- type: mrr_at_5
value: 42.428
- type: ndcg_at_1
value: 34.666999999999994
- type: ndcg_at_10
value: 45.94
- type: ndcg_at_100
value: 50.67
- type: ndcg_at_1000
value: 52.654999999999994
- type: ndcg_at_3
value: 41.571999999999996
- type: ndcg_at_5
value: 43.998
- type: precision_at_1
value: 34.666999999999994
- type: precision_at_10
value: 6.367000000000001
- type: precision_at_100
value: 0.907
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 16.889000000000003
- type: precision_at_5
value: 11.4
- type: recall_at_1
value: 32.667
- type: recall_at_10
value: 58.556
- type: recall_at_100
value: 80.122
- type: recall_at_1000
value: 95.517
- type: recall_at_3
value: 46.778
- type: recall_at_5
value: 52.722
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: None
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.64455445544554
- type: cos_sim_ap
value: 87.59975053126642
- type: cos_sim_f1
value: 81.30325814536342
- type: cos_sim_precision
value: 81.50753768844221
- type: cos_sim_recall
value: 81.10000000000001
- type: dot_accuracy
value: 99.64455445544554
- type: dot_ap
value: 87.59975053126642
- type: dot_f1
value: 81.30325814536342
- type: dot_precision
value: 81.50753768844221
- type: dot_recall
value: 81.10000000000001
- type: euclidean_accuracy
value: 99.64455445544554
- type: euclidean_ap
value: 87.59975053126642
- type: euclidean_f1
value: 81.30325814536342
- type: euclidean_precision
value: 81.50753768844221
- type: euclidean_recall
value: 81.10000000000001
- type: manhattan_accuracy
value: 99.75544554455446
- type: manhattan_ap
value: 92.79867201777404
- type: manhattan_f1
value: 87.35279057859702
- type: manhattan_precision
value: 89.50682056663169
- type: manhattan_recall
value: 85.3
- type: max_accuracy
value: 99.75544554455446
- type: max_ap
value: 92.79867201777404
- type: max_f1
value: 87.35279057859702
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: None
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 38.67507463066139
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: None
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 27.940405218037796
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: None
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 42.17856950522057
- type: mrr
value: 42.73857270180799
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: None
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.824946271899556
- type: cos_sim_spearman
value: 30.19370307268101
- type: dot_pearson
value: 30.824946248457795
- type: dot_spearman
value: 30.239461621041393
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: None
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.138
- type: map_at_10
value: 6.84
- type: map_at_100
value: 11.829
- type: map_at_1000
value: 13.431999999999999
- type: map_at_3
value: 3.4299999999999997
- type: map_at_5
value: 5.1450000000000005
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 40.96
- type: mrr_at_100
value: 42.259
- type: mrr_at_1000
value: 42.262
- type: mrr_at_3
value: 37.075
- type: mrr_at_5
value: 39.728
- type: ndcg_at_1
value: 27.551
- type: ndcg_at_10
value: 19.358
- type: ndcg_at_100
value: 32.036
- type: ndcg_at_1000
value: 43.552
- type: ndcg_at_3
value: 20.689
- type: ndcg_at_5
value: 21.308
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 17.755000000000003
- type: precision_at_100
value: 7.449
- type: precision_at_1000
value: 1.478
- type: precision_at_3
value: 21.088
- type: precision_at_5
value: 22.448999999999998
- type: recall_at_1
value: 2.138
- type: recall_at_10
value: 11.958
- type: recall_at_100
value: 44.659
- type: recall_at_1000
value: 80.16499999999999
- type: recall_at_3
value: 4.295
- type: recall_at_5
value: 7.701
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: None
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.4548
- type: ap
value: 14.903569291537838
- type: f1
value: 54.76990636936144
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: None
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 58.678551216751565
- type: f1
value: 58.927198725369465
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: None
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 33.90243227502835
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: None
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.88269654884664
- type: cos_sim_ap
value: 65.74743802769225
- type: cos_sim_f1
value: 63.409891845910806
- type: cos_sim_precision
value: 58.774498760982205
- type: cos_sim_recall
value: 68.83905013192611
- type: dot_accuracy
value: 83.88269654884664
- type: dot_ap
value: 65.74743802769225
- type: dot_f1
value: 63.409891845910806
- type: dot_precision
value: 58.774498760982205
- type: dot_recall
value: 68.83905013192611
- type: euclidean_accuracy
value: 83.88269654884664
- type: euclidean_ap
value: 65.74743802769225
- type: euclidean_f1
value: 63.409891845910806
- type: euclidean_precision
value: 58.774498760982205
- type: euclidean_recall
value: 68.83905013192611
- type: manhattan_accuracy
value: 81.29582166060678
- type: manhattan_ap
value: 57.475373048925846
- type: manhattan_f1
value: 55.90868397493285
- type: manhattan_precision
value: 48.54255732607851
- type: manhattan_recall
value: 65.91029023746702
- type: max_accuracy
value: 83.88269654884664
- type: max_ap
value: 65.74743802769225
- type: max_f1
value: 63.409891845910806
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: None
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.36174176271976
- type: cos_sim_ap
value: 82.35619973560178
- type: cos_sim_f1
value: 75.05234819024828
- type: cos_sim_precision
value: 72.95725501599301
- type: cos_sim_recall
value: 77.27132737911919
- type: dot_accuracy
value: 87.36174176271976
- type: dot_ap
value: 82.35619970731375
- type: dot_f1
value: 75.05234819024828
- type: dot_precision
value: 72.95725501599301
- type: dot_recall
value: 77.27132737911919
- type: euclidean_accuracy
value: 87.36174176271976
- type: euclidean_ap
value: 82.35619955418572
- type: euclidean_f1
value: 75.05234819024828
- type: euclidean_precision
value: 72.95725501599301
- type: euclidean_recall
value: 77.27132737911919
- type: manhattan_accuracy
value: 87.25889703884813
- type: manhattan_ap
value: 81.85878727777866
- type: manhattan_f1
value: 74.52758333031083
- type: manhattan_precision
value: 70.45189604333814
- type: manhattan_recall
value: 79.10378811210347
- type: max_accuracy
value: 87.36174176271976
- type: max_ap
value: 82.35619973560178
- type: max_f1
value: 75.05234819024828
---
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Triangle104/MN-Chunky-Lotus-12B-Q6_K-GGUF | Triangle104 | null | [
"transformers",
"gguf",
"storywriting",
"text adventure",
"creative",
"story",
"writing",
"fiction",
"roleplaying",
"rp",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:FallenMerick/MN-Chunky-Lotus-12B",
"base_model:quantized:FallenMerick/MN-Chunky-Lotus-12B",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | 1,732,107,422,000 | 2024-11-20T12:58:32 | 26 | 1 | ---
base_model: FallenMerick/MN-Chunky-Lotus-12B
language:
- en
library_name: transformers
license: cc-by-4.0
tags:
- storywriting
- text adventure
- creative
- story
- writing
- fiction
- roleplaying
- rp
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/MN-Chunky-Lotus-12B-Q6_K-GGUF
This model was converted to GGUF format from [`FallenMerick/MN-Chunky-Lotus-12B`](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FallenMerick/MN-Chunky-Lotus-12B) for more details on the model.
---
Model details:
-
I had originally planned to use this model for future/further merges, but decided to go ahead and release it since it scored rather high on my local EQ Bench testing (79.58 w/ 100% parsed @ 8-bit).
Bear in mind that most models tend to score a bit higher on my own local tests as compared to their posted scores. Still, it's the highest score I've personally seen from all the models I've tested.
It's a decent model, with great emotional intelligence and acceptable adherence to various character personalities. It does a good job at roleplaying despite being a bit bland at times.
Overall, I like the way it writes, but it has a few formatting issues that show up from time to time, and it has an uncommon tendency to paste walls of character feelings/intentions at the end of some outputs without any prompting. This is something I hope to correct with future iterations.
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the TIES merge method.
Models Merged
-
The following models were included in the merge:
* Epiculous/Violet_Twilight-v0.2
* nbeerbower/mistral-nemo-gutenberg-12B-v4
* flammenai/Mahou-1.5-mistral-nemo-12B
Configuration
-
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Epiculous/Violet_Twilight-v0.2
    parameters:
      weight: 1.0
      density: 1.0
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v4
    parameters:
      weight: 1.0
      density: 0.54
  - model: flammenai/Mahou-1.5-mistral-nemo-12B
    parameters:
      weight: 1.0
      density: 0.26
merge_method: ties
base_model: TheDrummer/Rocinante-12B-v1.1
parameters:
  normalize: true
dtype: bfloat16
```
The idea behind this recipe was to take the long-form writing capabilities of Gutenberg, curtail them a bit with the very short output formatting of Mahou, and use Violet Twilight as an extremely solid roleplaying foundation underneath.
Rocinante is used as the base model in this merge in order to really target the delta weights from Gutenberg, since those seemed to have the highest impact on the resulting EQ of the model.
Special shoutout to @matchaaaaa for helping with testing, and for all the great model recommendations. Also, for just being an all around great person who's really inspired and motivated me to continue merging and working on models.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q6_K-GGUF --hf-file mn-chunky-lotus-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q6_K-GGUF --hf-file mn-chunky-lotus-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q6_K-GGUF --hf-file mn-chunky-lotus-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/MN-Chunky-Lotus-12B-Q6_K-GGUF --hf-file mn-chunky-lotus-12b-q6_k.gguf -c 2048
```
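As an alternative to the CLI, the same GGUF file can be loaded from Python with `llama-cpp-python`. This is a sketch under stated assumptions: it assumes `pip install llama-cpp-python` and that the Q6_K file has already been downloaded locally.

```python
from llama_cpp import Llama

# Sketch: load the quantized model and run a short completion.
llm = Llama(
    model_path="mn-chunky-lotus-12b-q6_k.gguf",  # path to the downloaded GGUF file
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```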
| [
"BEAR"
] | Non_BioNLP |
andreabac3/Fauno-Italian-LLM-7B | andreabac3 | null | [
"large language model",
"italian large language model",
"baize",
"llama ",
"italian",
"it",
"en",
"dataset:andreabac3/MedQuaAD-Italian-Fauno-Baize",
"dataset:andreabac3/StackOverflow-Italian-Fauno-Baize",
"dataset:andreabac3/Quora-Italian-Fauno-Baize",
"dataset:teelinsan/camoscio_cleaned",
"license:gpl-3.0",
"region:us"
] | 1,680,861,100,000 | 2023-07-12T06:11:53 | 0 | 37 | ---
datasets:
- andreabac3/MedQuaAD-Italian-Fauno-Baize
- andreabac3/StackOverflow-Italian-Fauno-Baize
- andreabac3/Quora-Italian-Fauno-Baize
- teelinsan/camoscio_cleaned
language:
- it
- en
license: gpl-3.0
tags:
- large language model
- italian large language model
- baize
- 'llama '
- italian
---
# Fauno - Italian LLM

Get ready to meet Fauno - the Italian language model crafted by the [RSTLess Research Group](https://rstless-lab.netlify.app/) from the Sapienza University of Rome.
The talented research team behind Fauno includes [Andrea Bacciu](https://andreabac3.github.io/), [Dr. Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and [Professor Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
Fauno represents a cutting-edge development in open-source Italian Large Language Modeling. It's trained on extensive Italian synthetic datasets, encompassing a wide range of fields such as medical data 🩺, technical content from Stack Overflow 💻, Quora discussions 💬, and Alpaca data 🦙 translated into Italian.
Hence, our model is able to answer your questions in Italian 🙋, fix your buggy code 🐛, and understand a bit of medical literature 💊.
## The 🇮🇹 open-source version of chatGPT!
Discover the capabilities of Fauno and experience the evolution of Italian language models for yourself.

### Why Fauno?
We started with a model called Baize, named after a legendary creature from Chinese literature. Continuing along this thematic line, we developed our Italian model based on Baize and named it Fauno, inspired by an iconic figure from Roman mythology. This choice underlines the link between the two models, while maintaining a distinctive identity rooted in Italian culture.
# Did you know that you can run Fauno on Colab base?
Follow this link to access a Colab notebook with our 7B version! <a target="_blank" href="https://colab.research.google.com/drive/1AepJVWS-qU910zyq-Zi7wWFQ5tthVzUe">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## 🔎 Model's details
Fauno is a fine-tuned version of the LoRA weights of [Baize](https://github.com/project-baize/baize-chatbot), which is in turn an improved version of [LLaMA](https://github.com/facebookresearch/llama).
We translated and cleaned Baize's data, and then fine-tuned the 7B model on a single RTX A6000 (48GB of VRAM) in 19 hours for one epoch.
- 13B: https://huggingface.co/andreabac3/Fauno-Italian-LLM-13B
Fauno 30B and 65B are coming soon!
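To make the LoRA setup more concrete, below is a minimal sketch of how a LoRA adapter for a LLaMA-style 7B checkpoint can be configured with the `peft` library. The rank, alpha, dropout, and target modules shown are illustrative assumptions, not the exact hyperparameters used to train Fauno.
```python
# Illustrative LoRA configuration for a LLaMA-7B base model
# (hyperparameters are assumptions, not the exact Fauno training recipe).
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model
base_model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
lora_config = LoraConfig(
r=8, # adapter rank (assumed)
lora_alpha=16, # scaling factor (assumed)
target_modules=["q_proj", "v_proj"], # attention projections (assumed)
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters() # only the small adapter is trained
```
Only the adapter weights are updated during fine-tuning, which is what keeps the memory footprint small enough for a single 48GB GPU.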
## Model initialization
```python
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
from peft import PeftModel
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
"decapoda-research/llama-7b-hf",
load_in_8bit=True,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "andreabac3/Fauno-Italian-LLM-7B")
model.eval()
```
## Inference
```python
# Default decoding settings; adjust temperature/top_p as needed
generation_config = GenerationConfig()
def evaluate(question: str) -> str:
    prompt = f"The conversation between human and AI assistant.\n[|Human|] {question}.\n[|AI|] "
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
    )
    # Keep only the assistant's reply after the [|AI|] marker
    output = tokenizer.decode(generation_output.sequences[0]).split("[|AI|]")[1]
    return output
your_question: str = "Qual'è il significato della vita?"
print(evaluate(your_question))
```
### Output
```
Il senso della vita è una domanda che molte persone hanno cercato di rispondere per secoli.
Alla fine, il senso della vita è soggettivo e varia da persona a persona.
Alcune persone credono che il senso della vita sia trovare la felicità, mentre altre credono che sia raggiungere i propri obiettivi o aiutare gli altri.
Alla fine, il senso della vita è determinato dall'individuo e dai loro valori e credenze.
In definitiva, il senso della vita è qualcosa che ognuno deve trovare da solo.
Non c'è una risposta giusta o sbagliata, poiché ogni persona ha le proprie convinzioni e credenze.
La ricerca del senso della vita può essere un viaggio lungo e difficile, ma vale la pena perseguire.
```
# 📖 Cite our work
To use our translated dataset and model weights in your research, remember to cite our work.
```bibtex
@misc{fauno,
  author = {Andrea Bacciu and Giovanni Trappolini and Andrea Santilli and Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
## 🔑 License
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by both Baize's creators and the authors of LLaMA.
## ⚠️ Hallucinations
It is important to remark that current-generation models are prone to hallucinations, so we advise you not to take their answers at face value.
## 👏 Acknowledgement
- LLama - Meta AI: https://github.com/facebookresearch/llama
- Baize: https://github.com/project-baize/baize-chatbot
- Stanford Alpaca: https://github.com/tatsu-lab/stanford_alpaca
- Camoscio: https://github.com/teelinsan/camoscio
#### Image Credits
- llama image: https://next14.com/en/nextnews-7-march-a-new-language-model-for-meta-bing-ai-on-windows-and-the-first-tokenized-real-estate-sales/
- Fauno logo: https://www.flaticon.com/free-icon/faun_7931635?term=faun&page=1&position=1&origin=tag&related_id=7931635 | [
"MEDICAL DATA"
] | Non_BioNLP |
buio/Fauno-Italian-LLM-7B | buio | null | [
"large language model",
"italian large language model",
"baize",
"llama ",
"italian",
"it",
"en",
"dataset:andreabac3/MedQuaAD-Italian-Fauno-Baize",
"dataset:andreabac3/StackOverflow-Italian-Fauno-Baize",
"dataset:andreabac3/Quora-Italian-Fauno-Baize",
"dataset:teelinsan/camoscio_cleaned",
"license:gpl-3.0",
"region:us"
] | 1,707,844,716,000 | 2024-02-13T17:24:48 | 0 | 0 | ---
datasets:
- andreabac3/MedQuaAD-Italian-Fauno-Baize
- andreabac3/StackOverflow-Italian-Fauno-Baize
- andreabac3/Quora-Italian-Fauno-Baize
- teelinsan/camoscio_cleaned
language:
- it
- en
license: gpl-3.0
tags:
- large language model
- italian large language model
- baize
- 'llama '
- italian
---
# Fauno - Italian LLM

Get ready to meet Fauno - the Italian language model crafted by the [RSTLess Research Group](https://rstless-lab.netlify.app/) from the Sapienza University of Rome.
The talented research team behind Fauno includes [Andrea Bacciu](https://andreabac3.github.io/), [Dr. Giovanni Trappolini](https://sites.google.com/view/giovannitrappolini), [Andrea Santilli](https://www.santilli.xyz/), and [Professor Fabrizio Silvestri](https://sites.google.com/diag.uniroma1.it/fabriziosilvestri/home).
Fauno represents a cutting-edge development in open-source Italian Large Language Modeling. It's trained on extensive Italian synthetic datasets, encompassing a wide range of fields such as medical data 🩺, technical content from Stack Overflow 💻, Quora discussions 💬, and Alpaca data 🦙 translated into Italian.
Hence, our model is able to answer your questions in Italian 🙋, fix your buggy code 🐛, and understand a minimum of medical literature 💊.
## The 🇮🇹 open-source version of chatGPT!
Discover the capabilities of Fauno and experience the evolution of Italian language models for yourself.

### Why Fauno?
We started with a model called Baize, named after a legendary creature from Chinese literature. Continuing along this thematic line, we developed our Italian model based on Baize and named it Fauno, inspired by an iconic figure from Roman mythology. This choice underlines the link between the two models, while maintaining a distinctive identity rooted in Italian culture.
# Did you know that you can run Fauno on a base Colab instance?
Follow this link to access a Colab notebook with our 7B version! <a target="_blank" href="https://colab.research.google.com/drive/1AepJVWS-qU910zyq-Zi7wWFQ5tthVzUe">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## 🔎 Model's details
Fauno is a fine-tuned version of the LoRA weights of [Baize](https://github.com/project-baize/baize-chatbot), which is in turn an improved version of [LLaMA](https://github.com/facebookresearch/llama).
We translated and cleaned Baize's data, and then fine-tuned the 7B model on a single RTX A6000 (48GB of VRAM) in 19 hours for one epoch.
- 13B: https://huggingface.co/andreabac3/Fauno-Italian-LLM-13B
Fauno 30B and 65B are coming soon!
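To make the LoRA setup more concrete, below is a minimal sketch of how a LoRA adapter for a LLaMA-style 7B checkpoint can be configured with the `peft` library. The rank, alpha, dropout, and target modules shown are illustrative assumptions, not the exact hyperparameters used to train Fauno.
```python
# Illustrative LoRA configuration for a LLaMA-7B base model
# (hyperparameters are assumptions, not the exact Fauno training recipe).
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model
base_model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
lora_config = LoraConfig(
r=8, # adapter rank (assumed)
lora_alpha=16, # scaling factor (assumed)
target_modules=["q_proj", "v_proj"], # attention projections (assumed)
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters() # only the small adapter is trained
```
Only the adapter weights are updated during fine-tuning, which is what keeps the memory footprint small enough for a single 48GB GPU.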
## Model initialization
```python
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
from peft import PeftModel
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
"decapoda-research/llama-7b-hf",
load_in_8bit=True,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "andreabac3/Fauno-Italian-LLM-7B")
model.eval()
```
## Inference
```python
# Default decoding settings; adjust temperature/top_p as needed
generation_config = GenerationConfig()
def evaluate(question: str) -> str:
    prompt = f"The conversation between human and AI assistant.\n[|Human|] {question}.\n[|AI|] "
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
    )
    # Keep only the assistant's reply after the [|AI|] marker
    output = tokenizer.decode(generation_output.sequences[0]).split("[|AI|]")[1]
    return output
your_question: str = "Qual'è il significato della vita?"
print(evaluate(your_question))
```
### Output
```
Il senso della vita è una domanda che molte persone hanno cercato di rispondere per secoli.
Alla fine, il senso della vita è soggettivo e varia da persona a persona.
Alcune persone credono che il senso della vita sia trovare la felicità, mentre altre credono che sia raggiungere i propri obiettivi o aiutare gli altri.
Alla fine, il senso della vita è determinato dall'individuo e dai loro valori e credenze.
In definitiva, il senso della vita è qualcosa che ognuno deve trovare da solo.
Non c'è una risposta giusta o sbagliata, poiché ogni persona ha le proprie convinzioni e credenze.
La ricerca del senso della vita può essere un viaggio lungo e difficile, ma vale la pena perseguire.
```
# 📖 Cite our work
To use our translated dataset and model weights in your research, remember to cite our work.
```bibtex
@misc{fauno,
  author = {Andrea Bacciu and Giovanni Trappolini and Andrea Santilli and Fabrizio Silvestri},
title = {Fauno: The Italian Large Language Model that will leave you senza parole!},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/andreabac3/Fauno-Italian-LLM}},
}
```
## 🔑 License
This project is a derivative of Baize, and we adhere to the licensing constraints imposed by both Baize's creators and the authors of LLaMA.
## ⚠️ Hallucinations
It is important to remark that current-generation models are prone to hallucinations, so we advise you not to take their answers at face value.
## 👏 Acknowledgement
- LLama - Meta AI: https://github.com/facebookresearch/llama
- Baize: https://github.com/project-baize/baize-chatbot
- Stanford Alpaca: https://github.com/tatsu-lab/stanford_alpaca
- Camoscio: https://github.com/teelinsan/camoscio
#### Image Credits
- llama image: https://next14.com/en/nextnews-7-march-a-new-language-model-for-meta-bing-ai-on-windows-and-the-first-tokenized-real-estate-sales/
- Fauno logo: https://www.flaticon.com/free-icon/faun_7931635?term=faun&page=1&position=1&origin=tag&related_id=7931635 | [
"MEDICAL DATA"
] | Non_BioNLP |
vectoriseai/bge-small-en | vectoriseai | sentence-similarity | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"mteb",
"sentence transformers",
"sentence-similarity",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,692,968,402,000 | 2023-08-28T14:17:25 | 18 | 0 | ---
language:
- en
library_name: sentence-transformers
license: mit
pipeline_tag: sentence-similarity
tags:
- mteb
- sentence transformers
model-index:
- name: bge-small-en
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.34328358208955
- type: ap
value: 37.59947775195661
- type: f1
value: 68.548415491933
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.04527499999999
- type: ap
value: 89.60696356772135
- type: f1
value: 93.03361469382438
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.08
- type: f1
value: 45.66249835363254
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.205999999999996
- type: map_at_10
value: 50.782000000000004
- type: map_at_100
value: 51.547
- type: map_at_1000
value: 51.554
- type: map_at_3
value: 46.515
- type: map_at_5
value: 49.296
- type: mrr_at_1
value: 35.632999999999996
- type: mrr_at_10
value: 50.958999999999996
- type: mrr_at_100
value: 51.724000000000004
- type: mrr_at_1000
value: 51.731
- type: mrr_at_3
value: 46.669
- type: mrr_at_5
value: 49.439
- type: ndcg_at_1
value: 35.205999999999996
- type: ndcg_at_10
value: 58.835
- type: ndcg_at_100
value: 62.095
- type: ndcg_at_1000
value: 62.255
- type: ndcg_at_3
value: 50.255
- type: ndcg_at_5
value: 55.296
- type: precision_at_1
value: 35.205999999999996
- type: precision_at_10
value: 8.421
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.365
- type: precision_at_5
value: 14.680000000000001
- type: recall_at_1
value: 35.205999999999996
- type: recall_at_10
value: 84.211
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 61.095
- type: recall_at_5
value: 73.4
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.52644476278646
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.973045724188964
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.28285314871488
- type: mrr
value: 74.52743701358659
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 80.09041909160327
- type: cos_sim_spearman
value: 79.96266537706944
- type: euclidean_pearson
value: 79.50774978162241
- type: euclidean_spearman
value: 79.9144715078551
- type: manhattan_pearson
value: 79.2062139879302
- type: manhattan_spearman
value: 79.35000081468212
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.31493506493506
- type: f1
value: 85.2704557977762
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.6837242810816
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.38881249555897
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.884999999999998
- type: map_at_10
value: 39.574
- type: map_at_100
value: 40.993
- type: map_at_1000
value: 41.129
- type: map_at_3
value: 36.089
- type: map_at_5
value: 38.191
- type: mrr_at_1
value: 34.477999999999994
- type: mrr_at_10
value: 45.411
- type: mrr_at_100
value: 46.089999999999996
- type: mrr_at_1000
value: 46.147
- type: mrr_at_3
value: 42.346000000000004
- type: mrr_at_5
value: 44.292
- type: ndcg_at_1
value: 34.477999999999994
- type: ndcg_at_10
value: 46.123999999999995
- type: ndcg_at_100
value: 51.349999999999994
- type: ndcg_at_1000
value: 53.578
- type: ndcg_at_3
value: 40.824
- type: ndcg_at_5
value: 43.571
- type: precision_at_1
value: 34.477999999999994
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 1.4460000000000002
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 19.742
- type: precision_at_5
value: 14.421000000000001
- type: recall_at_1
value: 27.884999999999998
- type: recall_at_10
value: 59.087
- type: recall_at_100
value: 80.609
- type: recall_at_1000
value: 95.054
- type: recall_at_3
value: 44.082
- type: recall_at_5
value: 51.593999999999994
- type: map_at_1
value: 30.639
- type: map_at_10
value: 40.047
- type: map_at_100
value: 41.302
- type: map_at_1000
value: 41.425
- type: map_at_3
value: 37.406
- type: map_at_5
value: 38.934000000000005
- type: mrr_at_1
value: 37.707
- type: mrr_at_10
value: 46.082
- type: mrr_at_100
value: 46.745
- type: mrr_at_1000
value: 46.786
- type: mrr_at_3
value: 43.980999999999995
- type: mrr_at_5
value: 45.287
- type: ndcg_at_1
value: 37.707
- type: ndcg_at_10
value: 45.525
- type: ndcg_at_100
value: 49.976
- type: ndcg_at_1000
value: 51.94499999999999
- type: ndcg_at_3
value: 41.704
- type: ndcg_at_5
value: 43.596000000000004
- type: precision_at_1
value: 37.707
- type: precision_at_10
value: 8.465
- type: precision_at_100
value: 1.375
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 19.979
- type: precision_at_5
value: 14.115
- type: recall_at_1
value: 30.639
- type: recall_at_10
value: 54.775
- type: recall_at_100
value: 73.678
- type: recall_at_1000
value: 86.142
- type: recall_at_3
value: 43.230000000000004
- type: recall_at_5
value: 48.622
- type: map_at_1
value: 38.038
- type: map_at_10
value: 49.922
- type: map_at_100
value: 51.032
- type: map_at_1000
value: 51.085
- type: map_at_3
value: 46.664
- type: map_at_5
value: 48.588
- type: mrr_at_1
value: 43.95
- type: mrr_at_10
value: 53.566
- type: mrr_at_100
value: 54.318999999999996
- type: mrr_at_1000
value: 54.348
- type: mrr_at_3
value: 51.066
- type: mrr_at_5
value: 52.649
- type: ndcg_at_1
value: 43.95
- type: ndcg_at_10
value: 55.676
- type: ndcg_at_100
value: 60.126000000000005
- type: ndcg_at_1000
value: 61.208
- type: ndcg_at_3
value: 50.20400000000001
- type: ndcg_at_5
value: 53.038
- type: precision_at_1
value: 43.95
- type: precision_at_10
value: 8.953
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.256999999999998
- type: precision_at_5
value: 15.524
- type: recall_at_1
value: 38.038
- type: recall_at_10
value: 69.15
- type: recall_at_100
value: 88.31599999999999
- type: recall_at_1000
value: 95.993
- type: recall_at_3
value: 54.663
- type: recall_at_5
value: 61.373
- type: map_at_1
value: 24.872
- type: map_at_10
value: 32.912
- type: map_at_100
value: 33.972
- type: map_at_1000
value: 34.046
- type: map_at_3
value: 30.361
- type: map_at_5
value: 31.704
- type: mrr_at_1
value: 26.779999999999998
- type: mrr_at_10
value: 34.812
- type: mrr_at_100
value: 35.754999999999995
- type: mrr_at_1000
value: 35.809000000000005
- type: mrr_at_3
value: 32.335
- type: mrr_at_5
value: 33.64
- type: ndcg_at_1
value: 26.779999999999998
- type: ndcg_at_10
value: 37.623
- type: ndcg_at_100
value: 42.924
- type: ndcg_at_1000
value: 44.856
- type: ndcg_at_3
value: 32.574
- type: ndcg_at_5
value: 34.842
- type: precision_at_1
value: 26.779999999999998
- type: precision_at_10
value: 5.729
- type: precision_at_100
value: 0.886
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.559
- type: precision_at_5
value: 9.469
- type: recall_at_1
value: 24.872
- type: recall_at_10
value: 50.400999999999996
- type: recall_at_100
value: 74.954
- type: recall_at_1000
value: 89.56
- type: recall_at_3
value: 36.726
- type: recall_at_5
value: 42.138999999999996
- type: map_at_1
value: 16.803
- type: map_at_10
value: 24.348
- type: map_at_100
value: 25.56
- type: map_at_1000
value: 25.668000000000003
- type: map_at_3
value: 21.811
- type: map_at_5
value: 23.287
- type: mrr_at_1
value: 20.771
- type: mrr_at_10
value: 28.961
- type: mrr_at_100
value: 29.979
- type: mrr_at_1000
value: 30.046
- type: mrr_at_3
value: 26.555
- type: mrr_at_5
value: 28.060000000000002
- type: ndcg_at_1
value: 20.771
- type: ndcg_at_10
value: 29.335
- type: ndcg_at_100
value: 35.188
- type: ndcg_at_1000
value: 37.812
- type: ndcg_at_3
value: 24.83
- type: ndcg_at_5
value: 27.119
- type: precision_at_1
value: 20.771
- type: precision_at_10
value: 5.4350000000000005
- type: precision_at_100
value: 0.9480000000000001
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.982
- type: precision_at_5
value: 8.831
- type: recall_at_1
value: 16.803
- type: recall_at_10
value: 40.039
- type: recall_at_100
value: 65.83200000000001
- type: recall_at_1000
value: 84.478
- type: recall_at_3
value: 27.682000000000002
- type: recall_at_5
value: 33.535
- type: map_at_1
value: 28.345
- type: map_at_10
value: 37.757000000000005
- type: map_at_100
value: 39.141
- type: map_at_1000
value: 39.262
- type: map_at_3
value: 35.183
- type: map_at_5
value: 36.592
- type: mrr_at_1
value: 34.649
- type: mrr_at_10
value: 43.586999999999996
- type: mrr_at_100
value: 44.481
- type: mrr_at_1000
value: 44.542
- type: mrr_at_3
value: 41.29
- type: mrr_at_5
value: 42.642
- type: ndcg_at_1
value: 34.649
- type: ndcg_at_10
value: 43.161
- type: ndcg_at_100
value: 48.734
- type: ndcg_at_1000
value: 51.046
- type: ndcg_at_3
value: 39.118
- type: ndcg_at_5
value: 41.022
- type: precision_at_1
value: 34.649
- type: precision_at_10
value: 7.603
- type: precision_at_100
value: 1.209
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 18.319
- type: precision_at_5
value: 12.839
- type: recall_at_1
value: 28.345
- type: recall_at_10
value: 53.367
- type: recall_at_100
value: 76.453
- type: recall_at_1000
value: 91.82000000000001
- type: recall_at_3
value: 41.636
- type: recall_at_5
value: 46.760000000000005
- type: map_at_1
value: 22.419
- type: map_at_10
value: 31.716
- type: map_at_100
value: 33.152
- type: map_at_1000
value: 33.267
- type: map_at_3
value: 28.74
- type: map_at_5
value: 30.48
- type: mrr_at_1
value: 28.310999999999996
- type: mrr_at_10
value: 37.039
- type: mrr_at_100
value: 38.09
- type: mrr_at_1000
value: 38.145
- type: mrr_at_3
value: 34.437
- type: mrr_at_5
value: 36.024
- type: ndcg_at_1
value: 28.310999999999996
- type: ndcg_at_10
value: 37.41
- type: ndcg_at_100
value: 43.647999999999996
- type: ndcg_at_1000
value: 46.007
- type: ndcg_at_3
value: 32.509
- type: ndcg_at_5
value: 34.943999999999996
- type: precision_at_1
value: 28.310999999999996
- type: precision_at_10
value: 6.963
- type: precision_at_100
value: 1.1860000000000002
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 15.867999999999999
- type: precision_at_5
value: 11.507000000000001
- type: recall_at_1
value: 22.419
- type: recall_at_10
value: 49.28
- type: recall_at_100
value: 75.802
- type: recall_at_1000
value: 92.032
- type: recall_at_3
value: 35.399
- type: recall_at_5
value: 42.027
- type: map_at_1
value: 24.669249999999998
- type: map_at_10
value: 33.332583333333325
- type: map_at_100
value: 34.557833333333335
- type: map_at_1000
value: 34.67141666666666
- type: map_at_3
value: 30.663166666666662
- type: map_at_5
value: 32.14883333333333
- type: mrr_at_1
value: 29.193833333333334
- type: mrr_at_10
value: 37.47625
- type: mrr_at_100
value: 38.3545
- type: mrr_at_1000
value: 38.413166666666676
- type: mrr_at_3
value: 35.06741666666667
- type: mrr_at_5
value: 36.450666666666656
- type: ndcg_at_1
value: 29.193833333333334
- type: ndcg_at_10
value: 38.505416666666676
- type: ndcg_at_100
value: 43.81125
- type: ndcg_at_1000
value: 46.09558333333333
- type: ndcg_at_3
value: 33.90916666666667
- type: ndcg_at_5
value: 36.07666666666666
- type: precision_at_1
value: 29.193833333333334
- type: precision_at_10
value: 6.7251666666666665
- type: precision_at_100
value: 1.1058333333333332
- type: precision_at_1000
value: 0.14833333333333332
- type: precision_at_3
value: 15.554166666666665
- type: precision_at_5
value: 11.079250000000002
- type: recall_at_1
value: 24.669249999999998
- type: recall_at_10
value: 49.75583333333332
- type: recall_at_100
value: 73.06908333333332
- type: recall_at_1000
value: 88.91316666666667
- type: recall_at_3
value: 36.913250000000005
- type: recall_at_5
value: 42.48641666666666
- type: map_at_1
value: 24.044999999999998
- type: map_at_10
value: 30.349999999999998
- type: map_at_100
value: 31.273
- type: map_at_1000
value: 31.362000000000002
- type: map_at_3
value: 28.508
- type: map_at_5
value: 29.369
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 33.12
- type: mrr_at_100
value: 33.904
- type: mrr_at_1000
value: 33.967000000000006
- type: mrr_at_3
value: 31.365
- type: mrr_at_5
value: 32.124
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 34.214
- type: ndcg_at_100
value: 38.681
- type: ndcg_at_1000
value: 40.926
- type: ndcg_at_3
value: 30.725
- type: ndcg_at_5
value: 31.967000000000002
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 5.215
- type: precision_at_100
value: 0.807
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 12.986
- type: precision_at_5
value: 8.712
- type: recall_at_1
value: 24.044999999999998
- type: recall_at_10
value: 43.456
- type: recall_at_100
value: 63.675000000000004
- type: recall_at_1000
value: 80.05499999999999
- type: recall_at_3
value: 33.561
- type: recall_at_5
value: 36.767
- type: map_at_1
value: 15.672
- type: map_at_10
value: 22.641
- type: map_at_100
value: 23.75
- type: map_at_1000
value: 23.877000000000002
- type: map_at_3
value: 20.219
- type: map_at_5
value: 21.648
- type: mrr_at_1
value: 18.823
- type: mrr_at_10
value: 26.101999999999997
- type: mrr_at_100
value: 27.038
- type: mrr_at_1000
value: 27.118
- type: mrr_at_3
value: 23.669
- type: mrr_at_5
value: 25.173000000000002
- type: ndcg_at_1
value: 18.823
- type: ndcg_at_10
value: 27.176000000000002
- type: ndcg_at_100
value: 32.42
- type: ndcg_at_1000
value: 35.413
- type: ndcg_at_3
value: 22.756999999999998
- type: ndcg_at_5
value: 25.032
- type: precision_at_1
value: 18.823
- type: precision_at_10
value: 5.034000000000001
- type: precision_at_100
value: 0.895
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 10.771
- type: precision_at_5
value: 8.1
- type: recall_at_1
value: 15.672
- type: recall_at_10
value: 37.296
- type: recall_at_100
value: 60.863
- type: recall_at_1000
value: 82.234
- type: recall_at_3
value: 25.330000000000002
- type: recall_at_5
value: 30.964000000000002
- type: map_at_1
value: 24.633
- type: map_at_10
value: 32.858
- type: map_at_100
value: 34.038000000000004
- type: map_at_1000
value: 34.141
- type: map_at_3
value: 30.209000000000003
- type: map_at_5
value: 31.567
- type: mrr_at_1
value: 28.358
- type: mrr_at_10
value: 36.433
- type: mrr_at_100
value: 37.352000000000004
- type: mrr_at_1000
value: 37.41
- type: mrr_at_3
value: 34.033
- type: mrr_at_5
value: 35.246
- type: ndcg_at_1
value: 28.358
- type: ndcg_at_10
value: 37.973
- type: ndcg_at_100
value: 43.411
- type: ndcg_at_1000
value: 45.747
- type: ndcg_at_3
value: 32.934999999999995
- type: ndcg_at_5
value: 35.013
- type: precision_at_1
value: 28.358
- type: precision_at_10
value: 6.418
- type: precision_at_100
value: 1.02
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 14.677000000000001
- type: precision_at_5
value: 10.335999999999999
- type: recall_at_1
value: 24.633
- type: recall_at_10
value: 50.048
- type: recall_at_100
value: 73.821
- type: recall_at_1000
value: 90.046
- type: recall_at_3
value: 36.284
- type: recall_at_5
value: 41.370000000000005
- type: map_at_1
value: 23.133
- type: map_at_10
value: 31.491999999999997
- type: map_at_100
value: 33.062000000000005
- type: map_at_1000
value: 33.256
- type: map_at_3
value: 28.886
- type: map_at_5
value: 30.262
- type: mrr_at_1
value: 28.063
- type: mrr_at_10
value: 36.144
- type: mrr_at_100
value: 37.14
- type: mrr_at_1000
value: 37.191
- type: mrr_at_3
value: 33.762
- type: mrr_at_5
value: 34.997
- type: ndcg_at_1
value: 28.063
- type: ndcg_at_10
value: 36.951
- type: ndcg_at_100
value: 43.287
- type: ndcg_at_1000
value: 45.777
- type: ndcg_at_3
value: 32.786
- type: ndcg_at_5
value: 34.65
- type: precision_at_1
value: 28.063
- type: precision_at_10
value: 7.055
- type: precision_at_100
value: 1.476
- type: precision_at_1000
value: 0.22899999999999998
- type: precision_at_3
value: 15.481
- type: precision_at_5
value: 11.186
- type: recall_at_1
value: 23.133
- type: recall_at_10
value: 47.285
- type: recall_at_100
value: 76.176
- type: recall_at_1000
value: 92.176
- type: recall_at_3
value: 35.223
- type: recall_at_5
value: 40.142
- type: map_at_1
value: 19.547
- type: map_at_10
value: 26.374
- type: map_at_100
value: 27.419
- type: map_at_1000
value: 27.539
- type: map_at_3
value: 23.882
- type: map_at_5
value: 25.163999999999998
- type: mrr_at_1
value: 21.442
- type: mrr_at_10
value: 28.458
- type: mrr_at_100
value: 29.360999999999997
- type: mrr_at_1000
value: 29.448999999999998
- type: mrr_at_3
value: 25.97
- type: mrr_at_5
value: 27.273999999999997
- type: ndcg_at_1
value: 21.442
- type: ndcg_at_10
value: 30.897000000000002
- type: ndcg_at_100
value: 35.99
- type: ndcg_at_1000
value: 38.832
- type: ndcg_at_3
value: 25.944
- type: ndcg_at_5
value: 28.126
- type: precision_at_1
value: 21.442
- type: precision_at_10
value: 4.9910000000000005
- type: precision_at_100
value: 0.8109999999999999
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.911
- type: recall_at_1
value: 19.547
- type: recall_at_10
value: 42.886
- type: recall_at_100
value: 66.64999999999999
- type: recall_at_1000
value: 87.368
- type: recall_at_3
value: 29.143
- type: recall_at_5
value: 34.544000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.572
- type: map_at_10
value: 25.312
- type: map_at_100
value: 27.062
- type: map_at_1000
value: 27.253
- type: map_at_3
value: 21.601
- type: map_at_5
value: 23.473
- type: mrr_at_1
value: 34.984
- type: mrr_at_10
value: 46.406
- type: mrr_at_100
value: 47.179
- type: mrr_at_1000
value: 47.21
- type: mrr_at_3
value: 43.485
- type: mrr_at_5
value: 45.322
- type: ndcg_at_1
value: 34.984
- type: ndcg_at_10
value: 34.344
- type: ndcg_at_100
value: 41.015
- type: ndcg_at_1000
value: 44.366
- type: ndcg_at_3
value: 29.119
- type: ndcg_at_5
value: 30.825999999999997
- type: precision_at_1
value: 34.984
- type: precision_at_10
value: 10.358
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 21.368000000000002
- type: precision_at_5
value: 15.948
- type: recall_at_1
value: 15.572
- type: recall_at_10
value: 39.367999999999995
- type: recall_at_100
value: 62.183
- type: recall_at_1000
value: 80.92200000000001
- type: recall_at_3
value: 26.131999999999998
- type: recall_at_5
value: 31.635999999999996
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.848
- type: map_at_10
value: 19.25
- type: map_at_100
value: 27.193
- type: map_at_1000
value: 28.721999999999998
- type: map_at_3
value: 13.968
- type: map_at_5
value: 16.283
- type: mrr_at_1
value: 68.75
- type: mrr_at_10
value: 76.25
- type: mrr_at_100
value: 76.534
- type: mrr_at_1000
value: 76.53999999999999
- type: mrr_at_3
value: 74.667
- type: mrr_at_5
value: 75.86699999999999
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 41.426
- type: ndcg_at_100
value: 45.660000000000004
- type: ndcg_at_1000
value: 53.02
- type: ndcg_at_3
value: 46.581
- type: ndcg_at_5
value: 43.836999999999996
- type: precision_at_1
value: 68.75
- type: precision_at_10
value: 32.800000000000004
- type: precision_at_100
value: 10.440000000000001
- type: precision_at_1000
value: 1.9980000000000002
- type: precision_at_3
value: 49.667
- type: precision_at_5
value: 42.25
- type: recall_at_1
value: 8.848
- type: recall_at_10
value: 24.467
- type: recall_at_100
value: 51.344
- type: recall_at_1000
value: 75.235
- type: recall_at_3
value: 15.329
- type: recall_at_5
value: 18.892999999999997
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.95
- type: f1
value: 43.44563593360779
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 78.036
- type: map_at_10
value: 85.639
- type: map_at_100
value: 85.815
- type: map_at_1000
value: 85.829
- type: map_at_3
value: 84.795
- type: map_at_5
value: 85.336
- type: mrr_at_1
value: 84.353
- type: mrr_at_10
value: 90.582
- type: mrr_at_100
value: 90.617
- type: mrr_at_1000
value: 90.617
- type: mrr_at_3
value: 90.132
- type: mrr_at_5
value: 90.447
- type: ndcg_at_1
value: 84.353
- type: ndcg_at_10
value: 89.003
- type: ndcg_at_100
value: 89.60000000000001
- type: ndcg_at_1000
value: 89.836
- type: ndcg_at_3
value: 87.81400000000001
- type: ndcg_at_5
value: 88.478
- type: precision_at_1
value: 84.353
- type: precision_at_10
value: 10.482
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 33.257999999999996
- type: precision_at_5
value: 20.465
- type: recall_at_1
value: 78.036
- type: recall_at_10
value: 94.517
- type: recall_at_100
value: 96.828
- type: recall_at_1000
value: 98.261
- type: recall_at_3
value: 91.12
- type: recall_at_5
value: 92.946
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.191
- type: map_at_10
value: 32.369
- type: map_at_100
value: 34.123999999999995
- type: map_at_1000
value: 34.317
- type: map_at_3
value: 28.71
- type: map_at_5
value: 30.607
- type: mrr_at_1
value: 40.894999999999996
- type: mrr_at_10
value: 48.842
- type: mrr_at_100
value: 49.599
- type: mrr_at_1000
value: 49.647000000000006
- type: mrr_at_3
value: 46.785
- type: mrr_at_5
value: 47.672
- type: ndcg_at_1
value: 40.894999999999996
- type: ndcg_at_10
value: 39.872
- type: ndcg_at_100
value: 46.126
- type: ndcg_at_1000
value: 49.476
- type: ndcg_at_3
value: 37.153000000000006
- type: ndcg_at_5
value: 37.433
- type: precision_at_1
value: 40.894999999999996
- type: precision_at_10
value: 10.818
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 25.051000000000002
- type: precision_at_5
value: 17.531
- type: recall_at_1
value: 20.191
- type: recall_at_10
value: 45.768
- type: recall_at_100
value: 68.82000000000001
- type: recall_at_1000
value: 89.133
- type: recall_at_3
value: 33.296
- type: recall_at_5
value: 38.022
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.257
- type: map_at_10
value: 61.467000000000006
- type: map_at_100
value: 62.364
- type: map_at_1000
value: 62.424
- type: map_at_3
value: 58.228
- type: map_at_5
value: 60.283
- type: mrr_at_1
value: 78.515
- type: mrr_at_10
value: 84.191
- type: mrr_at_100
value: 84.378
- type: mrr_at_1000
value: 84.385
- type: mrr_at_3
value: 83.284
- type: mrr_at_5
value: 83.856
- type: ndcg_at_1
value: 78.515
- type: ndcg_at_10
value: 69.78999999999999
- type: ndcg_at_100
value: 72.886
- type: ndcg_at_1000
value: 74.015
- type: ndcg_at_3
value: 65.23
- type: ndcg_at_5
value: 67.80199999999999
- type: precision_at_1
value: 78.515
- type: precision_at_10
value: 14.519000000000002
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 41.702
- type: precision_at_5
value: 27.046999999999997
- type: recall_at_1
value: 39.257
- type: recall_at_10
value: 72.59299999999999
- type: recall_at_100
value: 84.679
- type: recall_at_1000
value: 92.12
- type: recall_at_3
value: 62.552
- type: recall_at_5
value: 67.616
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.5152
- type: ap
value: 87.64584669595709
- type: f1
value: 91.50605576428437
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.926000000000002
- type: map_at_10
value: 34.049
- type: map_at_100
value: 35.213
- type: map_at_1000
value: 35.265
- type: map_at_3
value: 30.309
- type: map_at_5
value: 32.407000000000004
- type: mrr_at_1
value: 22.55
- type: mrr_at_10
value: 34.657
- type: mrr_at_100
value: 35.760999999999996
- type: mrr_at_1000
value: 35.807
- type: mrr_at_3
value: 30.989
- type: mrr_at_5
value: 33.039
- type: ndcg_at_1
value: 22.55
- type: ndcg_at_10
value: 40.842
- type: ndcg_at_100
value: 46.436
- type: ndcg_at_1000
value: 47.721999999999994
- type: ndcg_at_3
value: 33.209
- type: ndcg_at_5
value: 36.943
- type: precision_at_1
value: 22.55
- type: precision_at_10
value: 6.447
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.136000000000001
- type: precision_at_5
value: 10.381
- type: recall_at_1
value: 21.926000000000002
- type: recall_at_10
value: 61.724999999999994
- type: recall_at_100
value: 87.604
- type: recall_at_1000
value: 97.421
- type: recall_at_3
value: 40.944
- type: recall_at_5
value: 49.915
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.54765161878704
- type: f1
value: 93.3298945415573
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.71591427268582
- type: f1
value: 59.32113870474471
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.83053127101547
- type: f1
value: 73.60757944876475
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.72562205783457
- type: f1
value: 78.63761662505502
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.37935633767996
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.55270546130387
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.462692753143834
- type: mrr
value: 31.497569753511563
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.646
- type: map_at_10
value: 12.498
- type: map_at_100
value: 15.486
- type: map_at_1000
value: 16.805999999999997
- type: map_at_3
value: 9.325
- type: map_at_5
value: 10.751
- type: mrr_at_1
value: 43.034
- type: mrr_at_10
value: 52.662
- type: mrr_at_100
value: 53.189
- type: mrr_at_1000
value: 53.25
- type: mrr_at_3
value: 50.929
- type: mrr_at_5
value: 51.92
- type: ndcg_at_1
value: 41.796
- type: ndcg_at_10
value: 33.477000000000004
- type: ndcg_at_100
value: 29.996000000000002
- type: ndcg_at_1000
value: 38.864
- type: ndcg_at_3
value: 38.940000000000005
- type: ndcg_at_5
value: 36.689
- type: precision_at_1
value: 43.034
- type: precision_at_10
value: 24.799
- type: precision_at_100
value: 7.432999999999999
- type: precision_at_1000
value: 1.9929999999999999
- type: precision_at_3
value: 36.842000000000006
- type: precision_at_5
value: 32.135999999999996
- type: recall_at_1
value: 5.646
- type: recall_at_10
value: 15.963
- type: recall_at_100
value: 29.492
- type: recall_at_1000
value: 61.711000000000006
- type: recall_at_3
value: 10.585
- type: recall_at_5
value: 12.753999999999998
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.602
- type: map_at_10
value: 41.545
- type: map_at_100
value: 42.644999999999996
- type: map_at_1000
value: 42.685
- type: map_at_3
value: 37.261
- type: map_at_5
value: 39.706
- type: mrr_at_1
value: 31.141000000000002
- type: mrr_at_10
value: 44.139
- type: mrr_at_100
value: 44.997
- type: mrr_at_1000
value: 45.025999999999996
- type: mrr_at_3
value: 40.503
- type: mrr_at_5
value: 42.64
- type: ndcg_at_1
value: 31.141000000000002
- type: ndcg_at_10
value: 48.995
- type: ndcg_at_100
value: 53.788000000000004
- type: ndcg_at_1000
value: 54.730000000000004
- type: ndcg_at_3
value: 40.844
- type: ndcg_at_5
value: 44.955
- type: precision_at_1
value: 31.141000000000002
- type: precision_at_10
value: 8.233
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 18.579
- type: precision_at_5
value: 13.533999999999999
- type: recall_at_1
value: 27.602
- type: recall_at_10
value: 69.216
- type: recall_at_100
value: 90.252
- type: recall_at_1000
value: 97.27
- type: recall_at_3
value: 47.987
- type: recall_at_5
value: 57.438
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.949
- type: map_at_10
value: 84.89999999999999
- type: map_at_100
value: 85.531
- type: map_at_1000
value: 85.548
- type: map_at_3
value: 82.027
- type: map_at_5
value: 83.853
- type: mrr_at_1
value: 81.69999999999999
- type: mrr_at_10
value: 87.813
- type: mrr_at_100
value: 87.917
- type: mrr_at_1000
value: 87.91799999999999
- type: mrr_at_3
value: 86.938
- type: mrr_at_5
value: 87.53999999999999
- type: ndcg_at_1
value: 81.75
- type: ndcg_at_10
value: 88.55499999999999
- type: ndcg_at_100
value: 89.765
- type: ndcg_at_1000
value: 89.871
- type: ndcg_at_3
value: 85.905
- type: ndcg_at_5
value: 87.41
- type: precision_at_1
value: 81.75
- type: precision_at_10
value: 13.403
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.597
- type: precision_at_5
value: 24.69
- type: recall_at_1
value: 70.949
- type: recall_at_10
value: 95.423
- type: recall_at_100
value: 99.509
- type: recall_at_1000
value: 99.982
- type: recall_at_3
value: 87.717
- type: recall_at_5
value: 92.032
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 51.76962893449579
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.32897690686379
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.478
- type: map_at_10
value: 11.994
- type: map_at_100
value: 13.977
- type: map_at_1000
value: 14.295
- type: map_at_3
value: 8.408999999999999
- type: map_at_5
value: 10.024
- type: mrr_at_1
value: 22.1
- type: mrr_at_10
value: 33.526
- type: mrr_at_100
value: 34.577000000000005
- type: mrr_at_1000
value: 34.632000000000005
- type: mrr_at_3
value: 30.217
- type: mrr_at_5
value: 31.962000000000003
- type: ndcg_at_1
value: 22.1
- type: ndcg_at_10
value: 20.191
- type: ndcg_at_100
value: 27.954
- type: ndcg_at_1000
value: 33.491
- type: ndcg_at_3
value: 18.787000000000003
- type: ndcg_at_5
value: 16.378999999999998
- type: precision_at_1
value: 22.1
- type: precision_at_10
value: 10.69
- type: precision_at_100
value: 2.1919999999999997
- type: precision_at_1000
value: 0.35200000000000004
- type: precision_at_3
value: 17.732999999999997
- type: precision_at_5
value: 14.499999999999998
- type: recall_at_1
value: 4.478
- type: recall_at_10
value: 21.657
- type: recall_at_100
value: 44.54
- type: recall_at_1000
value: 71.542
- type: recall_at_3
value: 10.778
- type: recall_at_5
value: 14.687
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.82325259156718
- type: cos_sim_spearman
value: 79.2463589100662
- type: euclidean_pearson
value: 80.48318380496771
- type: euclidean_spearman
value: 79.34451935199979
- type: manhattan_pearson
value: 80.39041824178759
- type: manhattan_spearman
value: 79.23002892700211
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.74130231431258
- type: cos_sim_spearman
value: 78.36856568042397
- type: euclidean_pearson
value: 82.48301631890303
- type: euclidean_spearman
value: 78.28376980722732
- type: manhattan_pearson
value: 82.43552075450525
- type: manhattan_spearman
value: 78.22702443947126
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 79.96138619461459
- type: cos_sim_spearman
value: 81.85436343502379
- type: euclidean_pearson
value: 81.82895226665367
- type: euclidean_spearman
value: 82.22707349602916
- type: manhattan_pearson
value: 81.66303369445873
- type: manhattan_spearman
value: 82.05030197179455
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 80.05481244198648
- type: cos_sim_spearman
value: 80.85052504637808
- type: euclidean_pearson
value: 80.86728419744497
- type: euclidean_spearman
value: 81.033786401512
- type: manhattan_pearson
value: 80.90107531061103
- type: manhattan_spearman
value: 81.11374116827795
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 84.615220756399
- type: cos_sim_spearman
value: 86.46858500002092
- type: euclidean_pearson
value: 86.08307800247586
- type: euclidean_spearman
value: 86.72691443870013
- type: manhattan_pearson
value: 85.96155594487269
- type: manhattan_spearman
value: 86.605909505275
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.14363913634436
- type: cos_sim_spearman
value: 84.48430226487102
- type: euclidean_pearson
value: 83.75303424801902
- type: euclidean_spearman
value: 84.56762380734538
- type: manhattan_pearson
value: 83.6135447165928
- type: manhattan_spearman
value: 84.39898212616731
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.09909252554525
- type: cos_sim_spearman
value: 85.70951402743276
- type: euclidean_pearson
value: 87.1991936239908
- type: euclidean_spearman
value: 86.07745840612071
- type: manhattan_pearson
value: 87.25039137549952
- type: manhattan_spearman
value: 85.99938746659761
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.529332093413615
- type: cos_sim_spearman
value: 65.38177340147439
- type: euclidean_pearson
value: 66.35278011412136
- type: euclidean_spearman
value: 65.47147267032997
- type: manhattan_pearson
value: 66.71804682408693
- type: manhattan_spearman
value: 65.67406521423597
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.45802942885662
- type: cos_sim_spearman
value: 84.8853341842566
- type: euclidean_pearson
value: 84.60915021096707
- type: euclidean_spearman
value: 85.11181242913666
- type: manhattan_pearson
value: 84.38600521210364
- type: manhattan_spearman
value: 84.89045417981723
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.92793380635129
- type: mrr
value: 95.85834191226348
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.74400000000001
- type: map_at_10
value: 65.455
- type: map_at_100
value: 66.106
- type: map_at_1000
value: 66.129
- type: map_at_3
value: 62.719
- type: map_at_5
value: 64.441
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 66.776
- type: mrr_at_100
value: 67.363
- type: mrr_at_1000
value: 67.384
- type: mrr_at_3
value: 64.889
- type: mrr_at_5
value: 66.122
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 69.904
- type: ndcg_at_100
value: 72.807
- type: ndcg_at_1000
value: 73.423
- type: ndcg_at_3
value: 65.405
- type: ndcg_at_5
value: 67.86999999999999
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.08
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 17
- type: recall_at_1
value: 55.74400000000001
- type: recall_at_10
value: 82.122
- type: recall_at_100
value: 95.167
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 70.14399999999999
- type: recall_at_5
value: 76.417
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.86534653465347
- type: cos_sim_ap
value: 96.54142419791388
- type: cos_sim_f1
value: 93.07535641547861
- type: cos_sim_precision
value: 94.81327800829875
- type: cos_sim_recall
value: 91.4
- type: dot_accuracy
value: 99.86435643564356
- type: dot_ap
value: 96.53682260449868
- type: dot_f1
value: 92.98515104966718
- type: dot_precision
value: 95.27806925498426
- type: dot_recall
value: 90.8
- type: euclidean_accuracy
value: 99.86336633663366
- type: euclidean_ap
value: 96.5228676185697
- type: euclidean_f1
value: 92.9735234215886
- type: euclidean_precision
value: 94.70954356846472
- type: euclidean_recall
value: 91.3
- type: manhattan_accuracy
value: 99.85841584158416
- type: manhattan_ap
value: 96.50392760934032
- type: manhattan_f1
value: 92.84642321160581
- type: manhattan_precision
value: 92.8928928928929
- type: manhattan_recall
value: 92.80000000000001
- type: max_accuracy
value: 99.86534653465347
- type: max_ap
value: 96.54142419791388
- type: max_f1
value: 93.07535641547861
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 61.08285408766616
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.640675309010604
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.20333913710715
- type: mrr
value: 54.088813555725324
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.79465221925075
- type: cos_sim_spearman
value: 30.530816059163634
- type: dot_pearson
value: 31.364837244718043
- type: dot_spearman
value: 30.79726823684003
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22599999999999998
- type: map_at_10
value: 1.735
- type: map_at_100
value: 8.978
- type: map_at_1000
value: 20.851
- type: map_at_3
value: 0.613
- type: map_at_5
value: 0.964
- type: mrr_at_1
value: 88
- type: mrr_at_10
value: 92.867
- type: mrr_at_100
value: 92.867
- type: mrr_at_1000
value: 92.867
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 92.667
- type: ndcg_at_1
value: 82
- type: ndcg_at_10
value: 73.164
- type: ndcg_at_100
value: 51.878
- type: ndcg_at_1000
value: 44.864
- type: ndcg_at_3
value: 79.184
- type: ndcg_at_5
value: 76.39
- type: precision_at_1
value: 88
- type: precision_at_10
value: 76.2
- type: precision_at_100
value: 52.459999999999994
- type: precision_at_1000
value: 19.692
- type: precision_at_3
value: 82.667
- type: precision_at_5
value: 80
- type: recall_at_1
value: 0.22599999999999998
- type: recall_at_10
value: 1.942
- type: recall_at_100
value: 12.342
- type: recall_at_1000
value: 41.42
- type: recall_at_3
value: 0.637
- type: recall_at_5
value: 1.034
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.567
- type: map_at_10
value: 13.116
- type: map_at_100
value: 19.39
- type: map_at_1000
value: 20.988
- type: map_at_3
value: 7.109
- type: map_at_5
value: 9.950000000000001
- type: mrr_at_1
value: 42.857
- type: mrr_at_10
value: 57.404999999999994
- type: mrr_at_100
value: 58.021
- type: mrr_at_1000
value: 58.021
- type: mrr_at_3
value: 54.762
- type: mrr_at_5
value: 56.19
- type: ndcg_at_1
value: 38.775999999999996
- type: ndcg_at_10
value: 30.359
- type: ndcg_at_100
value: 41.284
- type: ndcg_at_1000
value: 52.30200000000001
- type: ndcg_at_3
value: 36.744
- type: ndcg_at_5
value: 34.326
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 26.122
- type: precision_at_100
value: 8.082
- type: precision_at_1000
value: 1.559
- type: precision_at_3
value: 40.136
- type: precision_at_5
value: 35.510000000000005
- type: recall_at_1
value: 3.567
- type: recall_at_10
value: 19.045
- type: recall_at_100
value: 49.979
- type: recall_at_1000
value: 84.206
- type: recall_at_3
value: 8.52
- type: recall_at_5
value: 13.103000000000002
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 68.8394
- type: ap
value: 13.454399712443099
- type: f1
value: 53.04963076364322
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.546123372948514
- type: f1
value: 60.86952793277713
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.10042955060234
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.03308100375514
- type: cos_sim_ap
value: 71.08284605869684
- type: cos_sim_f1
value: 65.42539436255494
- type: cos_sim_precision
value: 64.14807302231237
- type: cos_sim_recall
value: 66.75461741424802
- type: dot_accuracy
value: 84.68736961316088
- type: dot_ap
value: 69.20524036530992
- type: dot_f1
value: 63.54893953365829
- type: dot_precision
value: 63.45698500394633
- type: dot_recall
value: 63.641160949868066
- type: euclidean_accuracy
value: 85.07480479227513
- type: euclidean_ap
value: 71.14592761009864
- type: euclidean_f1
value: 65.43814432989691
- type: euclidean_precision
value: 63.95465994962216
- type: euclidean_recall
value: 66.99208443271768
- type: manhattan_accuracy
value: 85.06288370984085
- type: manhattan_ap
value: 71.07289742593868
- type: manhattan_f1
value: 65.37585421412301
- type: manhattan_precision
value: 62.816147859922175
- type: manhattan_recall
value: 68.15303430079156
- type: max_accuracy
value: 85.07480479227513
- type: max_ap
value: 71.14592761009864
- type: max_f1
value: 65.43814432989691
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.79058485659952
- type: cos_sim_ap
value: 83.7183187008759
- type: cos_sim_f1
value: 75.86921142180798
- type: cos_sim_precision
value: 73.00683371298405
- type: cos_sim_recall
value: 78.96519864490298
- type: dot_accuracy
value: 87.0085768618776
- type: dot_ap
value: 81.87467488474279
- type: dot_f1
value: 74.04188363990559
- type: dot_precision
value: 72.10507114191901
- type: dot_recall
value: 76.08561749307053
- type: euclidean_accuracy
value: 87.8332751193387
- type: euclidean_ap
value: 83.83585648120315
- type: euclidean_f1
value: 76.02582177042369
- type: euclidean_precision
value: 73.36388371759989
- type: euclidean_recall
value: 78.88820449645827
- type: manhattan_accuracy
value: 87.87208444910156
- type: manhattan_ap
value: 83.8101950642973
- type: manhattan_f1
value: 75.90454195535027
- type: manhattan_precision
value: 72.44419564761039
- type: manhattan_recall
value: 79.71204188481676
- type: max_accuracy
value: 87.87208444910156
- type: max_ap
value: 83.83585648120315
- type: max_f1
value: 76.02582177042369
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#license">License</a>
<p>
</h4>
For more details, please refer to our GitHub repo: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector which can be used for tasks like retrieval, classification, clustering, or semantic search.
It can also be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [**this**](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*` (short for BAAI General Embedding) models, **rank 1st on the MTEB and C-MTEB benchmarks!**
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | Description | query instruction for retrieval\* |
|:-------------------------------|:--------:| :--------:| :--------:|
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | rank **2nd** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | Chinese | This model is trained without instruction, and ranks **2nd** in the [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | a base-scale model with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
\*: If you need to search **long** relevant passages for a **short** query (the s2p retrieval task), you need to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
## Usage
Here are some examples of using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences = ["样例数据-1", "样例数据-2"]
model = FlagModel('BAAI/bge-large-zh', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
embeddings_1 = model.encode(sentences)
embeddings_2 = model.encode(sentences)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For the s2p (short query to long passage) retrieval task, please use encode_queries(), which automatically adds the instruction to each query.
# The corpus in a retrieval task can still use encode() or encode_corpus(), since passages do not need the instruction.
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
FlagModel will use all available GPUs when encoding; set `os.environ["CUDA_VISIBLE_DEVICES"]` to choose a specific GPU.
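For example, a minimal sketch of pinning encoding to a single GPU (the environment variable must be set before the model, and hence CUDA, is initialized):
```python
import os

# Make only GPU 0 visible; this must happen before FlagEmbedding/torch initializes CUDA
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
embeddings = model.encode(["样例数据-1", "样例数据-2"])
```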
#### Using Sentence-Transformers
Using this model is also easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences = ["样例数据-1", "样例数据-2"]
model = SentenceTransformer('BAAI/bge-large-zh')
embeddings_1 = model.encode(sentences, normalize_embeddings=True)
embeddings_2 = model.encode(sentences, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-small-en"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model_norm = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
```
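Once constructed, the wrapper exposes the standard LangChain embedding methods; a short usage sketch (the sample texts are placeholders):
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings

model_norm = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-small-en",
    model_kwargs={'device': 'cuda'},
    encode_kwargs={'normalize_embeddings': True}
)
# embed_query / embed_documents are the generic LangChain Embeddings methods
query_vector = model_norm.embed_query("What is BGE?")
doc_vectors = model_norm.embed_documents(["BGE is a general embedding model.", "It can be used for retrieval."])
print(len(query_vector), len(doc_vectors))
```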
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh')
model = AutoModel.from_pretrained('BAAI/bge-large-zh')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For the s2p (short query to long passage) retrieval task, add an instruction to each query (do not add an instruction to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**bge-large-en**](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | **63.98** | **53.9** | **46.98** | 85.8 | **59.48** | 81.56 | 32.06 | **76.21** |
| [**bge-base-en**](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [**bge-small-en**](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 384 | 512 | 56.53 | 42.69 | 41.81 | 82.41 | 58.44 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 384 | 512 | 56.26 | 41.95 | 42.35 | 82.37 | 58.04 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 768 | 512 | 56.00 | 41.88 | 41.1 | 82.54 | 53.14 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 768 | 512 | 55.27 | 33.63 | 40.21 | 85.18 | 53.09 | 81.14 | 31.39 | 69.81 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings, which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**bge-large-zh**](https://huggingface.co/BAAI/bge-large-zh) | 1024 | **64.20** | **71.53** | **53.23** | **78.94** | 72.26 | **65.11** | 48.39 |
| [**bge-large-zh-noinstruct**](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 50.98 | 76.77 | **72.49** | 64.91 | **50.01** |
| [**BAAI/bge-base-zh**](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 52.05 | 77.5 | 70.98 | 64.91 | 47.63 |
| [**BAAI/bge-small-zh**](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 46.87 | 70.35 | 67.78 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 |56.91 | 48.15 | 63.99 | 70.28 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 |54.75 | 48.64 | 64.3 | 71.22 | 59.66 | 48.88 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 40.61 | 69.56 | 67.38 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 39.41 | 66.62 | 65.29 | 49.25 | 44.39 |
| [text2vec](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 41.71 | 67.41 | 65.18 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 41.98 | 70.86 | 63.42 | 49.16 | 30.02 |
## Train
This section introduces the method we used to train the general embedding models.
The training scripts are in [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md),
and we provide some examples for [pre-training](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/pretrain/README.md) and [fine-tuning](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/finetune/README.md).
**1. RetroMAE Pre-train**
We pre-train the model following the [RetroMAE](https://github.com/staoxiao/RetroMAE) method,
which shows promising improvements on retrieval tasks ([paper](https://aclanthology.org/2022.emnlp-main.35.pdf)).
The pre-training was conducted on 24 A100 (40G) GPUs with a batch size of 720.
In RetroMAE, the mask ratios of the encoder and decoder are 0.3 and 0.5, respectively.
We used the AdamW optimizer with a learning rate of 2e-5.
**Pre-training data**:
- English:
- [Pile](https://pile.eleuther.ai/)
- [wikipedia](https://huggingface.co/datasets/wikipedia)
- [msmarco](https://huggingface.co/datasets/Tevatron/msmarco-passage-corpus)
- Chinese:
- [wudao](https://github.com/BAAI-WuDao/Data)
**2. Finetune**
We fine-tune the model using a contrastive objective.
The format of the input data is a triple `(query, positive, negative)`.
Besides the negative in the triple, we also adopt an in-batch negatives strategy.
We employ the cross-device negatives sharing method to share negatives among different GPUs,
which can dramatically **increase the number of negatives**.
We trained our model on 48 A100 (40G) GPUs with a large batch size of 32,768 (so there are **65,535** negatives for each query in a batch).
We used the AdamW optimizer with a learning rate of 1e-5.
The temperature for contrastive loss is 0.01.
In addition, during training we add an instruction to the query for the s2p (short query to long passage) retrieval task (nothing is added to passages).
For English, the instruction is `Represent this sentence for searching relevant passages: `;
For Chinese, the instruction is `为这个句子生成表示以用于检索相关文章:`.
During evaluation, the instruction should be added to queries for retrieval tasks, but not for other tasks.
Note that the instruction is not needed for passages.
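As an illustration only (not the released training code), the objective described above corresponds roughly to an InfoNCE-style loss over in-batch negatives with temperature 0.01; a minimal PyTorch sketch under that assumption:
```python
import torch
import torch.nn.functional as F

def contrastive_loss(q_emb, p_emb, temperature=0.01):
    # q_emb: (batch, dim) query embeddings; p_emb: (batch, dim) positive passage embeddings.
    # Every other row of p_emb serves as an in-batch negative for a given query.
    q_emb = F.normalize(q_emb, dim=-1)
    p_emb = F.normalize(p_emb, dim=-1)
    logits = q_emb @ p_emb.T / temperature                      # (batch, batch) similarity matrix
    labels = torch.arange(q_emb.size(0), device=q_emb.device)   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```
In the described setup, the explicit hard negative from each triple and the negatives gathered from other GPUs would enter as additional columns of this similarity matrix.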
The finetune script is accessible in this repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
You can easily finetune your model with it.
**Training data**:
- For English, we collected 230M text pairs from [wikipedia](https://huggingface.co/datasets/wikipedia), [cc-net](https://github.com/facebookresearch/cc_net), and so on.
- For Chinese, we collected 120M text pairs from [wudao](https://github.com/BAAI-WuDao/Data), [simclue](https://github.com/CLUEbenchmark/SimCLUE), and so on.
**The data collection is to be released in the future.**
We will continually update the embedding models and training codes,
hoping to promote the development of the embedding model community.
## License
FlagEmbedding is licensed under [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge. | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
aisingapore/gemma2-9b-cpt-sea-lionv3-base | aisingapore | text-generation | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"en",
"zh",
"vi",
"id",
"th",
"fil",
"ta",
"ms",
"km",
"lo",
"my",
"arxiv:2309.06085",
"arxiv:2101.09635",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,730,258,495,000 | 2024-12-19T12:56:00 | 550 | 2 | ---
base_model: google/gemma-2-9b
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
library_name: transformers
license: gemma
pipeline_tag: text-generation
---
<div>
<img src="gemma_2_9b_sea-lion_v3_base_banner.png"/>
</div>
# Gemma2 9B CPT SEA-LIONv3
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Gemma2 9B CPT SEA-LIONv3 Base is a multilingual model which has undergone continued pre-training on approximately **200B** tokens across the 11 official Southeast Asian languages: English, Chinese, Vietnamese, Indonesian, Thai, Tamil, Filipino, Malay, Khmer, Lao, Burmese.
SEA-LION stands for <i>Southeast Asian Languages In One Network</i>.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages supported:** Burmese, Chinese, English, Filipino, Indonesian, Khmer, Lao, Malay, Tamil, Thai, Vietnamese
- **License:** [Gemma Community License](https://ai.google.dev/gemma/terms)
## Model Details
### Model Description
We performed continued pre-training in English and ASEAN languages on [Gemma-2-9B](https://huggingface.co/google/gemma-2-9b), a decoder model using the Gemma 2 architecture, to create Gemma2 9B CPT SEA-LIONv3 Base.
For tokenisation, the model employs the default tokenizer used in Gemma 2 9B.
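The base model can be loaded with the standard Hugging Face Transformers causal-LM API; a minimal sketch (not an official snippet from this card, and the prompt and generation settings are illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/gemma2-9b-cpt-sea-lionv3-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Base (non-instruct) model: plain text continuation, no chat template
inputs = tokenizer("Ibu kota Indonesia adalah", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```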
### Benchmark Performance
We evaluated Gemma2 9B CPT SEA-LIONv3 base model on general language capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed the [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
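For intuition, this kind of baseline correction typically rescales the raw score so that random-chance performance maps to 0 and a perfect score to 100; the following is a sketch of the idea only, since the exact SEA HELM formula is not reproduced here:
```python
def normalise(raw_score, random_baseline, max_score=100.0):
    # Map random-chance performance to 0 and a perfect score to 100
    return max(0.0, 100.0 * (raw_score - random_baseline) / (max_score - random_baseline))

# e.g. a 4-option multiple-choice task has a random baseline of 25
print(normalise(62.5, 25.0))  # 50.0
```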
The evaluation was done **five-shot** with native prompts on a sample of 100-1000 instances for each dataset.
For more details on Gemma2 9B CPT SEA-LIONv3 base benchmark performance, please refer to the SEA HELM leaderboard, https://leaderboard.sea-lion.ai/
## Technical Specifications
### Infrastructure
Gemma2 9B CPT SEA-LIONv3 was trained using [MosaicML Composer](https://github.com/mosaicml/composer) on the following hardware:
| Training Details | Gemma2 9B CPT SEA-LIONv3 |
|----------------------|:------------------------:|
| SingTel HGX-100 | 8 instances |
| Nvidia H100 80GB GPU | 64 |
| Training Duration | 10 days |
### Configuration
| HyperParameter | Gemma2 9B CPT SEA-LIONv3 |
|-------------------|:------------------------:|
| Precision | bfloat16 |
| Optimizer | decoupled_adamw |
| Scheduler | weight_stable_decay |
| Learning Rate | 1.0e-5 |
| Global Batch Size | 512 |
| Micro Batch Size | 1 |
## Data
The Gemma2 9B CPT SEA-LIONv3 base model underwent continued pre-training on 200B tokens of the following data:
| Language | Source | Total Tokens (B) | Percentage (%) | Total percentage (%) |
| ------------------------ | ---------------- | ---------------- | -------------- | -------------------- |
| Code | StackV2 | 40 | 20 | 20 |
| English | Dolma | 37.5 | 18.75 | 25 |
| | Fineweb-Edu | 7.5 | 3.75 | |
| | Others | 5 | 2.5 | |
| Chinese | SEA-LION Pile v1 | 12 | 6 | 13 |
| | Others | 14 | 7 | |
| Vietnamese | SEA-LION Pile v1 | 8.4 | 4.2 | 13 |
| | VinBigData | 16 | 8 | |
| | Others | 1.6 | 0.8 | |
| Indonesian | SEA-LION Pile v1 | 7 | 3.5 | 13 |
| | SEA-LION Pile v2 | 7 | 3.5 | |
| | Others | 12 | 6 | |
| Thai | SEA-LION Pile v1 | 10.7 | 5.35 | 10 |
| | WangChanBERTa | 8.5 | 4.25 | |
| | Others | 0.8 | 0.4 | |
| Filipino - Malay - Tamil | SEA-LION Pile v1 | 4.28 | 2.14 | 3 |
| | Others | 1.72 | 0.86 | |
| Khmer - Lao - Burmese | SEA-LION Pile v1 | 5.2 | 2.6 | 3 |
| | Others | 0.8 | 0.4 | |
Note:
- All token counts are computed using the Gemma 2 9B tokenizer (see the sketch after this list)
- SEA-LION Pile v1 is processed from Common Crawl WET, which is published [here](https://huggingface.co/datasets/aisingapore/sea-lion-pile). The cutoff date of this version is September 2020.
- SEA-LION Pile v2 is processed from Common Crawl WARC from October 2020 to April 2024.
- Tamil news is sourced with permission from [Seithi](https://seithi.mediacorp.sg/)
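A minimal sketch of how such counts can be reproduced with the Gemma 2 9B tokenizer (illustrative only; the project's actual counting pipeline is not released here):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b")

def count_tokens(texts):
    # Sum token counts over an iterable of raw text documents
    return sum(len(tokenizer(t, add_special_tokens=False)["input_ids"]) for t in texts)

print(count_tokens(["Selamat pagi, dunia!", "สวัสดีครับ"]))
```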
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form.](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository.](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial base model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
## References
### Thai Pre-Training Data Reference
```bibtex
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"CHIA"
] | Non_BioNLP |
ntc-ai/SDXL-LoRA-slider.blonde-hair | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,703,890,493,000 | 2023-12-29T22:54:57 | 133 | 1 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/evaluate/blonde hair.../blonde hair_17_3.0.png
widget:
- text: blonde hair
output:
url: images/blonde hair_17_3.0.png
- text: blonde hair
output:
url: images/blonde hair_19_3.0.png
- text: blonde hair
output:
url: images/blonde hair_20_3.0.png
- text: blonde hair
output:
url: images/blonde hair_21_3.0.png
- text: blonde hair
output:
url: images/blonde hair_22_3.0.png
inference: false
instance_prompt: blonde hair
---
# ntcai.xyz slider - blonde hair (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/blonde hair_17_-3.0.png" width=256 height=256 /> | <img src="images/blonde hair_17_0.0.png" width=256 height=256 /> | <img src="images/blonde hair_17_3.0.png" width=256 height=256 /> |
| <img src="images/blonde hair_19_-3.0.png" width=256 height=256 /> | <img src="images/blonde hair_19_0.0.png" width=256 height=256 /> | <img src="images/blonde hair_19_3.0.png" width=256 height=256 /> |
| <img src="images/blonde hair_20_-3.0.png" width=256 height=256 /> | <img src="images/blonde hair_20_0.0.png" width=256 height=256 /> | <img src="images/blonde hair_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
blonde hair
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.blonde-hair', weight_name='blonde hair.safetensors', adapter_name="blonde hair")
# Activate the LoRA
pipe.set_adapters(["blonde hair"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, blonde hair"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 720+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
ntc-ai/SDXL-LoRA-slider.catwalk | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,702,244,582,000 | 2024-02-06T00:28:54 | 26 | 0 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/catwalk_17_3.0.png
widget:
- text: catwalk
output:
url: images/catwalk_17_3.0.png
- text: catwalk
output:
url: images/catwalk_19_3.0.png
- text: catwalk
output:
url: images/catwalk_20_3.0.png
- text: catwalk
output:
url: images/catwalk_21_3.0.png
- text: catwalk
output:
url: images/catwalk_22_3.0.png
inference: false
instance_prompt: catwalk
---
# ntcai.xyz slider - catwalk (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/catwalk_17_-3.0.png" width=256 height=256 /> | <img src="images/catwalk_17_0.0.png" width=256 height=256 /> | <img src="images/catwalk_17_3.0.png" width=256 height=256 /> |
| <img src="images/catwalk_19_-3.0.png" width=256 height=256 /> | <img src="images/catwalk_19_0.0.png" width=256 height=256 /> | <img src="images/catwalk_19_3.0.png" width=256 height=256 /> |
| <img src="images/catwalk_20_-3.0.png" width=256 height=256 /> | <img src="images/catwalk_20_0.0.png" width=256 height=256 /> | <img src="images/catwalk_20_3.0.png" width=256 height=256 /> |
See more at [https://sliders.ntcai.xyz/sliders/app/loras/5aa8afbd-670c-4bf2-80c1-7691682375f5](https://sliders.ntcai.xyz/sliders/app/loras/5aa8afbd-670c-4bf2-80c1-7691682375f5)
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
catwalk
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.catwalk', weight_name='catwalk.safetensors', adapter_name="catwalk")
# Activate the LoRA
pipe.set_adapters(["catwalk"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, catwalk"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1496+ unique and diverse LoRAs along with 14600+ slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges opening up endless possibilities.
Your support on Patreon will allow us to continue developing new models and tools.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
jncraton/Falcon3-1B-Instruct-ct2-int8 | jncraton | null | [
"ctranslate2",
"falcon3",
"en",
"fr",
"es",
"pt",
"base_model:tiiuae/Falcon3-1B-Instruct",
"base_model:quantized:tiiuae/Falcon3-1B-Instruct",
"license:other",
"region:us"
] | 1,734,459,007,000 | 2024-12-19T15:35:12 | 8 | 0 | ---
base_model: tiiuae/Falcon3-1B-Instruct
language:
- en
- fr
- es
- pt
library_name: ctranslate2
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
tags:
- falcon3
base_model_relation: quantized
---
<div align="center">
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
</div>
# Falcon3-1B-Instruct
The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct-tuned LLMs ranging from 1B to 10B parameters.
This repository contains the **Falcon3-1B-Instruct**. It achieves strong results on reasoning, language understanding, instruction following, code and mathematics tasks.
Falcon3-1B-Instruct supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 8K.
## Model Details
- Architecture
- Transformer-based causal decoder-only architecture
- 18 decoder blocks
- Grouped Query Attention (GQA) for faster inference: 8 query heads and 4 key-value heads
- Wider head dimension: 256
- High RoPE value to support long context understanding: 1000042
- Uses SwiGLU and RMSNorm
- 8K context length
- 131K vocab size
- Pruned and healed using larger Falcon models (3B and 7B respectively) on only 80 Gigatokens of datasets comprising web, code, STEM, high-quality and multilingual data using 256 H100 GPU chips
- Posttrained on 1.2 million samples of STEM, conversational, code, safety and function call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
## Getting started
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "tiiuae/Falcon3-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many hours in one day?"
messages = [
{"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
</details>
<br>
## Benchmarks
We report in the following table our internal pipeline benchmarks.
- We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
- We report **raw scores** obtained by applying chat template **without fewshot_as_multiturn** (unlike Llama3.1).
- We use same batch-size across all models.
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
<colgroup>
<col style="width: 10%;">
<col style="width: 10%;">
<col style="width: 7%;">
<col style="width: 7%;">
<col style="width: 7%;">
<col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
</colgroup>
<thead>
<tr>
<th>Category</th>
<th>Benchmark</th>
<th>Llama-3.2-1B</th>
<th>Qwen2.5-1.5B</th>
<th>SmolLM2-1.7B</th>
<th>Falcon3-1B-Instruct</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3">General</td>
<td>MMLU (5-shot)</td>
<td>23.4</td>
<td><b>58.4</b></td>
<td>48.4</td>
<td>43.9</td>
</tr>
<tr>
<td>MMLU-PRO (5-shot)</td>
<td>11.3</td>
<td><b>21.3</b></td>
<td>17.2</td>
<td>18.6</td>
</tr>
<tr>
<td>IFEval</td>
<td><b>55.8</b></td>
<td>44.4</td>
<td>53.0</td>
<td>54.4</td>
</tr>
<tr>
<td rowspan="3">Math</td>
<td>GSM8K (5-shot)</td>
<td>37.4</td>
<td><b>57.2</b></td>
<td>43.4</td>
<td>38.6</td>
</tr>
<tr>
<td>GSM8K (8-shot, COT)</td>
<td>35.6</td>
<td><b>62.2</b></td>
<td>47.2</td>
<td>41.8</td>
</tr>
<tr>
<td>MATH Lvl-5 (4-shot)</td>
<td><b>3.9</b></td>
<td>0.2</td>
<td>0.1</td>
<td>1.0</td>
</tr>
<tr>
<td rowspan="6">Reasoning</td>
<td>Arc Challenge (25-shot)</td>
<td>34.1</td>
<td>47.0</td>
<td><b>47.6</b></td>
<td>45.9</td>
</tr>
<tr>
<td>GPQA (0-shot)</td>
<td>25.3</td>
<td><b>29.6</b></td>
<td>28.7</td>
<td>26.5</td>
</tr>
<tr>
<td>GPQA (0-shot, COT)</td>
<td>13.2</td>
<td>9.2</td>
<td>16.0</td>
<td><b>21.3</b></td>
</tr>
<tr>
<td>MUSR (0-shot)</td>
<td>32.4</td>
<td>36.8</td>
<td>33.0</td>
<td><b>40.7</b></td>
</tr>
<tr>
<td>BBH (3-shot)</td>
<td>30.3</td>
<td><b>38.5</b></td>
<td>33.1</td>
<td>35.1</td>
</tr>
<tr>
<td>BBH (3-shot, COT)</td>
<td>0.0</td>
<td>20.3</td>
<td>0.8</td>
<td><b>30.5</b></td>
</tr>
<tr>
<td rowspan="5">CommonSense Understanding</td>
<td>PIQA (0-shot)</td>
<td>72.1</td>
<td>73.2</td>
<td><b>74.4</b></td>
<td>72.0</td>
</tr>
<tr>
<td>SciQ (0-shot)</td>
<td>61.8</td>
<td>69.5</td>
<td>71.4</td>
<td><b>86.8</b></td>
</tr>
<tr>
<td>Winogrande (0-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td><b>60.2</b></td>
</tr>
<tr>
<td>OpenbookQA (0-shot)</td>
<td>40.2</td>
<td>40.4</td>
<td><b>42.8</b></td>
<td>40.0</td>
</tr>
<tr>
<td>MT-Bench (avg)</td>
<td>5.4</td>
<td><b>7.1</b></td>
<td>6.1</td>
<td>5.5</td>
</tr>
<tr>
<td rowspan="1">Instructions following</td>
<td>Alpaca (WC)</td>
<td><b>8.6</b></td>
<td><b>8.6</b></td>
<td>5.4</td>
<td>6.1</td>
</tr>
</tbody>
</table>
## Useful links
- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or to interact with our researchers and developers.
## Technical Report
Coming soon....
## Citation
If the Falcon3 family of models were helpful to your work, feel free to give us a cite.
```
@misc{Falcon3,
title = {The Falcon 3 Family of Open Models},
url = {https://huggingface.co/blog/falcon3},
author = {Falcon-LLM Team},
month = {December},
year = {2024}
}
``` | [
"SCIQ"
] | Non_BioNLP |
AIDA-UPM/MARTINI_enrich_BERTopic_hiddeninplainsight1 | AIDA-UPM | text-classification | [
"bertopic",
"text-classification",
"region:us"
] | 1,736,792,777,000 | 2025-01-13T18:26:29 | 6 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# MARTINI_enrich_BERTopic_hiddeninplainsight1
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_hiddeninplainsight1")
topic_model.get_topic_info()
```
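The loaded model can also assign topics to new documents; a short sketch (the example messages are placeholders):
```python
from bertopic import BERTopic

topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_hiddeninplainsight1")

docs = ["example message about vaccine mandates", "example message about surveillance tech"]
topics, probs = topic_model.transform(docs)
print(topics)  # one topic id per document; -1 marks outliers
```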
## Topic overview
* Number of topics: 41
* Number of training documents: 4643
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | vaccinated - pandemic - deaths - 2021 - mrna | 20 | -1_vaccinated_pandemic_deaths_2021 |
| 0 | transgenderism - lgbtq - pedophiles - brainwashing - god | 2590 | 0_transgenderism_lgbtq_pedophiles_brainwashing |
| 1 | invincible - unshakeable - champion - hope - moments | 168 | 1_invincible_unshakeable_champion_hope |
| 2 | zealanders - christchurch - terrorist - devastation - deported | 144 | 2_zealanders_christchurch_terrorist_devastation |
| 3 | police - australians - violently - toowoomba - tyranny | 134 | 3_police_australians_violently_toowoomba |
| 4 | iphone - radiofrequency - cancers - dangerous - 5g | 114 | 4_iphone_radiofrequency_cancers_dangerous |
| 5 | illuminati - pedophiles - trafficking - cannibalism - satanic | 76 | 5_illuminati_pedophiles_trafficking_cannibalism |
| 6 | detoxing - fulvic - chelates - glutathione - themiracledirt | 72 | 6_detoxing_fulvic_chelates_glutathione |
| 7 | vaccinating - fauci - jabs - deaf - simpsons | 65 | 7_vaccinating_fauci_jabs_deaf |
| 8 | cashless - banks - withdrawals - coinbase - donations | 64 | 8_cashless_banks_withdrawals_coinbase |
| 9 | propolis - glutathione - remedies - garlic - poisoned | 63 | 9_propolis_glutathione_remedies_garlic |
| 10 | robodogs - sniper - webcams - smartphones - unmanned | 61 | 10_robodogs_sniper_webcams_smartphones |
| 11 | fluoridated - poison - cereals - monsanto - coca | 58 | 11_fluoridated_poison_cereals_monsanto |
| 12 | floods - austria - wildfires - chemtrails - devastating | 57 | 12_floods_austria_wildfires_chemtrails |
| 13 | myocarditis - pfizer - injections - clots - cyanotic | 55 | 13_myocarditis_pfizer_injections_clots |
| 14 | vaccinated - deaths - boosters - boris - doses | 51 | 14_vaccinated_deaths_boosters_boris |
| 15 | grubs - cannibalism - meal - suckers - hungry | 51 | 15_grubs_cannibalism_meal_suckers |
| 16 | australia - doctors - vaxxination - lockdown - bureaucrats | 50 | 16_australia_doctors_vaxxination_lockdown |
| 17 | pcr - nasopharyngeal - swabs - test - pineal | 47 | 17_pcr_nasopharyngeal_swabs_test |
| 18 | queensland - vaccinations - lockdown - mandatory - constitutional | 47 | 18_queensland_vaccinations_lockdown_mandatory |
| 19 | biden - cronies - cambodia - teleprompter - clown | 45 | 19_biden_cronies_cambodia_teleprompter |
| 20 | snitch - pedos - discredit - fbi - executions | 43 | 20_snitch_pedos_discredit_fbi |
| 21 | usda - famines - rationed - beans - supermarkets | 41 | 21_usda_famines_rationed_beans |
| 22 | gates - billions - zika - vaccinate - younggloballeader | 37 | 22_gates_billions_zika_vaccinate |
| 23 | quarantine - taiwan - shenzhen - weibo - test | 36 | 23_quarantine_taiwan_shenzhen_weibo |
| 24 | surveillance - australia - chinafication - wechat - fingerprint | 36 | 24_surveillance_australia_chinafication_wechat |
| 25 | pilots - inflight - qantas - mh370 - crash | 36 | 25_pilots_inflight_qantas_mh370 |
| 26 | climate - co2 - thunberg - hoax - mccarthy | 34 | 26_climate_co2_thunberg_hoax |
| 27 | gmo - biosludge - corpses - fake - dissolved | 34 | 27_gmo_biosludge_corpses_fake |
| 28 | remdesivir - ivermectine - hcq - intravenous - dexamethasone | 31 | 28_remdesivir_ivermectine_hcq_intravenous |
| 29 | grapheneagenda - nanoparticles - adjuvant - magnetized - toxicity | 30 | 29_grapheneagenda_nanoparticles_adjuvant_magnetized |
| 30 | doctors - malpractice - rage - whistleblower - wtaf | 28 | 30_doctors_malpractice_rage_whistleblower |
| 31 | deleted - messages - banned - unfollow - spammer | 28 | 31_deleted_messages_banned_unfollow |
| 32 | pfizer - doctored - miscarriages - shots - nurtec | 28 | 32_pfizer_doctored_miscarriages_shots |
| 33 | hillary - trafficking - epstein - fbi - revealed | 27 | 33_hillary_trafficking_epstein_fbi |
| 34 | maskitis - pneumonia - oxygen - vaxxinated - mold | 25 | 34_maskitis_pneumonia_oxygen_vaxxinated |
| 35 | fires - chickens - smoldering - explosion - fertilizer | 24 | 35_fires_chickens_smoldering_explosion |
| 36 | cyberpandemic - attacks - sabotaged - decentralized - russia | 24 | 36_cyberpandemic_attacks_sabotaged_decentralized |
| 37 | trudeau - ottawa - dissidents - saskatchewan - blockade | 24 | 37_trudeau_ottawa_dissidents_saskatchewan |
| 38 | telegram - deleting - censorship - hiddenvideos - encrypted | 23 | 38_telegram_deleting_censorship_hiddenvideos |
| 39 | melatonin - morocco - harmful - shield - orgone | 22 | 39_melatonin_morocco_harmful_shield |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.40
* UMAP: 0.5.7
* Pandas: 2.2.3
* Scikit-Learn: 1.5.2
* Sentence-transformers: 3.3.1
* Transformers: 4.46.3
* Numba: 0.60.0
* Plotly: 5.24.1
* Python: 3.10.12
| [
"PCR"
] | Non_BioNLP |
internistai/base-7b-v0.2-Q4_K_M-GGUF | internistai | null | [
"gguf",
"medical",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:pubmed",
"dataset:medmcqa",
"dataset:maximegmd/medqa_alpaca_format",
"base_model:internistai/base-7b-v0.2",
"base_model:quantized:internistai/base-7b-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,719,092,355,000 | 2024-06-22T21:39:33 | 5 | 0 | ---
base_model: internistai/base-7b-v0.2
datasets:
- Open-Orca/OpenOrca
- pubmed
- medmcqa
- maximegmd/medqa_alpaca_format
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- medical
- llama-cpp
- gguf-my-repo
tag: text-generation
---
# maximegmd/base-7b-v0.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`internistai/base-7b-v0.2`](https://huggingface.co/internistai/base-7b-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/internistai/base-7b-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo maximegmd/base-7b-v0.2-Q4_K_M-GGUF --hf-file base-7b-v0.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo maximegmd/base-7b-v0.2-Q4_K_M-GGUF --hf-file base-7b-v0.2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo maximegmd/base-7b-v0.2-Q4_K_M-GGUF --hf-file base-7b-v0.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo maximegmd/base-7b-v0.2-Q4_K_M-GGUF --hf-file base-7b-v0.2-q4_k_m.gguf -c 2048
```
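The GGUF file can also be loaded from Python with the `llama-cpp-python` bindings; a hedged sketch (package availability and exact keyword arguments should be checked against your installed version):
```python
from llama_cpp import Llama

# Download the quantized GGUF from the Hub and run a short completion
llm = Llama.from_pretrained(
    repo_id="maximegmd/base-7b-v0.2-Q4_K_M-GGUF",
    filename="base-7b-v0.2-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("List three common causes of chest pain:", max_tokens=128)
print(out["choices"][0]["text"])
```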
| [
"MEDQA"
] | BioNLP |
GuCuChiara/NLP-CIC-WFU_DisTEMIST_fine_tuned_bert-base-multilingual-cased | GuCuChiara | token-classification | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,695,599,872,000 | 2023-10-10T14:13:27 | 20 | 0 | ---
base_model: bert-base-multilingual-cased
license: apache-2.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: NLP-CIC-WFU_DisTEMIST_fine_tuned_bert-base-multilingual-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_DisTEMIST_fine_tuned_bert-base-multilingual-cased
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1620
- Precision: 0.6121
- Recall: 0.5161
- F1: 0.5600
- Accuracy: 0.9541
## Model description
More information needed
## Intended uses & limitations
More information needed
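In the absence of documented usage, here is a hedged sketch of loading the checkpoint as a standard token-classification pipeline (the disease-mention labels and the Spanish example are assumptions inferred from the DisTEMIST model name and should be verified against the model's config):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="GuCuChiara/NLP-CIC-WFU_DisTEMIST_fine_tuned_bert-base-multilingual-cased",
    aggregation_strategy="simple",
)
print(ner("El paciente presenta diabetes mellitus tipo 2."))
```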
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 71 | 0.1704 | 0.4558 | 0.3635 | 0.4045 | 0.9353 |
| No log | 2.0 | 142 | 0.1572 | 0.5925 | 0.3518 | 0.4415 | 0.9433 |
| No log | 3.0 | 213 | 0.1386 | 0.5932 | 0.4774 | 0.5290 | 0.9531 |
| No log | 4.0 | 284 | 0.1427 | 0.5945 | 0.5175 | 0.5534 | 0.9533 |
| No log | 5.0 | 355 | 0.1653 | 0.6354 | 0.4788 | 0.5461 | 0.9540 |
| No log | 6.0 | 426 | 0.1620 | 0.6121 | 0.5161 | 0.5600 | 0.9541 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
| [
"DISTEMIST"
] | Non_BioNLP |
espnet/roshansh_asr_base_sp_conformer_swbd | espnet | automatic-speech-recognition | [
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:swbd",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | 1,647,205,901,000 | 2022-03-13T21:30:20 | 1 | 0 | ---
datasets:
- swbd
language: noinfo
license: cc-by-4.0
tags:
- espnet
- audio
- automatic-speech-recognition
---
## ESPnet2 ASR model
### `espnet/roshansh_asr_base_sp_conformer_swbd`
This model was trained by roshansh-cmu using swbd recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout a04a98c98797b314f2425082bc40261757fd47de
pip install -e .
cd egs2/swbd/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/roshansh_asr_base_sp_conformer_swbd
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Mar 13 17:23:58 EDT 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `a04a98c98797b314f2425082bc40261757fd47de`
- Commit date: `Thu Mar 3 16:09:41 2022 -0500`
## roshansh_asr_base_sp_conformer_swbd
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000/hyp.callhm.ctm.filt.sys|2628|21594|87.4|9.6|3.0|2.0|14.6|49.7|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000/hyp.ctm.filt.sys|4459|42989|90.5|7.0|2.5|1.5|10.9|44.7|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000/hyp.swbd.ctm.filt.sys|1831|21395|93.7|4.3|2.0|0.9|7.2|37.7|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000/hyp.callhm.ctm.filt.sys|2628|21594|88.0|8.9|3.1|2.0|14.0|48.0|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000/hyp.ctm.filt.sys|4459|42989|91.0|6.5|2.5|1.4|10.4|43.0|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000/hyp.swbd.ctm.filt.sys|1831|21395|94.0|4.0|2.0|0.9|6.8|35.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000|4458|181952|92.3|3.7|4.0|11.9|19.5|69.9|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000|4458|181952|92.3|3.7|4.1|11.6|19.3|69.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/eval2000|4458|47302|81.7|13.5|4.8|16.7|34.9|69.9|
|decode_lm_lm_lm_base_lm_transformer_valid.loss.ave_asr_model_valid.acc.ave_10best/eval2000|4458|47302|81.9|13.1|5.0|16.4|34.5|69.1|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_confformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_base_sp_conformer
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 52583
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 150
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 3000
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
detect_anomaly: false
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 75000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_base_sp/train/speech_shape
- exp/asr_stats_base_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_base_sp/valid/speech_shape
- exp/asr_stats_base_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
- 800
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/fbank_pitch/train_nodup_sp/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/train_nodup_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/fbank_pitch/train_dev/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.006
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁i
- s
- ''''
- ▁and
- ▁the
- ▁you
- ▁that
- ▁a
- ▁it
- ▁uh
- ▁to
- t
- ▁of
- ▁know
- ▁they
- '-'
- ▁in
- ▁we
- ']'
- ▁[
- ▁yeah
- ▁have
- ▁but
- ▁so
- ▁was
- ▁like
- re
- m
- ▁um
- ▁just
- ▁well
- ▁do
- ▁for
- d
- ▁think
- ing
- ▁don
- ▁is
- ▁there
- ▁or
- ▁on
- ▁be
- noise
- ▁what
- laughter
- ▁oh
- ▁my
- ed
- ve
- ▁not
- ▁really
- ▁with
- n
- ▁he
- ▁one
- ▁if
- ▁are
- ▁all
- ▁get
- ▁right
- ▁about
- ▁can
- ▁because
- ▁out
- ▁had
- ▁up
- ▁them
- ▁lot
- a
- v
- ▁at
- ▁this
- ▁would
- ▁when
- ▁go
- ▁some
- p
- i
- r
- er
- ▁people
- ▁no
- ▁mean
- ▁kind
- ▁then
- o
- ▁good
- ▁now
- ▁me
- ▁got
- e
- ▁time
- ll
- ▁as
- ▁she
- ▁going
- y
- ▁see
- ▁more
- ▁were
- ▁been
- ▁from
- ▁too
- ▁an
- ▁things
- ly
- ▁how
- ▁something
- c
- ▁your
- b
- ▁where
- ▁much
- u
- '._'
- ▁guess
- ▁little
- g
- ▁here
- .
- ▁thing
- ▁our
- le
- ocalized
- ▁very
- ▁did
- ▁their
- ▁other
- ▁work
- ▁could
- ▁okay
- l
- in
- ▁even
- ▁t
- al
- ▁two
- huh
- ▁way
- or
- ▁say
- f
- ▁has
- ▁any
- ▁s
- ▁years
- ▁want
- ▁back
- ▁down
- ▁those
- ▁who
- ▁pretty
- ▁probably
- ▁home
- ▁didn
- ▁real
- ▁year
- ▁take
- ▁over
- ▁yes
- ▁than
- ▁re
- ▁sure
- ▁into
- ar
- hum
- ▁school
- ▁put
- ▁stuff
- an
- ▁make
- ▁kids
- ▁her
- ▁by
- ▁said
- ▁never
- w
- ▁c
- ▁which
- ▁off
- k
- ▁went
- ic
- ▁f
- ▁only
- ▁big
- ▁car
- ▁always
- ▁these
- ▁around
- ▁money
- ▁day
- ▁anything
- ▁three
- ▁nice
- ▁doing
- ▁need
- ▁come
- ▁actually
- ▁will
- ▁maybe
- ▁care
- ▁him
- h
- ent
- en
- ▁still
- ▁should
- ▁new
- ▁used
- ▁five
- 'on'
- ch
- ion
- ▁long
- ▁sort
- ▁his
- th
- ter
- ▁old
- ▁most
- ▁house
- ▁bit
- ▁e
- ▁every
- ▁different
- ▁last
- ▁use
- ▁let
- il
- es
- it
- ▁many
- ▁us
- ▁look
- ▁course
- ▁getting
- ▁true
- ▁everything
- ▁feel
- ▁first
- ck
- ▁part
- ▁does
- ▁pay
- ▁great
- ▁hard
- ▁same
- ▁thought
- ▁de
- ▁problem
- ▁also
- ▁keep
- ers
- at
- ▁through
- ▁doesn
- ▁children
- ▁four
- ▁find
- ▁done
- ment
- ▁th
- ies
- ur
- ▁before
- ▁far
- ▁though
- ▁w
- ▁area
- ate
- ▁haven
- ▁o
- ▁ever
- ▁p
- ▁
- ▁being
- ▁family
- ▁bad
- ▁seems
- ation
- ▁d
- ▁live
- ▁whole
- ▁fact
- ▁own
- se
- ▁why
- ▁b
- ▁play
- ▁talking
- ▁tell
- ▁better
- ▁interesting
- ▁another
- ▁place
- ▁try
- ▁trying
- ▁huh
- ▁ten
- te
- ▁twenty
- ▁else
- ol
- ▁watch
- ▁read
- ▁type
- ro
- ▁quite
- ▁job
- ▁hundred
- ▁high
- ▁call
- ▁con
- ▁ago
- ▁after
- ▁give
- ▁couple
- ▁enough
- ▁whatever
- ke
- is
- id
- ▁either
- ▁start
- ▁having
- ▁texas
- el
- ▁somebody
- ▁husband
- ▁sometimes
- ▁dollars
- ir
- ow
- ▁usually
- ▁show
- ▁help
- ▁while
- ▁few
- ▁away
- ive
- ▁se
- ▁college
- ▁y
- ▁system
- ▁might
- ▁mo
- ▁co
- ▁heard
- ▁ma
- us
- ▁person
- ▁once
- ▁made
- ▁point
- ▁six
- ce
- ▁n
- ▁fun
- ra
- ▁week
- ▁pa
- ▁buy
- ▁seen
- ▁state
- ▁anyway
- ▁again
- ▁love
- ▁gonna
- ▁dallas
- ne
- ▁started
- ▁exactly
- ▁pro
- ▁country
- ▁life
- ▁enjoy
- ▁everybody
- ▁ha
- ▁talk
- ▁lo
- ▁v
- ▁night
- ▁able
- ▁may
- ▁stay
- ▁remember
- est
- ▁news
- ▁sa
- ▁k
- ▁came
- ▁hear
- ▁end
- able
- ▁least
- ▁working
- et
- ▁un
- ry
- ▁fl
- ▁po
- ▁g
- ▁since
- ▁ra
- ▁change
- ul
- ▁idea
- ▁both
- ▁h
- ▁boy
- ▁agree
- age
- ▁program
- un
- ▁pre
- ▁st
- ▁almost
- ▁dis
- ▁someone
- ▁run
- ▁di
- um
- z
- ▁ba
- ▁ho
- ist
- ▁la
- ▁dog
- ▁m
- ▁reason
- ▁took
- ▁believe
- ant
- ▁bye
- ▁company
- ▁eight
- ▁times
- ▁half
- ▁wife
- ▁isn
- ▁paper
- ▁deal
- ▁goes
- ▁hand
- ▁guy
- ▁called
- ▁next
- ▁close
- ▁month
- ▁thirty
- ▁wanted
- ▁thousand
- ▁yet
- ▁mi
- ▁understand
- ▁bu
- tion
- ▁cost
- ▁pick
- ge
- am
- ▁drive
- ▁sp
- ▁looking
- ▁government
- ▁child
- ▁crime
- ac
- ▁tax
- ▁li
- ▁spend
- lo
- ee
- ▁women
- ▁parents
- ▁bo
- ▁days
- ▁especially
- ▁wow
- ▁saying
- ▁cut
- ▁name
- ▁eat
- ▁gone
- ▁whether
- ▁happen
- ity
- ▁less
- ated
- ▁small
- ▁saw
- ▁sounds
- ▁supposed
- ▁number
- ▁world
- ▁mother
- ▁music
- ▁set
- ▁such
- ▁until
- ▁hi
- ▁movie
- ru
- ▁credit
- ▁bought
- ▁turn
- ▁city
- ▁myself
- ▁ga
- ▁walk
- ▁food
- if
- ▁le
- ▁seem
- ▁problems
- ting
- ▁computer
- ▁makes
- ▁am
- ▁man
- ▁found
- ▁percent
- ▁together
- ▁sit
- ▁ro
- ▁coming
- ure
- ▁basically
- ▁young
- ▁best
- ▁sc
- ▁listen
- ▁hum
- ▁water
- ▁check
- ance
- ▁son
- ▁business
- ▁u
- co
- ▁comp
- ▁seven
- ▁summer
- ▁each
- ▁situation
- ie
- ian
- ▁war
- ▁j
- ▁worked
- x
- ward
- ▁side
- ▁definitely
- ▁certain
- ▁game
- ▁wh
- ▁won
- ▁cl
- ia
- ▁wonderful
- ▁wonder
- ▁matter
- ▁public
- ▁ex
- op
- ▁lived
- ▁fifty
- ▁certainly
- ▁cat
- ▁cook
- ▁funny
- ▁air
- ty
- ▁age
- ▁room
- ▁nothing
- ▁class
- ▁health
- ▁ch
- ▁sh
- ▁large
- ig
- na
- ▁r
- ▁fa
- ▁gotten
- ▁ju
- ▁mine
- ▁town
- ▁per
- ▁months
- ma
- ▁ti
- ide
- ▁test
- ▁places
- ▁yep
- ▁comes
- ▁anymore
- ▁ca
- ▁under
- he
- ▁plan
- ▁vote
- ▁fi
- ▁important
- ▁taking
- ▁da
- ▁daughter
- ▁thinking
- ▁team
- port
- ▁learn
- ▁budget
- ▁american
- ful
- ▁taxes
- de
- ▁hm
- ▁gun
- ▁str
- ▁eighty
- ▁control
- ▁service
- ▁today
- ▁drug
- ▁cars
- ▁paying
- ally
- ▁rather
- ▁neat
- ▁line
- ▁tend
- ▁law
- ▁fr
- tic
- rs
- time
- ▁insurance
- man
- ▁wear
- ▁friends
- ▁outside
- ▁easy
- ▁north
- ▁friend
- ▁during
- und
- ▁l
- ▁card
- ▁nine
- me
- bye
- ▁living
- ▁mind
- ▁involved
- ▁gosh
- ▁moved
- ight
- ▁camping
- ▁several
- ence
- ical
- ▁bring
- ice
- ▁tried
- ▁major
- ▁newspaper
- ▁favorite
- ▁en
- ▁student
- ▁consider
- ▁making
- la
- ▁morning
- ous
- ▁dr
- ph
- ▁question
- ▁between
- ▁jury
- ▁amount
- ▁mar
- ▁ones
- ▁older
- ▁case
- ▁education
- ▁wa
- ▁paid
- ▁ri
- ▁depend
- ish
- ▁bill
- ▁must
- ine
- gg
- ▁happened
- ▁hour
- ▁difference
- ▁du
- ▁hope
- ▁experience
- ▁absolutely
- ▁group
- ▁figure
- ▁anybody
- ▁miles
- ▁aren
- ating
- ▁although
- ▁worth
- ▁su
- ▁ta
- ▁interest
- ▁book
- ▁sha
- ▁forty
- ▁expensive
- ▁second
- ▁without
- up
- ▁gets
- ▁full
- ▁app
- ex
- ▁along
- ▁recently
- ▁paint
- ▁leave
- ▁ru
- all
- ▁weather
- ▁miss
- ▁free
- ▁com
- ▁often
- ▁gra
- ▁minutes
- ition
- ill
- ▁magazine
- ▁wait
- ca
- ▁ahead
- ▁wrong
- ▁hours
- ▁already
- ▁married
- ▁left
- ▁hit
- ▁camp
- ▁fifteen
- ▁pr
- ▁men
- ▁drugs
- ▁rain
- ▁schools
- ious
- ▁fish
- ▁girl
- ick
- ▁office
- ▁weeks
- ▁ski
- ▁middle
- ▁knew
- ▁al
- ▁store
- ▁watching
- ▁cha
- ▁sl
- ▁hot
- ▁running
- ▁yourself
- ▁act
- ▁cold
- ▁price
- ▁lake
- ▁death
- ▁dad
- ▁enjoyed
- ▁benefits
- ▁word
- ▁main
- ▁grow
- ▁recycling
- ▁past
- ▁weekend
- ▁break
- 'no'
- ber
- ▁against
- ▁base
- ▁movies
- ▁mostly
- ial
- ▁guys
- ▁san
- ▁pi
- ay
- ▁sense
- ▁sell
- ▁sister
- ▁thank
- ▁issue
- way
- ▁pet
- ▁throw
- ▁cover
- ary
- ▁baby
- ▁doctor
- ▁local
- ▁difficult
- ▁nursing
- ▁wi
- ▁wanna
- ▁open
- ▁head
- ought
- ▁vacation
- ▁-
- ▁brother
- ▁instead
- ▁kid
- ▁reading
- ▁add
- ▁rest
- ▁qu
- ▁interested
- ▁short
- ▁degree
- ▁charge
- ▁rec
- ▁topic
- ha
- ▁talked
- ▁move
- land
- cy
- ▁trouble
- ▁told
- ▁fairly
- ▁hate
- ▁stand
- do
- ▁unless
- ▁winter
- ▁sta
- ▁twelve
- ▁plano
- ▁wish
- ▁yard
- ▁exercise
- ▁front
- ▁somewhere
- ▁east
- ▁everyone
- ▁regular
- ▁restaurant
- ▁gre
- ▁plant
- ▁catch
- ▁states
- ▁near
- ▁decided
- ▁imagine
- ▁except
- ▁chance
- ▁says
- ▁kill
- ▁california
- ▁looked
- ▁pe
- ling
- ▁ask
- ▁punishment
- ▁pull
- ▁fan
- ▁south
- ▁fine
- ▁hold
- ▁taken
- ▁tra
- ▁garden
- ▁park
- ▁late
- ▁ja
- ▁takes
- ▁street
- ▁door
- ▁fall
- ▁clean
- ▁dress
- ▁mom
- ▁income
- ▁teach
- ▁companies
- ▁works
- ▁ready
- ▁capital
- ▁spent
- ▁recycle
- ▁york
- ▁using
- ▁gu
- ▁tough
- ▁social
- ▁raise
- ▁father
- ▁seventy
- ▁ne
- ▁gr
- ▁realize
- ▁early
- ▁send
- ▁terms
- ▁become
- ▁sixty
- ▁themselves
- ▁level
- ▁phone
- ▁god
- ▁woman
- ▁oil
- ▁rent
- ▁exp
- ▁changed
- ▁felt
- ▁particular
- ▁radio
- ▁christmas
- ▁station
- ▁top
- ▁goodness
- ▁save
- ▁power
- ▁pass
- ▁bar
- ▁die
- ▁society
- ▁choice
- ▁bra
- ▁ge
- ▁personal
- ▁na
- ▁dollar
- ▁playing
- ▁tha
- ▁rate
- ard
- ▁national
- ▁special
- ▁general
- ▁awful
- ible
- ▁cards
- ▁plastic
- ▁visit
- ▁fix
- ▁train
- ▁rid
- ▁dec
- ▁lives
- ▁expect
- ▁support
- ▁wood
- ▁books
- ▁feeling
- ▁pu
- ▁acc
- line
- ▁center
- ized
- ▁putting
- ▁bag
- ness
- ▁growing
- ▁later
- ▁guns
- ton
- ▁land
- ▁travel
- der
- ▁subject
- ▁period
- ▁dinner
- ▁judge
- ▁season
- ▁happens
- ▁machine
- ▁extra
- ▁manage
- ▁gave
- ▁vi
- ▁force
- ▁ph
- ▁lately
- ▁effect
- ner
- ▁starting
- ▁saving
- one
- ▁building
- ▁trip
- ▁sitting
- ▁cases
- ▁bri
- ▁kept
- ▁finally
- ▁fast
- ▁red
- ▁forth
- ▁mu
- ▁stop
- ▁testing
- less
- ▁spring
- ▁cause
- ▁require
- ▁built
- ▁kn
- ▁sw
- ▁murder
- ▁black
- ▁quick
- ▁community
- ▁record
- ▁snow
- gra
- j
- ▁cra
- ▁plus
- ▁bank
- ▁bi
- ▁beautiful
- ▁grade
- ran
- ▁afford
- ▁graduate
- ▁space
- ▁countries
- ▁cats
- ▁fire
- ▁process
- ▁sound
- ▁played
- ▁limit
- ▁white
- ny
- ▁sad
- que
- ▁university
- ▁trans
- ▁mess
- ▁nineteen
- ▁shoot
- ▁nobody
- ▁football
- ▁speak
- ▁story
- ▁light
- ▁longer
- ▁jo
- king
- ▁ninety
- ▁road
- ▁totally
- ▁fishing
- ▁order
- ▁information
- ▁sign
- ▁worry
- ▁spending
- ▁product
- ▁soon
- ▁bother
- ▁across
- ▁write
- ▁bl
- ▁bunch
- ▁pen
- ▁carry
- ▁truck
- ▁hey
- ▁ball
- be
- ▁driving
- ▁needed
- ▁church
- ▁teachers
- ▁low
- ▁amazing
- ▁decision
- ▁hurt
- ▁golf
- ▁sorry
- ite
- ▁younger
- ities
- ▁account
- ▁terrible
- ▁wind
- ▁report
- ▁suppose
- ▁wor
- ▁color
- ▁hunt
- ▁teacher
- ▁concerned
- ▁easier
- ▁strange
- ▁sub
- ▁size
- ▁strong
- ▁safe
- ▁turned
- ▁given
- ▁lost
- ▁families
- ▁happy
- ▁follow
- ▁view
- ▁market
- ▁handle
- ▁ye
- ▁single
- ▁shop
- ▁si
- ▁within
- ze
- ▁television
- ▁cheap
- vis
- ▁rock
- ▁engineer
- ▁individual
- ▁shot
- ▁tri
- ▁criminal
- ▁united
- ▁worse
- ▁trial
- out
- ▁serious
- ▁neighborhood
- ▁brought
- ▁answer
- ▁trees
- mon
- ▁build
- ▁example
- ▁fair
- ▁buying
- ▁caught
- ▁military
- ▁private
- ▁field
- ▁weight
- ▁che
- ship
- ▁crazy
- law
- ▁serve
- ▁decide
- ▁opinion
- ▁medical
- ▁push
- ▁step
- ▁meet
- ▁stick
- clock
- ▁boat
- ▁quality
- ▁win
- ▁green
- ▁term
- ▁lose
- ▁fo
- ▁scary
- ▁ended
- ▁cu
- ▁hospital
- ▁police
- ▁biggest
- ▁apartment
- ▁repair
- ▁finish
- ▁glad
- ▁inside
- ▁learned
- ▁prison
- ▁cri
- ▁familiar
- ▁third
- ▁seemed
- uh
- ▁pan
- ▁mountain
- ▁whenever
- ▁range
- ▁watched
- ▁necessarily
- ▁piece
- ook
- lie
- ▁noticed
- ▁president
- ▁collect
- ▁twice
- ative
- ▁glass
- ▁super
- ▁ran
- ▁fund
- ▁sleep
- ▁lawn
- ▁chi
- ▁behind
- ▁guilty
- ▁drop
- ▁mix
- ▁killed
- ▁court
- ▁completely
- ▁party
- ▁current
- ▁tape
- ▁commit
- ▁benefit
- ▁wall
- ▁particularly
- ▁personally
- ▁anywhere
- ▁project
- ▁clothes
- ▁eighteen
- ▁bigger
- ▁arm
- ▁list
- ▁hang
- ▁warm
- ▁eleven
- ▁research
- uff
- ▁gee
- ▁grand
- ron
- ▁fight
- ▁grass
- ▁teaching
- ▁million
- istic
- ▁trash
- ▁cash
- ▁waiting
- ▁neighbor
- ▁club
- ability
- ▁develop
- ▁unfortunately
- ▁loan
- ▁picked
- ▁star
- ▁generally
- ▁cur
- ▁environment
- ▁minute
- ▁obviously
- ▁protect
- ▁opera
- ize
- ▁anyone
- ▁employee
- ▁houston
- ▁fill
- ▁treat
- ▁baseball
- ▁ground
- ▁video
- ▁pollution
- ▁higher
- ▁available
- ▁generation
- ▁luck
- ▁excuse
- ▁pound
- ▁picture
- ▁roll
- ▁america
- ade
- ▁eventually
- ▁itself
- ▁ooh
- ▁asked
- ▁forget
- ▁surprised
- ▁sun
- ▁federal
- ▁jail
- qui
- ▁pla
- ome
- ▁basic
- ▁extreme
- ▁washington
- ▁attention
- ▁penalty
- ▁sentence
- ▁poor
- ▁mail
- ▁cool
- ▁florida
- ▁clear
- ▁fortunate
- ▁huge
- ▁aware
- ▁lay
- ▁civil
- ▁value
- ▁band
- ▁lead
- ▁parent
- ▁giving
- ▁bottle
- ▁blue
- ▁standard
- ▁rob
- ▁afraid
- ▁bedroom
- ▁comfortable
- ▁separate
- ▁position
- ▁foot
- ▁eye
- ▁art
- ▁europe
- ▁sunday
- ▁cap
- ▁discuss
- ▁provide
- ▁lucky
- ▁sick
- ▁excellent
- ▁utah
- ▁classes
- ▁el
- ▁apparently
- ▁condition
- ▁perhaps
- ▁weapon
- ▁burn
- ▁originally
- q
- ▁self
- ▁beginning
- ▁prefer
- ▁cou
- ▁count
- ▁quit
- ▁typical
- 'off'
- ▁economic
- ▁broke
- ▁average
- ▁smaller
- ▁security
- ▁virginia
- ▁weird
- ▁future
- ▁similar
- ▁hopefully
- ▁economy
- ▁political
- ▁relative
- ▁master
- ▁slow
- ▁financial
- ▁respect
- ▁expense
- ▁accept
- ▁appeal
- ▁normally
- ▁channel
- ▁alone
- ▁human
- ▁union
- ▁privacy
- ▁science
- ▁lawyer
- ▁busy
- ▁window
- ▁automatic
- ▁sold
- ▁county
- ▁advantage
- ▁bush
- ▁direct
- ▁affect
- ▁drink
- ▁van
- ▁entire
- ▁lunch
- ▁switch
- ▁role
- ▁basis
- ▁z
- ▁table
- ▁animal
- ▁basketball
- ▁industry
- ▁peace
- ▁reunion
- ▁blow
- ▁department
- ▁present
- ▁relate
- ▁positive
- ▁article
- ▁heavy
- ▁return
- place
- ▁chicken
- ▁stories
- ▁honest
- ▁somehow
- ▁ride
- ▁history
- ▁saturday
- ▁salary
- ▁member
- ▁payment
- ▁moving
- ▁port
- ▁professional
- ▁mexico
- ▁normal
- ▁lower
- ▁jump
- ▁mow
- ▁rich
- ▁organization
- ▁design
- ▁straight
- ▁draw
- ▁smoke
- ▁possible
- ▁bucks
- ▁debt
- work
- ▁property
- ▁rough
- ▁teenage
- ▁garage
- ▁wild
- ▁scout
- ▁touch
- ash
- ▁suit
- ▁purchase
- ▁retirement
- ▁election
- over
- ▁carolina
- ▁recipe
- ▁track
- ▁entertain
- ▁changing
- ▁grandmother
- ▁thirteen
- ▁instance
- ▁coverage
- ▁attitude
- ▁box
- ▁face
- ▁background
- ▁study
- ▁kidding
- ▁english
- ▁ridiculous
- ▁legal
- ▁tonight
- ▁trade
- ▁random
- ▁john
- ▁coast
- ▁cable
- ▁aluminum
- ▁choose
- ▁cowboy
- ▁colorado
- ▁lu
- ▁continue
- ▁contract
- ▁england
- ▁ticket
- ▁board
- ▁replace
- ▁join
- ▁folks
- ▁sudden
- ▁garbage
- ▁engine
- ▁himself
- ▁instrument
- ▁row
- ▁spot
- ▁activities
- ▁cross
- ▁shape
- ▁scare
- ▁mini
- ▁district
- ▁floor
- ▁taste
- ▁corn
- ▁correct
- ▁opportunity
- ified
- ▁threat
- ▁concern
- ▁popular
- ▁everyday
- ▁adult
- ▁terr
- ▁doubt
- ▁brand
- ▁dead
- ▁defense
- ▁worst
- ▁mexican
- ▁policy
- ▁taught
- ▁vietnam
- ▁pressure
- ▁balance
- ▁body
- ▁cities
- ▁accident
- ▁afternoon
- ▁horrible
- ▁german
- ▁electric
- ▁tired
- ▁everywhere
- ▁opposed
- ▁squa
- ▁bike
- ▁hair
- ▁congress
- ▁foreign
- ▁physical
- ▁yesterday
- ▁increase
- ▁metric
- ▁style
- ▁minor
- ▁majority
- ▁perfect
- ▁responsibility
- ▁common
- ▁central
- ▁improve
- ▁kitchen
- ▁vegetable
- ▁sixteen
- ▁forever
- ▁nurse
- ▁stopped
- ▁tech
- ▁bird
- ▁born
- ▁jeez
- ▁mistake
- ▁richardson
- ▁express
- ▁lady
- ▁russia
- ▁print
- ▁hook
- ▁bottom
- ▁easily
- ▁select
- ▁option
- ▁coach
- ▁direction
- ville
- ▁favor
- ▁pennsylvania
- ▁key
- ject
- ▁effort
- ▁schedule
- ▁execut
- ▁spread
- ▁hobby
- ▁immediate
- ▁simple
- ▁somewhat
- ▁however
- ▁natural
- ▁fourteen
- ▁block
- ▁dump
- ▁perform
- ▁equipment
- ▁complain
- ▁planning
- ▁river
- ▁occasionally
- ▁conversation
- ▁grocery
- ▁fresh
- ▁besides
- ▁friday
- ▁result
- ▁smart
- ▁various
- ▁discover
- ▁storm
- ▁appreciate
- ▁equal
- ▁nowadays
- ▁brown
- ▁elderly
- ▁invasion
- ▁oklahoma
- ▁politics
- ▁maryland
- ▁regard
- ▁upset
- ▁commercial
- ▁incredible
- ▁french
- ▁trust
- ▁seventies
- ▁league
- ▁ourselves
- ▁possibly
- ▁purpose
- ▁network
- ▁stuck
- ▁admit
- ▁sweat
- ▁cousin
- ▁begin
- ably
- ▁elect
- board
- ▁alcohol
- ▁contribut
- ▁solution
- ▁material
- ▁supp
- ▁deep
- ▁specific
- ▁convict
- ▁motor
- ▁tree
- ▁junior
- ▁nature
- ▁oak
- ▁restrict
- ▁mentioned
- ▁shoes
- ▁laugh
- ▁volunteer
- ▁temp
- ▁austin
- ▁prior
- ▁extent
- ▁otherwise
- ▁blood
- ▁deduct
- ▁hobbies
- ▁influence
- ▁writing
- ▁abuse
- ▁soviet
- ▁mental
- ▁awhile
- ▁connect
- ▁western
- ▁italian
- ▁convenient
- ▁language
- ▁recommend
- ▁downtown
- ▁border
- ▁character
- ▁politician
- ▁truth
- ▁pitch
- ▁sixties
- ▁strict
- ▁hello
- ▁chinese
- ▁relax
- ▁wheel
- ▁drove
- ▁access
- ▁cannot
- ▁plenty
- ▁pardon
- ▁model
- ▁visa
- ▁section
- ▁boston
- ▁dirt
- ▁aspect
- ▁electronic
- ▁responsible
- ▁participate
- ▁steak
- ▁profit
- ▁roof
- ▁cabin
- ▁bowl
- ▁japanese
- ▁telephone
- ▁variety
- ▁piano
- ▁broad
- ▁chicago
- ▁citizen
- ▁corps
- ▁assume
- ▁automobile
- ▁crowd
- ▁simply
- ▁technical
- ▁quarter
- ▁wrote
- ▁damage
- ▁dental
- ▁corporation
- ▁honda
- ▁necessary
- ▁traffic
- ▁vehicle
- ▁salad
- ▁southern
- ▁unusual
- '0'
- ▁voting
- ▁screen
- ▁stress
- ▁mandatory
- ▁monday
- ▁secret
- ▁above
- ▁source
- ▁load
- ▁suspect
- ▁license
- ▁population
- ▁subscribe
- ▁atlanta
- ▁draft
- ▁tremendous
- ▁knowledge
- ▁earth
- ▁match
- ▁atmosphere
- ▁democrat
- ▁habit
- ▁edge
- ▁film
- ▁auto
- ▁earlier
- ▁encourage
- ▁exciting
- ▁fellow
- ▁suburb
- ▁became
- ▁shut
- ▁ceiling
- ▁disease
- ▁cheese
- ▁actual
- ▁bathroom
- ▁divorce
- ▁further
- ▁pattern
- ▁practical
- ▁technology
- ▁becoming
- ▁double
- ▁investment
- ▁trend
- ▁dark
- ▁discipline
- ▁occur
- ▁christian
- ▁liberal
- ▁senior
- ▁israel
- ▁scene
- ▁deterrent
- ▁jazz
- ▁suggest
- ▁beyond
- ▁seventeen
- ▁sauce
- ▁interview
- ▁swimming
- ▁stupid
- ▁voice
- ▁pump
- ▁consumer
- ▁independent
- ▁practice
- ▁tomatoes
- ▁outdoor
- ▁blame
- ▁northern
- ▁craft
- ▁republic
- ▁antonio
- ▁written
- ▁tennis
- ▁tune
- ology
- ▁legislat
- ▁finance
- ipped
- ▁adjust
- ▁massachusetts
- ▁successful
- ▁repeat
- ▁versus
- ▁chemical
- ▁milk
- ▁carpet
- ▁horse
- ▁address
- ▁speed
- ▁apart
- ▁occasion
- ▁belong
- ▁francisco
- ▁grandchildren
- ▁quiet
- ▁holiday
- ▁register
- ▁resource
- ▁mechanic
- ▁staff
- ▁steal
- ▁maintain
- ▁toyota
- ▁psych
- ▁casual
- ▁backyard
- ▁receive
- ▁chose
- ▁energy
- ▁author
- ▁bread
- ▁focus
- ▁iraq
- ▁journal
- ▁professor
- ▁sentencing
- ▁explain
- ▁knock
- ▁series
- ficial
- ▁amazed
- ▁baltimore
- ▁facilities
- ▁neither
- ▁potato
- ▁advance
- ▁gulf
- ▁sweet
- hold
- ▁candidate
- ▁pittsburgh
- ▁garland
- ▁hung
- ▁babies
- ▁involve
- ▁spec
- ▁concept
- ▁convince
- ▁impressed
- ▁leaving
- ▁primarily
- ▁produce
- ▁victim
- ▁herself
- ▁shock
- ▁convert
- ▁juries
- ▁loose
- wood
- ▁represent
- ▁georgia
- ▁kindergarten
- ▁progress
- ▁yellow
- ▁stock
- ▁junk
- ▁surprise
- ▁circumstances
- ▁dangerous
- ▁illegal
- ▁concert
- ▁shift
- ▁gang
- ▁advertise
- ▁disappoint
- ▁educate
- ▁female
- ▁minimum
- ▁establish
- ▁fantastic
- ▁welfare
- house
- ▁extend
- ▁birthday
- ▁cruise
- ▁culture
- ▁elementary
- ▁employer
- ▁incentive
- ▁relationship
- ▁speech
- ▁reduce
- ▁smell
- ▁carrie
- ▁original
- ▁august
- ▁grandparents
- ▁preschool
- ▁quarterback
- ▁violent
- ▁barbecue
- ▁fifties
- ▁rabbit
- ▁freedom
- ▁parole
- ▁fascinat
- ▁emotion
- ▁innocent
- ▁perspective
- ▁temperature
- ▁attract
- apped
- ▁pollut
- ▁negative
- ▁wisconsin
- ▁contact
- ▁impact
- ▁jersey
- ▁recognize
- ▁conscious
- ▁detail
- ▁complete
- ▁claim
- ▁creek
- ▁attack
- ▁continu
- ▁enforce
- '1'
- ▁attorney
- ▁campaign
- ▁conservative
- ▁excited
- ▁canada
- ▁split
- ▁multi
- ▁challenge
- ▁evidence
- ▁maintenance
- ▁pepper
- ▁release
- ▁frame
- employed
- ▁include
- ▁paycheck
- ▁raleigh
- '4'
- '2'
- '&'
- '6'
- '8'
- '9'
- '7'
- '5'
- '3'
- /
- '['
- _
- <sos/eos>
init: null
input_size: 83
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram2000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: null
frontend_conf: {}
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_base_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"CRAFT"
] | Non_BioNLP |
ntc-ai/SDXL-LoRA-slider.person-wearing-headphones | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,702,277,096,000 | 2024-02-06T00:29:40 | 11 | 0 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/person wearing headphones_17_3.0.png
widget:
- text: person wearing headphones
output:
url: images/person wearing headphones_17_3.0.png
- text: person wearing headphones
output:
url: images/person wearing headphones_19_3.0.png
- text: person wearing headphones
output:
url: images/person wearing headphones_20_3.0.png
- text: person wearing headphones
output:
url: images/person wearing headphones_21_3.0.png
- text: person wearing headphones
output:
url: images/person wearing headphones_22_3.0.png
inference: false
instance_prompt: person wearing headphones
---
# ntcai.xyz slider - person wearing headphones (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/person wearing headphones_17_-3.0.png" width=256 height=256 /> | <img src="images/person wearing headphones_17_0.0.png" width=256 height=256 /> | <img src="images/person wearing headphones_17_3.0.png" width=256 height=256 /> |
| <img src="images/person wearing headphones_19_-3.0.png" width=256 height=256 /> | <img src="images/person wearing headphones_19_0.0.png" width=256 height=256 /> | <img src="images/person wearing headphones_19_3.0.png" width=256 height=256 /> |
| <img src="images/person wearing headphones_20_-3.0.png" width=256 height=256 /> | <img src="images/person wearing headphones_20_0.0.png" width=256 height=256 /> | <img src="images/person wearing headphones_20_3.0.png" width=256 height=256 /> |
See more at [https://sliders.ntcai.xyz/sliders/app/loras/db3ca807-26d4-4bf6-b3e0-77c3d2d8a566](https://sliders.ntcai.xyz/sliders/app/loras/db3ca807-26d4-4bf6-b3e0-77c3d2d8a566)
## Download
Weights for this model are available in Safetensors format.
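As an optional convenience, the snippet below sketches how the weights could be fetched directly with `huggingface_hub`; the repository id and file name are the ones used in the diffusers example further down this card.
```python
# Hedged sketch: fetch the LoRA weights file directly with huggingface_hub.
# The repo id and file name come from the diffusers example below.
from huggingface_hub import hf_hub_download

lora_path = hf_hub_download(
    repo_id="ntc-ai/SDXL-LoRA-slider.person-wearing-headphones",
    filename="person wearing headphones.safetensors",
)
print(lora_path)  # local path to the downloaded .safetensors file
```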
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
person wearing headphones
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.person-wearing-headphones', weight_name='person wearing headphones.safetensors', adapter_name="person wearing headphones")
# Activate the LoRA
pipe.set_adapters(["person wearing headphones"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, person wearing headphones"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1,496 unique and diverse LoRAs along with more than 14,600 slider merges, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful <strong>NTC Slider Factory</strong> LoRA creator, allowing you to craft your own custom LoRAs and merges, opening up endless possibilities.
Your support on Patreon will allow us to continue developing new models and tools.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
YiDuo1999/Llama-3-Physician-8B-Instruct | YiDuo1999 | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,718,888,677,000 | 2024-07-02T10:05:43 | 46 | 5 | ---
license: llama3
---
The official instruct model weights for "Efficient Continual Pre-training by Mitigating the Stability Gap".
## Introduction
This repo contains Llama-3-Physician-8B-Instruct, a medical language model with 8 billion parameters. The model builds on Llama 3: it was first continually pretrained on a high-quality medical sub-corpus from the RefinedWeb dataset and then tuned with diverse medical and general instructions. We also apply the three strategies from the paper to mitigate the stability gap during continual pretraining and instruction tuning, which boosts the model's medical task performance and reduces computational cost.
## 💻 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
model_name = "YiDuo1999/Llama-3-Physician-8B-Instruct"
device_map = 'auto'
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, use_cache=False, device_map=device_map)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
def askme(question):
sys_message = '''
You are an AI Medical Assistant trained on a vast dataset of health information. Please be thorough and
provide an informative answer. If you don't know the answer to a specific medical inquiry, advise seeking professional help.
'''
# Create messages structured for the chat template
messages = [{"role": "system", "content": sys_message}, {"role": "user", "content": question}]
# Applying chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100, use_cache=True)
# Extract and return the generated text, removing the prompt
response_text = tokenizer.batch_decode(outputs)[0].strip()
answer = response_text.split('<|im_start|>assistant')[-1].strip()
return answer
# Example usage
# - Context: First describe your problem.
# - Question: Then make the question.
question = '''What is HIV?'''
print(askme(question))
```
The answer produced will look like this:
```
HIV, or Human Immunodeficiency Virus, is a retrovirus that primarily infects cells of the human immune system, particularly CD4+ T cells, which are crucial to the body's ability to fight off infection. HIV infection can lead to AIDS, or Acquired Immune Deficiency Syndrome, a condition that causes severe damage to the immune system and makes individuals more susceptible to life-threatening infections. HIV
is transmitted through sexual contact, sharing needles, or through mother-to-child transmission during pregnancy.
```
## 🏆 Evaluation
For question-answering tasks, we have:
| Model | MMLU-Medical | PubMedQA | MedMCQA | MedQA-4-Option | Avg |
|:--------------------------------|:--------------|:----------|:---------|:----------------|:------|
| Mistral-7B-instruct | 55.8 | 17.8 | 40.2 | 41.1 | 37.5 |
| Zephyr-7B-instruct-β | 63.3 | 46.0 | 43.0 | 48.5 | 48.7 |
| PMC-Llama-7B | 59.7 | 59.2 | 57.6 | 49.2 | 53.6 |
| Medalpaca-13B | 55.2 | 50.4 | 21.2 | 20.2 | 36.7 |
| AlpaCare-13B | 60.2 | 53.8 | 38.5 | 30.4 | 45.7 |
| BioMedGPT-LM 7B | 52.0 | 58.6 | 34.9 | 39.3 | 46.2 |
| Me-Llama-13B | - | 70.0 | 44.9 | 42.7 | - |
| Llama-3-8B instruct | 82.0 | 74.6 | 57.1 | 60.3 | 68.5 |
| JSL-Med-Sft-Llama-3-8B | 83.0 | 75.4 | 57.5 | 74.8 | 72.7 |
| GPT-3.5-turbo-1106 | 74.0 | 72.6 | 34.9 | 39.3 | 60.6 |
| GPT-4 | 85.5 | 69.2 | 69.5 | 83.9 | 77.0 |
| Llama-3-physician-8B instruct (ours) | 80.0 | 76.0 | 80.2 | 60.3 | 74.1 |
For medical classification, relation extraction, natural language inference, and summarization tasks, we have:
| Task type | Classification | Relation extraction | Natural Language Inference | Summarization |
|:--------------------------------|:----------------|:----------------------|:----------------------------|:---------------|
| Datasets | HOC | DDI-2013 | BioNLI | MIMIC-CXR |
| Mistral-7B-instruct | 35.8 | 14.1 | 16.7 | 12.5 |
| Zephyr-7B-instruct-β | 26.1 | 19.4 | 19.9 | 10.5 |
| PMC-Llama-7B | 18.4 | 14.7 | 15.9 | 13.9 |
| Medalpaca-13B | 24.6 | 5.8 | 16.4 | 1.0 |
| AlpaCare-13B | 26.7 | 11.0 | 17.0 | 13.4 |
| BioMedGPT-LM 7B | 23.4 | 15.5 | 17.9 | 6.2 |
| Me-Llama-13B | 33.5 | 21.4 | 19.5 | 40.0 |
| JSL-Med-Sft-Llama-3-8B | 25.6 | 19.7 | 16.6 | 13.8 |
| Llama-3-8B instruct | 31.0 | 15.1 | 18.8 | 10.3 |
| GPT-3.5-turbo-1106 | 54.5 | 21.6 | 31.7 | 13.5 |
| GPT-4 | 60.2 | 29.2 | 57.8 | 15.2 |
| Llama-3-physician-8B instruct (ours) | 78.9 | 33.6 | 76.2 | 37.7 |
## Citation
```
@inproceedings{Guo2024EfficientCP,
title={Efficient Continual Pre-training by Mitigating the Stability Gap},
author={Yiduo Guo and Jie Fu and Huishuai Zhang and Dongyan Zhao and Yikang Shen},
year={2024},
url={https://api.semanticscholar.org/CorpusID:270688100}
}
``` | [
"MEDQA",
"PUBMEDQA"
] | BioNLP |
LoneStriker/OpenBioLLM-Llama3-8B-8.0bpw-h8-exl2 | LoneStriker | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | 1,714,161,140,000 | 2024-04-26T19:56:23 | 10 | 0 | ---
base_model: meta-llama/Meta-Llama-3-8B
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-8B
results: []
---
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=False,  # greedy decoding; equivalent to the recommended temperature = 0
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal, Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems] (https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | [
"MEDQA",
"PUBMEDQA"
] | BioNLP |
opensearch-project/opensearch-neural-sparse-encoding-v2-distill | opensearch-project | fill-mask | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"learned sparse",
"opensearch",
"retrieval",
"passage-retrieval",
"query-expansion",
"document-expansion",
"bag-of-words",
"en",
"arxiv:2411.04403",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,721,201,535,000 | 2024-11-15T07:13:42 | 4,103 | 5 | ---
language: en
license: apache-2.0
tags:
- learned sparse
- opensearch
- transformers
- retrieval
- passage-retrieval
- query-expansion
- document-expansion
- bag-of-words
---
# opensearch-neural-sparse-encoding-v2-distill
## Select the model
The model should be selected by considering search relevance, model inference speed, and retrieval efficiency (FLOPS). We benchmark the models' **zero-shot performance** on a subset of the BEIR benchmark: TrecCovid, NFCorpus, NQ, HotpotQA, FiQA, ArguAna, Touche, DBPedia, SCIDOCS, FEVER, Climate FEVER, SciFact, and Quora.
Overall, the v2 series of models have better search relevance, efficiency and inference speed than the v1 series. The specific advantages and disadvantages may vary across different datasets.
| Model | Inference-free for Retrieval | Model Parameters | AVG NDCG@10 | AVG FLOPS |
|-------|------------------------------|------------------|-------------|-----------|
| [opensearch-neural-sparse-encoding-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v1) | | 133M | 0.524 | 11.4 |
| [opensearch-neural-sparse-encoding-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v2-distill) | | 67M | 0.528 | 8.3 |
| [opensearch-neural-sparse-encoding-doc-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v1) | ✔️ | 133M | 0.490 | 2.3 |
| [opensearch-neural-sparse-encoding-doc-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill) | ✔️ | 67M | 0.504 | 1.8 |
| [opensearch-neural-sparse-encoding-doc-v2-mini](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-mini) | ✔️ | 23M | 0.497 | 1.7 |
## Overview
- **Paper**: [Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers](https://arxiv.org/abs/2411.04403)
- **Fine-tuning sample**: [opensearch-sparse-model-tuning-sample](https://github.com/zhichao-aws/opensearch-sparse-model-tuning-sample)
This is a learned sparse retrieval model. It encodes queries and documents into 30522-dimensional **sparse vectors**. Each non-zero dimension corresponds to a token in the vocabulary, and its weight reflects the importance of that token.
The training data includes MS MARCO, eli5_question_answer, squad_pairs, WikiAnswers, yahoo_answers_title_question, gooaq_pairs, stackexchange_duplicate_questions_body_body, wikihow, S2ORC_title_abstract, stackexchange_duplicate_questions_title-body_title-body, yahoo_answers_question_answer, searchQA_top5_snippets, stackexchange_duplicate_questions_title_title, and yahoo_answers_title_answer.
The OpenSearch neural sparse feature supports learned sparse retrieval with a Lucene inverted index (see https://opensearch.org/docs/latest/query-dsl/specialized/neural-sparse/). Indexing and search can be performed with the OpenSearch high-level API; a minimal query sketch follows.
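As a rough illustration only, the sketch below shows what a `neural_sparse` query could look like from Python with the `opensearch-py` client. The index name, the `rank_features` field name, and the model id are placeholders (assumptions, not values from this card); consult the linked OpenSearch documentation for the full ingest-pipeline and index setup.
```python
# Hedged sketch of a neural sparse search against an OpenSearch cluster.
# Assumptions (placeholders, not from this card): an index "my-nlp-index" whose
# documents carry a rank_features field "passage_embedding" produced by an ingest
# pipeline, and a deployed sparse encoding model with id "<model_id>".
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

query = {
    "query": {
        "neural_sparse": {
            "passage_embedding": {
                "query_text": "What's the weather in ny now?",
                "model_id": "<model_id>",
            }
        }
    }
}

response = client.search(index="my-nlp-index", body=query)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("passage_text"))
```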
## Usage (HuggingFace)
This model is designed to run inside an OpenSearch cluster, but you can also use it outside the cluster with the Hugging Face models API.
```python
import itertools
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer
# get sparse vector from dense vectors with shape batch_size * seq_len * vocab_size
def get_sparse_vector(feature, output):
values, _ = torch.max(output*feature["attention_mask"].unsqueeze(-1), dim=1)
values = torch.log(1 + torch.relu(values))
values[:,special_token_ids] = 0
return values
# transform the sparse vector to a dict of (token, weight)
def transform_sparse_vector_to_dict(sparse_vector):
sample_indices,token_indices=torch.nonzero(sparse_vector,as_tuple=True)
non_zero_values = sparse_vector[(sample_indices,token_indices)].tolist()
number_of_tokens_for_each_sample = torch.bincount(sample_indices).cpu().tolist()
tokens = [transform_sparse_vector_to_dict.id_to_token[_id] for _id in token_indices.tolist()]
output = []
end_idxs = list(itertools.accumulate([0]+number_of_tokens_for_each_sample))
for i in range(len(end_idxs)-1):
token_strings = tokens[end_idxs[i]:end_idxs[i+1]]
weights = non_zero_values[end_idxs[i]:end_idxs[i+1]]
output.append(dict(zip(token_strings, weights)))
return output
# load the model
model = AutoModelForMaskedLM.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")
tokenizer = AutoTokenizer.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")
# set the special tokens and id_to_token transform for post-process
special_token_ids = [tokenizer.vocab[token] for token in tokenizer.special_tokens_map.values()]
get_sparse_vector.special_token_ids = special_token_ids
id_to_token = ["" for i in range(tokenizer.vocab_size)]
for token, _id in tokenizer.vocab.items():
id_to_token[_id] = token
transform_sparse_vector_to_dict.id_to_token = id_to_token
query = "What's the weather in ny now?"
document = "Currently New York is rainy."
# encode the query & document
feature = tokenizer([query, document], padding=True, truncation=True, return_tensors='pt', return_token_type_ids=False)
output = model(**feature)[0]
sparse_vector = get_sparse_vector(feature, output)
# get similarity score
sim_score = torch.matmul(sparse_vector[0],sparse_vector[1])
print(sim_score) # tensor(38.6112, grad_fn=<DotBackward0>)
query_token_weight, document_query_token_weight = transform_sparse_vector_to_dict(sparse_vector)
for token in sorted(query_token_weight, key=lambda x:query_token_weight[x], reverse=True):
if token in document_query_token_weight:
print("score in query: %.4f, score in document: %.4f, token: %s"%(query_token_weight[token],document_query_token_weight[token],token))
# result:
# score in query: 2.7273, score in document: 2.9088, token: york
# score in query: 2.5734, score in document: 0.9208, token: now
# score in query: 2.3895, score in document: 1.7237, token: ny
# score in query: 2.2184, score in document: 1.2368, token: weather
# score in query: 1.8693, score in document: 1.4146, token: current
# score in query: 1.5887, score in document: 0.7450, token: today
# score in query: 1.4704, score in document: 0.9247, token: sunny
# score in query: 1.4374, score in document: 1.9737, token: nyc
# score in query: 1.4347, score in document: 1.6019, token: currently
# score in query: 1.1605, score in document: 0.9794, token: climate
# score in query: 1.0944, score in document: 0.7141, token: upstate
# score in query: 1.0471, score in document: 0.5519, token: forecast
# score in query: 0.9268, score in document: 0.6692, token: verve
# score in query: 0.9126, score in document: 0.4486, token: huh
# score in query: 0.8960, score in document: 0.7706, token: greene
# score in query: 0.8779, score in document: 0.7120, token: picturesque
# score in query: 0.8471, score in document: 0.4183, token: pleasantly
# score in query: 0.8079, score in document: 0.2140, token: windy
# score in query: 0.7537, score in document: 0.4925, token: favorable
# score in query: 0.7519, score in document: 2.1456, token: rain
# score in query: 0.7277, score in document: 0.3818, token: skies
# score in query: 0.6995, score in document: 0.8593, token: lena
# score in query: 0.6895, score in document: 0.2410, token: sunshine
# score in query: 0.6621, score in document: 0.3016, token: johnny
# score in query: 0.6604, score in document: 0.1933, token: skyline
# score in query: 0.6117, score in document: 0.2197, token: sasha
# score in query: 0.5962, score in document: 0.0414, token: vibe
# score in query: 0.5381, score in document: 0.7560, token: hardly
# score in query: 0.4582, score in document: 0.4243, token: prevailing
# score in query: 0.4539, score in document: 0.5073, token: unpredictable
# score in query: 0.4350, score in document: 0.8463, token: presently
# score in query: 0.3674, score in document: 0.2496, token: hail
# score in query: 0.3324, score in document: 0.5506, token: shivered
# score in query: 0.3281, score in document: 0.1964, token: wind
# score in query: 0.3052, score in document: 0.5785, token: rudy
# score in query: 0.2797, score in document: 0.0357, token: looming
# score in query: 0.2712, score in document: 0.0870, token: atmospheric
# score in query: 0.2471, score in document: 0.3490, token: vicky
# score in query: 0.2247, score in document: 0.2383, token: sandy
# score in query: 0.2154, score in document: 0.5737, token: crowded
# score in query: 0.1723, score in document: 0.1857, token: chilly
# score in query: 0.1700, score in document: 0.4110, token: blizzard
# score in query: 0.1183, score in document: 0.0613, token: ##cken
# score in query: 0.0923, score in document: 0.6363, token: unrest
# score in query: 0.0624, score in document: 0.2127, token: russ
# score in query: 0.0558, score in document: 0.5542, token: blackout
# score in query: 0.0549, score in document: 0.1589, token: kahn
# score in query: 0.0160, score in document: 0.0566, token: 2020
# score in query: 0.0125, score in document: 0.3753, token: nighttime
```
The code sample above shows an example of neural sparse search. Although the original query and document share no overlapping tokens, the model still matches them well.
## Detailed Search Relevance
<div style="overflow-x: auto;">
| Model | Average | Trec Covid | NFCorpus | NQ | HotpotQA | FiQA | ArguAna | Touche | DBPedia | SCIDOCS | FEVER | Climate FEVER | SciFact | Quora |
|-------|---------|------------|----------|----|----------|------|---------|--------|---------|---------|-------|---------------|---------|-------|
| [opensearch-neural-sparse-encoding-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v1) | 0.524 | 0.771 | 0.360 | 0.553 | 0.697 | 0.376 | 0.508 | 0.278 | 0.447 | 0.164 | 0.821 | 0.263 | 0.723 | 0.856 |
| [opensearch-neural-sparse-encoding-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-v2-distill) | 0.528 | 0.775 | 0.347 | 0.561 | 0.685 | 0.374 | 0.551 | 0.278 | 0.435 | 0.173 | 0.849 | 0.249 | 0.722 | 0.863 |
| [opensearch-neural-sparse-encoding-doc-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v1) | 0.490 | 0.707 | 0.352 | 0.521 | 0.677 | 0.344 | 0.461 | 0.294 | 0.412 | 0.154 | 0.743 | 0.202 | 0.716 | 0.788 |
| [opensearch-neural-sparse-encoding-doc-v2-distill](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill) | 0.504 | 0.690 | 0.343 | 0.528 | 0.675 | 0.357 | 0.496 | 0.287 | 0.418 | 0.166 | 0.818 | 0.224 | 0.715 | 0.841 |
| [opensearch-neural-sparse-encoding-doc-v2-mini](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-doc-v2-mini) | 0.497 | 0.709 | 0.336 | 0.510 | 0.666 | 0.338 | 0.480 | 0.285 | 0.407 | 0.164 | 0.812 | 0.216 | 0.699 | 0.837 |
</div>
## License
This project is licensed under the [Apache v2.0 License](https://github.com/opensearch-project/neural-search/blob/main/LICENSE).
## Copyright
Copyright OpenSearch Contributors. See [NOTICE](https://github.com/opensearch-project/neural-search/blob/main/NOTICE) for details. | [
"SCIFACT"
] | Non_BioNLP |
daijin219/MLMA_lab9_task2 | daijin219 | token-classification | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"dataset:ncbi_disease",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,681,561,187,000 | 2023-04-15T14:33:32 | 14 | 0 | ---
datasets:
- ncbi_disease
license: mit
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: MLMA_lab9_task2
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: ncbi_disease
type: ncbi_disease
config: ncbi_disease
split: validation
args: ncbi_disease
metrics:
- type: precision
value: 0.015873015873015872
name: Precision
- type: recall
value: 0.14866581956797967
name: Recall
- type: f1
value: 0.028683500858053445
name: F1
- type: accuracy
value: 0.6365342039100904
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MLMA_lab9_task2
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2509
- Precision: 0.0159
- Recall: 0.1487
- F1: 0.0287
- Accuracy: 0.6365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
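For illustration only, these settings roughly correspond to a 🤗 Transformers `TrainingArguments` configuration like the sketch below; the actual training script is not included in this card, so treat the mapping as an assumption.
```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above onto TrainingArguments;
# the original training script is not published with this model card.
training_args = TrainingArguments(
    output_dir="MLMA_lab9_task2",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    adam_beta1=0.9,       # Adam betas and epsilon as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```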
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.153 | 1.0 | 680 | 1.0671 | 0.0122 | 0.1258 | 0.0223 | 0.5452 |
| 1.02 | 2.0 | 1360 | 1.0418 | 0.0098 | 0.0203 | 0.0132 | 0.6791 |
| 0.9552 | 3.0 | 2040 | 1.0269 | 0.0135 | 0.1677 | 0.0250 | 0.5282 |
| 0.926 | 4.0 | 2720 | 1.0390 | 0.0143 | 0.0940 | 0.0248 | 0.6686 |
| 0.9156 | 5.0 | 3400 | 1.0200 | 0.0135 | 0.2046 | 0.0253 | 0.4679 |
| 0.8791 | 6.0 | 4080 | 1.0543 | 0.0131 | 0.2745 | 0.0250 | 0.3149 |
| 0.8672 | 7.0 | 4760 | 1.0545 | 0.0141 | 0.2732 | 0.0267 | 0.3471 |
| 0.8627 | 8.0 | 5440 | 1.0734 | 0.0145 | 0.0826 | 0.0246 | 0.7220 |
| 0.8375 | 9.0 | 6120 | 1.1068 | 0.0156 | 0.1410 | 0.0281 | 0.6451 |
| 0.8235 | 10.0 | 6800 | 1.0796 | 0.0158 | 0.1537 | 0.0286 | 0.6210 |
| 0.8157 | 11.0 | 7480 | 1.1476 | 0.0143 | 0.1690 | 0.0263 | 0.5737 |
| 0.7957 | 12.0 | 8160 | 1.1369 | 0.0143 | 0.1525 | 0.0262 | 0.6155 |
| 0.7937 | 13.0 | 8840 | 1.2014 | 0.0151 | 0.1741 | 0.0278 | 0.5808 |
| 0.7765 | 14.0 | 9520 | 1.2249 | 0.0160 | 0.1449 | 0.0289 | 0.6443 |
| 0.7661 | 15.0 | 10200 | 1.2509 | 0.0159 | 0.1487 | 0.0287 | 0.6365 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
| [
"NCBI DISEASE"
] | BioNLP |
dev0612/nd-neuro-embedding | dev0612 | sentence-similarity | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"medical",
"biology",
"en",
"dataset:FremyCompany/BioLORD-Dataset",
"dataset:FremyCompany/AGCT-Dataset",
"arxiv:2311.16075",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,739,873,837,000 | 2025-02-18T10:50:07 | 13 | 0 | ---
datasets:
- FremyCompany/BioLORD-Dataset
- FremyCompany/AGCT-Dataset
language: en
license: other
license_name: ihtsdo-and-nlm-licences
license_link: https://www.nlm.nih.gov/databases/umls.html
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- medical
- biology
widget:
- source_sentence: bartonellosis
sentences:
- cat scratch disease
- cat scratch wound
- tick-borne orbivirus fever
- cat fur
---
| 🙏 If you are able to, please help me [fund my open research](https://gofund.me/1f2d6803). 🙏 Thank you for your generosity! 🤗 |
|-----------------------------------------------------------------------------------------------------------------------------------|
# FremyCompany/BioLORD-2023
This model was trained using BioLORD, a new pre-training strategy for producing meaningful representations for clinical sentences and biomedical concepts.
State-of-the-art methodologies operate by maximizing the similarity in representation of names referring to the same concept, and preventing collapse through contrastive learning. However, because biomedical names are not always self-explanatory, it sometimes results in non-semantic representations.
BioLORD overcomes this issue by grounding its concept representations using definitions, as well as short descriptions derived from a multi-relational knowledge graph consisting of biomedical ontologies. Thanks to this grounding, our model produces more semantic concept representations that match more closely the hierarchical structure of ontologies. BioLORD-2023 establishes a new state of the art for text similarity on both clinical sentences (MedSTS) and biomedical concepts (EHR-Rel-B).
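As background for the contrastive objective mentioned above, the sketch below shows a generic in-batch contrastive (InfoNCE-style) loss over name/definition embedding pairs. This is only an illustration of the general technique, not the exact BioLORD training objective.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(name_emb: torch.Tensor, def_emb: torch.Tensor, temperature: float = 0.05):
    # name_emb, def_emb: (batch, dim) embeddings of concept names and of their definitions.
    name_emb = F.normalize(name_emb, dim=-1)
    def_emb = F.normalize(def_emb, dim=-1)
    logits = name_emb @ def_emb.T / temperature  # similarity of every name to every definition
    targets = torch.arange(name_emb.size(0), device=name_emb.device)
    # Each name is pulled towards its own definition; the other in-batch pairs act as
    # negatives, which is what prevents the representations from collapsing.
    return F.cross_entropy(logits, targets)
```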
This model is based on [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) and was further finetuned on the [BioLORD-Dataset](https://huggingface.co/datasets/FremyCompany/BioLORD-Dataset) and LLM-generated definitions from the [Automatic Glossary of Clinical Terminology (AGCT)](https://huggingface.co/datasets/FremyCompany/AGCT-Dataset).
## Sibling models
This model is accompanied by other models in the BioLORD-2023 series, which you might want to check:
- [BioLORD-2023-M](https://huggingface.co/FremyCompany/BioLORD-2023-M) (multilingual model; distilled from BioLORD-2023)
- [BioLORD-2023](https://huggingface.co/FremyCompany/BioLORD-2023) (best model after model averaging; this model)
- [BioLORD-2023-S](https://huggingface.co/FremyCompany/BioLORD-2023-S) (best hyperparameters; no model averaging)
- [BioLORD-2023-C](https://huggingface.co/FremyCompany/BioLORD-2023-C) (contrastive training only; for NEL tasks)
You can also take a look at last year's model and paper:
- [BioLORD-2022](https://huggingface.co/FremyCompany/BioLORD-STAMB2-v1) (also known as BioLORD-STAMB2-v1)
## Training strategy
### Summary of the 3 phases

### Contrastive phase: details

### Self-distillation phase: details

## Citation
This model accompanies the [BioLORD-2023: Learning Ontological Representations from Definitions](https://arxiv.org/abs/2311.16075) paper. When you use this model, please cite the original paper as follows:
```latex
@article{remy-etal-2023-biolord,
author = {Remy, François and Demuynck, Kris and Demeester, Thomas},
title = "{BioLORD-2023: semantic textual representations fusing large language models and clinical knowledge graph insights}",
journal = {Journal of the American Medical Informatics Association},
pages = {ocae029},
year = {2024},
month = {02},
issn = {1527-974X},
doi = {10.1093/jamia/ocae029},
url = {https://doi.org/10.1093/jamia/ocae029},
eprint = {https://academic.oup.com/jamia/advance-article-pdf/doi/10.1093/jamia/ocae029/56772025/ocae029.pdf},
}
```
## Usage (Sentence-Transformers)
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. This model has been fine-tuned for the biomedical domain. While it retains a good ability to produce embeddings for general-purpose text, it will be most useful if you are processing medical documents such as EHR records or clinical notes. Both sentences and phrases can be embedded in the same latent space.
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"]
model = SentenceTransformer('FremyCompany/BioLORD-2023')
embeddings = model.encode(sentences)
print(embeddings)
```
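Pairwise similarities can then be computed with the utilities bundled with sentence-transformers, for example (continuing from the snippet above):
```python
from sentence_transformers import util

# Cosine similarities between all pairs of the embeddings computed above.
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)  # "Cat scratch disease" and "Bartonellosis" should score high
```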
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Cat scratch injury", "Cat scratch disease", "Bartonellosis"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('FremyCompany/BioLORD-2023')
model = AutoModel.from_pretrained('FremyCompany/BioLORD-2023')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
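Since the embeddings are L2-normalized in the last step, cosine similarity reduces to a plain dot product; continuing from the snippet above:
```python
# With normalized embeddings, cosine similarity is just a matrix product.
cosine_scores = sentence_embeddings @ sentence_embeddings.T
print(cosine_scores)
```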
## License
My own contributions for this model are covered by the MIT license.
However, given that the data used to train this model originates from UMLS and SnomedCT, you will need to ensure you have proper licensing of UMLS and SnomedCT before using this model. Both UMLS and SnomedCT are free of charge in most countries, but you might have to create an account and report on your usage of the data yearly to keep a valid license. | [
"EHR-REL"
] | BioNLP |
RichardErkhov/BSC-LT_-_salamandra-7b-base-fp8-8bits | RichardErkhov | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,736,582,639,000 | 2025-01-11T08:08:42 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
salamandra-7b-base-fp8 - bnb 8bits
- Model creator: https://huggingface.co/BSC-LT/
- Original model: https://huggingface.co/BSC-LT/salamandra-7b-base-fp8/
Original model description:
---
license: apache-2.0
library_name: transformers
base_model: BSC-LT/salamandra-7b
pipeline_tag: text-generation
language:
- bg
- ca
- code
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nn
- 'no'
- oc
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- uk
---

# Salamandra-7b-fp8 Model Card
This model is the fp8-quantized version of [Salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b).
The model weights are quantized from FP16 to FP8 (8-bit weights) using the FP8 quantization algorithm
from [NeuralMagic](https://neuralmagic.com/blog/vllm-brings-fp8-inference-to-the-open-source-community/).
Inferencing with this model can be done using [VLLM](https://docs.vllm.ai/en/stable/models/engine_args.html).
Salamandra is a highly multilingual model pre-trained from scratch that comes in three different
sizes — 2B, 7B and 40B parameters — with their respective base and instruction-tuned variants,
promoted and financed by the Government of Catalonia through the [Aina Project](https://projecteaina.cat/)
and the _Ministerio para la Transformación Digital y de la Función Pública_ - Funded by EU – NextGenerationEU
within the framework of [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337.
This model card corresponds to the fp8-quantized version of Salamandra-7b.
The entire Salamandra family is released under a permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
## How to Use
The following example code works under ``Python 3.9.16``, ``vllm==0.6.3.post1``, ``torch==2.4.0`` and ``torchvision==0.19.0``, though it should run on
any current version of the libraries. This is an example of how to create a text completion using the model:
```python
from vllm import LLM, SamplingParams

model_name = "BSC-LT/salamandra-7b-base-fp8"
llm = LLM(model=model_name)

outputs = llm.generate(
    "El mercat del barri ",
    sampling_params=SamplingParams(
        temperature=0.5,
        max_tokens=200,
    ),
)

print(outputs[0].outputs[0].text)
```
### Author
International Business Machines (IBM).
### Copyright
International Business Machines (IBM).
### Contact
For further information, please send an email to <[email protected]>.
### Acknowledgements
We appreciate the collaboration with IBM in this work.
Specifically, the IBM team created the fp8-quantized version of the Salamandra-7b model released here.
### Disclaimer
Be aware that the model may contain biases or other unintended distortions.
When third parties deploy systems or provide services based on this model, or use the model themselves,
they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable
regulations, including those governing the use of Artificial Intelligence.
Barcelona Supercomputing Center and International Business Machines shall
not be held liable for any outcomes resulting from third-party use.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
| [
"BEAR"
] | Non_BioNLP |
jinaai/jina-embedding-l-en-v1 | jinaai | sentence-similarity | [
"sentence-transformers",
"pytorch",
"t5",
"finetuner",
"mteb",
"feature-extraction",
"sentence-similarity",
"custom_code",
"en",
"dataset:jinaai/negation-dataset",
"arxiv:2307.11224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,688,892,846,000 | 2025-01-06T16:30:42 | 561 | 24 | ---
datasets:
- jinaai/negation-dataset
language: en
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- finetuner
- mteb
- sentence-transformers
- feature-extraction
- sentence-similarity
model-index:
- name: jina-triplets-large
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.92537313432835
- type: ap
value: 29.723758877632513
- type: f1
value: 61.909704211663794
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 69.13669999999999
- type: ap
value: 65.30216072238086
- type: f1
value: 67.1890891071034
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 31.384
- type: f1
value: 30.016752348953723
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.613
- type: map_at_10
value: 37.897
- type: map_at_100
value: 39.093
- type: map_at_1000
value: 39.109
- type: map_at_3
value: 32.824
- type: map_at_5
value: 35.679
- type: mrr_at_1
value: 23.826
- type: mrr_at_10
value: 37.997
- type: mrr_at_100
value: 39.186
- type: mrr_at_1000
value: 39.202
- type: mrr_at_3
value: 32.918
- type: mrr_at_5
value: 35.748999999999995
- type: ndcg_at_1
value: 23.613
- type: ndcg_at_10
value: 46.482
- type: ndcg_at_100
value: 51.55499999999999
- type: ndcg_at_1000
value: 51.974
- type: ndcg_at_3
value: 35.964
- type: ndcg_at_5
value: 41.144999999999996
- type: precision_at_1
value: 23.613
- type: precision_at_10
value: 7.417999999999999
- type: precision_at_100
value: 0.963
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 15.031
- type: precision_at_5
value: 11.55
- type: recall_at_1
value: 23.613
- type: recall_at_10
value: 74.182
- type: recall_at_100
value: 96.30199999999999
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 45.092
- type: recall_at_5
value: 57.752
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 40.51285742156528
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 31.5825964077496
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.830281630546835
- type: mrr
value: 75.93072593765115
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.26764516732737
- type: cos_sim_spearman
value: 84.42541766631741
- type: euclidean_pearson
value: 48.71357447655235
- type: euclidean_spearman
value: 49.2023259276511
- type: manhattan_pearson
value: 48.36366272727299
- type: manhattan_spearman
value: 48.457128224924354
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.3409090909091
- type: f1
value: 85.25262617676835
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 33.560193912974974
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.4426572644577
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.822999999999997
- type: map_at_10
value: 39.088
- type: map_at_100
value: 40.561
- type: map_at_1000
value: 40.69
- type: map_at_3
value: 35.701
- type: map_at_5
value: 37.556
- type: mrr_at_1
value: 33.906
- type: mrr_at_10
value: 44.527
- type: mrr_at_100
value: 45.403999999999996
- type: mrr_at_1000
value: 45.452
- type: mrr_at_3
value: 41.726
- type: mrr_at_5
value: 43.314
- type: ndcg_at_1
value: 33.906
- type: ndcg_at_10
value: 45.591
- type: ndcg_at_100
value: 51.041000000000004
- type: ndcg_at_1000
value: 53.1
- type: ndcg_at_3
value: 40.324
- type: ndcg_at_5
value: 42.723
- type: precision_at_1
value: 33.906
- type: precision_at_10
value: 8.655
- type: precision_at_100
value: 1.418
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 19.123
- type: precision_at_5
value: 13.963000000000001
- type: recall_at_1
value: 27.822999999999997
- type: recall_at_10
value: 58.63699999999999
- type: recall_at_100
value: 80.874
- type: recall_at_1000
value: 93.82000000000001
- type: recall_at_3
value: 44.116
- type: recall_at_5
value: 50.178999999999995
- type: map_at_1
value: 26.823999999999998
- type: map_at_10
value: 37.006
- type: map_at_100
value: 38.256
- type: map_at_1000
value: 38.397999999999996
- type: map_at_3
value: 34.011
- type: map_at_5
value: 35.643
- type: mrr_at_1
value: 34.268
- type: mrr_at_10
value: 43.374
- type: mrr_at_100
value: 44.096000000000004
- type: mrr_at_1000
value: 44.144
- type: mrr_at_3
value: 41.008
- type: mrr_at_5
value: 42.359
- type: ndcg_at_1
value: 34.268
- type: ndcg_at_10
value: 43.02
- type: ndcg_at_100
value: 47.747
- type: ndcg_at_1000
value: 50.019999999999996
- type: ndcg_at_3
value: 38.687
- type: ndcg_at_5
value: 40.647
- type: precision_at_1
value: 34.268
- type: precision_at_10
value: 8.261000000000001
- type: precision_at_100
value: 1.376
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 19.108
- type: precision_at_5
value: 13.489999999999998
- type: recall_at_1
value: 26.823999999999998
- type: recall_at_10
value: 53.84100000000001
- type: recall_at_100
value: 73.992
- type: recall_at_1000
value: 88.524
- type: recall_at_3
value: 40.711000000000006
- type: recall_at_5
value: 46.477000000000004
- type: map_at_1
value: 34.307
- type: map_at_10
value: 45.144
- type: map_at_100
value: 46.351
- type: map_at_1000
value: 46.414
- type: map_at_3
value: 42.315000000000005
- type: map_at_5
value: 43.991
- type: mrr_at_1
value: 39.06
- type: mrr_at_10
value: 48.612
- type: mrr_at_100
value: 49.425000000000004
- type: mrr_at_1000
value: 49.458999999999996
- type: mrr_at_3
value: 46.144
- type: mrr_at_5
value: 47.654999999999994
- type: ndcg_at_1
value: 39.06
- type: ndcg_at_10
value: 50.647
- type: ndcg_at_100
value: 55.620000000000005
- type: ndcg_at_1000
value: 56.976000000000006
- type: ndcg_at_3
value: 45.705
- type: ndcg_at_5
value: 48.269
- type: precision_at_1
value: 39.06
- type: precision_at_10
value: 8.082
- type: precision_at_100
value: 1.161
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 20.376
- type: precision_at_5
value: 14.069
- type: recall_at_1
value: 34.307
- type: recall_at_10
value: 63.497
- type: recall_at_100
value: 85.038
- type: recall_at_1000
value: 94.782
- type: recall_at_3
value: 50.209
- type: recall_at_5
value: 56.525000000000006
- type: map_at_1
value: 26.448
- type: map_at_10
value: 34.86
- type: map_at_100
value: 36.004999999999995
- type: map_at_1000
value: 36.081
- type: map_at_3
value: 32.527
- type: map_at_5
value: 33.955
- type: mrr_at_1
value: 28.701
- type: mrr_at_10
value: 36.909
- type: mrr_at_100
value: 37.89
- type: mrr_at_1000
value: 37.945
- type: mrr_at_3
value: 34.576
- type: mrr_at_5
value: 35.966
- type: ndcg_at_1
value: 28.701
- type: ndcg_at_10
value: 39.507999999999996
- type: ndcg_at_100
value: 45.056000000000004
- type: ndcg_at_1000
value: 47.034
- type: ndcg_at_3
value: 34.985
- type: ndcg_at_5
value: 37.384
- type: precision_at_1
value: 28.701
- type: precision_at_10
value: 5.921
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 14.689
- type: precision_at_5
value: 10.237
- type: recall_at_1
value: 26.448
- type: recall_at_10
value: 51.781
- type: recall_at_100
value: 77.142
- type: recall_at_1000
value: 92.10000000000001
- type: recall_at_3
value: 39.698
- type: recall_at_5
value: 45.469
- type: map_at_1
value: 14.174000000000001
- type: map_at_10
value: 22.019
- type: map_at_100
value: 23.18
- type: map_at_1000
value: 23.304
- type: map_at_3
value: 19.332
- type: map_at_5
value: 20.816000000000003
- type: mrr_at_1
value: 17.785999999999998
- type: mrr_at_10
value: 26.233
- type: mrr_at_100
value: 27.254
- type: mrr_at_1000
value: 27.328000000000003
- type: mrr_at_3
value: 23.653
- type: mrr_at_5
value: 25.095
- type: ndcg_at_1
value: 17.785999999999998
- type: ndcg_at_10
value: 27.236
- type: ndcg_at_100
value: 32.932
- type: ndcg_at_1000
value: 36.134
- type: ndcg_at_3
value: 22.33
- type: ndcg_at_5
value: 24.573999999999998
- type: precision_at_1
value: 17.785999999999998
- type: precision_at_10
value: 5.286
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 11.07
- type: precision_at_5
value: 8.308
- type: recall_at_1
value: 14.174000000000001
- type: recall_at_10
value: 39.135
- type: recall_at_100
value: 64.095
- type: recall_at_1000
value: 87.485
- type: recall_at_3
value: 25.496999999999996
- type: recall_at_5
value: 31.148999999999997
- type: map_at_1
value: 24.371000000000002
- type: map_at_10
value: 33.074999999999996
- type: map_at_100
value: 34.486
- type: map_at_1000
value: 34.608
- type: map_at_3
value: 30.483
- type: map_at_5
value: 31.972
- type: mrr_at_1
value: 29.548000000000002
- type: mrr_at_10
value: 38.431
- type: mrr_at_100
value: 39.347
- type: mrr_at_1000
value: 39.4
- type: mrr_at_3
value: 35.980000000000004
- type: mrr_at_5
value: 37.413999999999994
- type: ndcg_at_1
value: 29.548000000000002
- type: ndcg_at_10
value: 38.552
- type: ndcg_at_100
value: 44.598
- type: ndcg_at_1000
value: 47.0
- type: ndcg_at_3
value: 34.109
- type: ndcg_at_5
value: 36.263
- type: precision_at_1
value: 29.548000000000002
- type: precision_at_10
value: 6.92
- type: precision_at_100
value: 1.179
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 16.137
- type: precision_at_5
value: 11.511000000000001
- type: recall_at_1
value: 24.371000000000002
- type: recall_at_10
value: 49.586999999999996
- type: recall_at_100
value: 75.15899999999999
- type: recall_at_1000
value: 91.06
- type: recall_at_3
value: 37.09
- type: recall_at_5
value: 42.588
- type: map_at_1
value: 24.517
- type: map_at_10
value: 32.969
- type: map_at_100
value: 34.199
- type: map_at_1000
value: 34.322
- type: map_at_3
value: 30.270999999999997
- type: map_at_5
value: 31.863000000000003
- type: mrr_at_1
value: 30.479
- type: mrr_at_10
value: 38.633
- type: mrr_at_100
value: 39.522
- type: mrr_at_1000
value: 39.583
- type: mrr_at_3
value: 36.454
- type: mrr_at_5
value: 37.744
- type: ndcg_at_1
value: 30.479
- type: ndcg_at_10
value: 38.269
- type: ndcg_at_100
value: 43.91
- type: ndcg_at_1000
value: 46.564
- type: ndcg_at_3
value: 34.03
- type: ndcg_at_5
value: 36.155
- type: precision_at_1
value: 30.479
- type: precision_at_10
value: 6.815
- type: precision_at_100
value: 1.138
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.058
- type: precision_at_5
value: 11.416
- type: recall_at_1
value: 24.517
- type: recall_at_10
value: 48.559000000000005
- type: recall_at_100
value: 73.307
- type: recall_at_1000
value: 91.508
- type: recall_at_3
value: 36.563
- type: recall_at_5
value: 42.375
- type: map_at_1
value: 24.336166666666664
- type: map_at_10
value: 32.80791666666667
- type: map_at_100
value: 34.043416666666666
- type: map_at_1000
value: 34.162749999999996
- type: map_at_3
value: 30.187083333333337
- type: map_at_5
value: 31.637833333333337
- type: mrr_at_1
value: 28.669583333333343
- type: mrr_at_10
value: 36.88616666666667
- type: mrr_at_100
value: 37.80233333333333
- type: mrr_at_1000
value: 37.86141666666666
- type: mrr_at_3
value: 34.537416666666665
- type: mrr_at_5
value: 35.84275
- type: ndcg_at_1
value: 28.669583333333343
- type: ndcg_at_10
value: 37.956916666666665
- type: ndcg_at_100
value: 43.39475
- type: ndcg_at_1000
value: 45.79925
- type: ndcg_at_3
value: 33.43683333333334
- type: ndcg_at_5
value: 35.52575
- type: precision_at_1
value: 28.669583333333343
- type: precision_at_10
value: 6.603833333333335
- type: precision_at_100
value: 1.1079166666666667
- type: precision_at_1000
value: 0.15208333333333335
- type: precision_at_3
value: 15.338750000000001
- type: precision_at_5
value: 10.88775
- type: recall_at_1
value: 24.336166666666664
- type: recall_at_10
value: 49.19358333333333
- type: recall_at_100
value: 73.07583333333334
- type: recall_at_1000
value: 89.81675
- type: recall_at_3
value: 36.54091666666667
- type: recall_at_5
value: 41.919250000000005
- type: map_at_1
value: 23.388
- type: map_at_10
value: 29.408
- type: map_at_100
value: 30.452
- type: map_at_1000
value: 30.546
- type: map_at_3
value: 27.139000000000003
- type: map_at_5
value: 28.402
- type: mrr_at_1
value: 25.46
- type: mrr_at_10
value: 31.966
- type: mrr_at_100
value: 32.879999999999995
- type: mrr_at_1000
value: 32.944
- type: mrr_at_3
value: 29.755
- type: mrr_at_5
value: 30.974
- type: ndcg_at_1
value: 25.46
- type: ndcg_at_10
value: 33.449
- type: ndcg_at_100
value: 38.67
- type: ndcg_at_1000
value: 41.035
- type: ndcg_at_3
value: 29.048000000000002
- type: ndcg_at_5
value: 31.127
- type: precision_at_1
value: 25.46
- type: precision_at_10
value: 5.199
- type: precision_at_100
value: 0.8670000000000001
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 12.168
- type: precision_at_5
value: 8.62
- type: recall_at_1
value: 23.388
- type: recall_at_10
value: 43.428
- type: recall_at_100
value: 67.245
- type: recall_at_1000
value: 84.75399999999999
- type: recall_at_3
value: 31.416
- type: recall_at_5
value: 36.451
- type: map_at_1
value: 17.136000000000003
- type: map_at_10
value: 24.102999999999998
- type: map_at_100
value: 25.219
- type: map_at_1000
value: 25.344
- type: map_at_3
value: 22.004
- type: map_at_5
value: 23.145
- type: mrr_at_1
value: 20.613
- type: mrr_at_10
value: 27.753
- type: mrr_at_100
value: 28.698
- type: mrr_at_1000
value: 28.776000000000003
- type: mrr_at_3
value: 25.711000000000002
- type: mrr_at_5
value: 26.795
- type: ndcg_at_1
value: 20.613
- type: ndcg_at_10
value: 28.510999999999996
- type: ndcg_at_100
value: 33.924
- type: ndcg_at_1000
value: 36.849
- type: ndcg_at_3
value: 24.664
- type: ndcg_at_5
value: 26.365
- type: precision_at_1
value: 20.613
- type: precision_at_10
value: 5.069
- type: precision_at_100
value: 0.918
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 11.574
- type: precision_at_5
value: 8.211
- type: recall_at_1
value: 17.136000000000003
- type: recall_at_10
value: 38.232
- type: recall_at_100
value: 62.571
- type: recall_at_1000
value: 83.23
- type: recall_at_3
value: 27.468999999999998
- type: recall_at_5
value: 31.852999999999998
- type: map_at_1
value: 25.580000000000002
- type: map_at_10
value: 33.449
- type: map_at_100
value: 34.58
- type: map_at_1000
value: 34.692
- type: map_at_3
value: 30.660999999999998
- type: map_at_5
value: 32.425
- type: mrr_at_1
value: 30.037000000000003
- type: mrr_at_10
value: 37.443
- type: mrr_at_100
value: 38.32
- type: mrr_at_1000
value: 38.384
- type: mrr_at_3
value: 34.778999999999996
- type: mrr_at_5
value: 36.458
- type: ndcg_at_1
value: 30.037000000000003
- type: ndcg_at_10
value: 38.46
- type: ndcg_at_100
value: 43.746
- type: ndcg_at_1000
value: 46.28
- type: ndcg_at_3
value: 33.52
- type: ndcg_at_5
value: 36.175000000000004
- type: precision_at_1
value: 30.037000000000003
- type: precision_at_10
value: 6.418
- type: precision_at_100
value: 1.0210000000000001
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 15.018999999999998
- type: precision_at_5
value: 10.877
- type: recall_at_1
value: 25.580000000000002
- type: recall_at_10
value: 49.830000000000005
- type: recall_at_100
value: 73.04899999999999
- type: recall_at_1000
value: 90.751
- type: recall_at_3
value: 36.370999999999995
- type: recall_at_5
value: 43.104
- type: map_at_1
value: 24.071
- type: map_at_10
value: 33.384
- type: map_at_100
value: 35.004999999999995
- type: map_at_1000
value: 35.215999999999994
- type: map_at_3
value: 30.459000000000003
- type: map_at_5
value: 31.769
- type: mrr_at_1
value: 28.854000000000003
- type: mrr_at_10
value: 37.512
- type: mrr_at_100
value: 38.567
- type: mrr_at_1000
value: 38.618
- type: mrr_at_3
value: 35.211
- type: mrr_at_5
value: 36.13
- type: ndcg_at_1
value: 28.854000000000003
- type: ndcg_at_10
value: 39.216
- type: ndcg_at_100
value: 45.214
- type: ndcg_at_1000
value: 47.573
- type: ndcg_at_3
value: 34.597
- type: ndcg_at_5
value: 36.063
- type: precision_at_1
value: 28.854000000000003
- type: precision_at_10
value: 7.648000000000001
- type: precision_at_100
value: 1.545
- type: precision_at_1000
value: 0.241
- type: precision_at_3
value: 16.667
- type: precision_at_5
value: 11.818
- type: recall_at_1
value: 24.071
- type: recall_at_10
value: 50.802
- type: recall_at_100
value: 77.453
- type: recall_at_1000
value: 92.304
- type: recall_at_3
value: 36.846000000000004
- type: recall_at_5
value: 41.14
- type: map_at_1
value: 23.395
- type: map_at_10
value: 29.189999999999998
- type: map_at_100
value: 30.226999999999997
- type: map_at_1000
value: 30.337999999999997
- type: map_at_3
value: 27.342
- type: map_at_5
value: 28.116999999999997
- type: mrr_at_1
value: 25.323
- type: mrr_at_10
value: 31.241000000000003
- type: mrr_at_100
value: 32.225
- type: mrr_at_1000
value: 32.304
- type: mrr_at_3
value: 29.452
- type: mrr_at_5
value: 30.209000000000003
- type: ndcg_at_1
value: 25.323
- type: ndcg_at_10
value: 33.024
- type: ndcg_at_100
value: 38.279
- type: ndcg_at_1000
value: 41.026
- type: ndcg_at_3
value: 29.243000000000002
- type: ndcg_at_5
value: 30.564000000000004
- type: precision_at_1
value: 25.323
- type: precision_at_10
value: 4.972
- type: precision_at_100
value: 0.8210000000000001
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 12.076
- type: precision_at_5
value: 8.133
- type: recall_at_1
value: 23.395
- type: recall_at_10
value: 42.994
- type: recall_at_100
value: 66.985
- type: recall_at_1000
value: 87.483
- type: recall_at_3
value: 32.505
- type: recall_at_5
value: 35.721000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.322000000000001
- type: map_at_10
value: 14.491000000000001
- type: map_at_100
value: 16.066
- type: map_at_1000
value: 16.238
- type: map_at_3
value: 12.235
- type: map_at_5
value: 13.422999999999998
- type: mrr_at_1
value: 19.479
- type: mrr_at_10
value: 29.38
- type: mrr_at_100
value: 30.520999999999997
- type: mrr_at_1000
value: 30.570999999999998
- type: mrr_at_3
value: 26.395000000000003
- type: mrr_at_5
value: 27.982000000000003
- type: ndcg_at_1
value: 19.479
- type: ndcg_at_10
value: 21.215
- type: ndcg_at_100
value: 27.966
- type: ndcg_at_1000
value: 31.324
- type: ndcg_at_3
value: 17.194000000000003
- type: ndcg_at_5
value: 18.593
- type: precision_at_1
value: 19.479
- type: precision_at_10
value: 6.5280000000000005
- type: precision_at_100
value: 1.359
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 12.703999999999999
- type: precision_at_5
value: 9.655
- type: recall_at_1
value: 8.322000000000001
- type: recall_at_10
value: 26.165
- type: recall_at_100
value: 49.573
- type: recall_at_1000
value: 68.501
- type: recall_at_3
value: 16.179
- type: recall_at_5
value: 20.175
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.003
- type: map_at_10
value: 16.087
- type: map_at_100
value: 21.363
- type: map_at_1000
value: 22.64
- type: map_at_3
value: 12.171999999999999
- type: map_at_5
value: 13.866
- type: mrr_at_1
value: 61.25000000000001
- type: mrr_at_10
value: 68.626
- type: mrr_at_100
value: 69.134
- type: mrr_at_1000
value: 69.144
- type: mrr_at_3
value: 67.042
- type: mrr_at_5
value: 67.929
- type: ndcg_at_1
value: 49.0
- type: ndcg_at_10
value: 34.132
- type: ndcg_at_100
value: 37.545
- type: ndcg_at_1000
value: 44.544
- type: ndcg_at_3
value: 38.946999999999996
- type: ndcg_at_5
value: 36.317
- type: precision_at_1
value: 61.25000000000001
- type: precision_at_10
value: 26.325
- type: precision_at_100
value: 8.173
- type: precision_at_1000
value: 1.778
- type: precision_at_3
value: 41.667
- type: precision_at_5
value: 34.300000000000004
- type: recall_at_1
value: 8.003
- type: recall_at_10
value: 20.577
- type: recall_at_100
value: 41.884
- type: recall_at_1000
value: 64.36500000000001
- type: recall_at_3
value: 13.602
- type: recall_at_5
value: 16.41
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.835
- type: f1
value: 41.66455981281837
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.717000000000006
- type: map_at_10
value: 66.34100000000001
- type: map_at_100
value: 66.776
- type: map_at_1000
value: 66.794
- type: map_at_3
value: 64.386
- type: map_at_5
value: 65.566
- type: mrr_at_1
value: 60.141
- type: mrr_at_10
value: 70.928
- type: mrr_at_100
value: 71.29299999999999
- type: mrr_at_1000
value: 71.30199999999999
- type: mrr_at_3
value: 69.07900000000001
- type: mrr_at_5
value: 70.244
- type: ndcg_at_1
value: 60.141
- type: ndcg_at_10
value: 71.90100000000001
- type: ndcg_at_100
value: 73.836
- type: ndcg_at_1000
value: 74.214
- type: ndcg_at_3
value: 68.203
- type: ndcg_at_5
value: 70.167
- type: precision_at_1
value: 60.141
- type: precision_at_10
value: 9.268
- type: precision_at_100
value: 1.03
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 27.028000000000002
- type: precision_at_5
value: 17.342
- type: recall_at_1
value: 55.717000000000006
- type: recall_at_10
value: 84.66799999999999
- type: recall_at_100
value: 93.28
- type: recall_at_1000
value: 95.887
- type: recall_at_3
value: 74.541
- type: recall_at_5
value: 79.389
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.744
- type: map_at_10
value: 29.554000000000002
- type: map_at_100
value: 31.180000000000003
- type: map_at_1000
value: 31.372
- type: map_at_3
value: 25.6
- type: map_at_5
value: 27.642
- type: mrr_at_1
value: 35.802
- type: mrr_at_10
value: 44.812999999999995
- type: mrr_at_100
value: 45.56
- type: mrr_at_1000
value: 45.606
- type: mrr_at_3
value: 42.181000000000004
- type: mrr_at_5
value: 43.516
- type: ndcg_at_1
value: 35.802
- type: ndcg_at_10
value: 37.269999999999996
- type: ndcg_at_100
value: 43.575
- type: ndcg_at_1000
value: 46.916000000000004
- type: ndcg_at_3
value: 33.511
- type: ndcg_at_5
value: 34.504000000000005
- type: precision_at_1
value: 35.802
- type: precision_at_10
value: 10.448
- type: precision_at_100
value: 1.7129999999999999
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 22.531000000000002
- type: precision_at_5
value: 16.512
- type: recall_at_1
value: 17.744
- type: recall_at_10
value: 44.616
- type: recall_at_100
value: 68.51899999999999
- type: recall_at_1000
value: 88.495
- type: recall_at_3
value: 30.235
- type: recall_at_5
value: 35.821999999999996
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.315
- type: map_at_10
value: 45.932
- type: map_at_100
value: 46.708
- type: map_at_1000
value: 46.778999999999996
- type: map_at_3
value: 43.472
- type: map_at_5
value: 45.022
- type: mrr_at_1
value: 66.631
- type: mrr_at_10
value: 73.083
- type: mrr_at_100
value: 73.405
- type: mrr_at_1000
value: 73.421
- type: mrr_at_3
value: 71.756
- type: mrr_at_5
value: 72.616
- type: ndcg_at_1
value: 66.631
- type: ndcg_at_10
value: 54.949000000000005
- type: ndcg_at_100
value: 57.965
- type: ndcg_at_1000
value: 59.467000000000006
- type: ndcg_at_3
value: 51.086
- type: ndcg_at_5
value: 53.272
- type: precision_at_1
value: 66.631
- type: precision_at_10
value: 11.178
- type: precision_at_100
value: 1.3559999999999999
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 31.582
- type: precision_at_5
value: 20.678
- type: recall_at_1
value: 33.315
- type: recall_at_10
value: 55.888000000000005
- type: recall_at_100
value: 67.812
- type: recall_at_1000
value: 77.839
- type: recall_at_3
value: 47.373
- type: recall_at_5
value: 51.695
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 66.424
- type: ap
value: 61.132235499939256
- type: f1
value: 66.07094958225315
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.575
- type: map_at_10
value: 33.509
- type: map_at_100
value: 34.725
- type: map_at_1000
value: 34.775
- type: map_at_3
value: 29.673
- type: map_at_5
value: 31.805
- type: mrr_at_1
value: 22.235
- type: mrr_at_10
value: 34.1
- type: mrr_at_100
value: 35.254999999999995
- type: mrr_at_1000
value: 35.299
- type: mrr_at_3
value: 30.334
- type: mrr_at_5
value: 32.419
- type: ndcg_at_1
value: 22.235
- type: ndcg_at_10
value: 40.341
- type: ndcg_at_100
value: 46.161
- type: ndcg_at_1000
value: 47.400999999999996
- type: ndcg_at_3
value: 32.482
- type: ndcg_at_5
value: 36.269
- type: precision_at_1
value: 22.235
- type: precision_at_10
value: 6.422999999999999
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.835
- type: precision_at_5
value: 10.226
- type: recall_at_1
value: 21.575
- type: recall_at_10
value: 61.448
- type: recall_at_100
value: 88.289
- type: recall_at_1000
value: 97.76899999999999
- type: recall_at_3
value: 39.971000000000004
- type: recall_at_5
value: 49.053000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.83401732786137
- type: f1
value: 92.47678691291068
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.08983128134975
- type: f1
value: 59.782936393820904
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.73032952252858
- type: f1
value: 70.72684765888265
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08473436449226
- type: f1
value: 77.31457411257054
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.11980959210532
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 25.2587629106119
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.48268319779204
- type: mrr
value: 32.501885728964304
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.284
- type: map_at_10
value: 11.509
- type: map_at_100
value: 14.624
- type: map_at_1000
value: 16.035
- type: map_at_3
value: 8.347999999999999
- type: map_at_5
value: 9.919
- type: mrr_at_1
value: 43.344
- type: mrr_at_10
value: 52.303999999999995
- type: mrr_at_100
value: 52.994
- type: mrr_at_1000
value: 53.032999999999994
- type: mrr_at_3
value: 50.361
- type: mrr_at_5
value: 51.754
- type: ndcg_at_1
value: 41.176
- type: ndcg_at_10
value: 32.244
- type: ndcg_at_100
value: 29.916999999999998
- type: ndcg_at_1000
value: 38.753
- type: ndcg_at_3
value: 36.856
- type: ndcg_at_5
value: 35.394999999999996
- type: precision_at_1
value: 43.034
- type: precision_at_10
value: 24.118000000000002
- type: precision_at_100
value: 7.926
- type: precision_at_1000
value: 2.045
- type: precision_at_3
value: 34.675
- type: precision_at_5
value: 31.146
- type: recall_at_1
value: 5.284
- type: recall_at_10
value: 15.457
- type: recall_at_100
value: 30.914
- type: recall_at_1000
value: 63.788999999999994
- type: recall_at_3
value: 9.596
- type: recall_at_5
value: 12.391
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.537999999999997
- type: map_at_10
value: 43.99
- type: map_at_100
value: 45.003
- type: map_at_1000
value: 45.04
- type: map_at_3
value: 39.814
- type: map_at_5
value: 42.166
- type: mrr_at_1
value: 33.256
- type: mrr_at_10
value: 46.487
- type: mrr_at_100
value: 47.264
- type: mrr_at_1000
value: 47.29
- type: mrr_at_3
value: 43.091
- type: mrr_at_5
value: 45.013999999999996
- type: ndcg_at_1
value: 33.256
- type: ndcg_at_10
value: 51.403
- type: ndcg_at_100
value: 55.706999999999994
- type: ndcg_at_1000
value: 56.586000000000006
- type: ndcg_at_3
value: 43.559
- type: ndcg_at_5
value: 47.426
- type: precision_at_1
value: 33.256
- type: precision_at_10
value: 8.540000000000001
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 19.834
- type: precision_at_5
value: 14.143
- type: recall_at_1
value: 29.537999999999997
- type: recall_at_10
value: 71.5
- type: recall_at_100
value: 90.25
- type: recall_at_1000
value: 96.82600000000001
- type: recall_at_3
value: 51.108
- type: recall_at_5
value: 60.006
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.526
- type: map_at_10
value: 84.342
- type: map_at_100
value: 84.985
- type: map_at_1000
value: 85.003
- type: map_at_3
value: 81.472
- type: map_at_5
value: 83.292
- type: mrr_at_1
value: 81.17
- type: mrr_at_10
value: 87.33999999999999
- type: mrr_at_100
value: 87.445
- type: mrr_at_1000
value: 87.446
- type: mrr_at_3
value: 86.387
- type: mrr_at_5
value: 87.042
- type: ndcg_at_1
value: 81.19
- type: ndcg_at_10
value: 88.088
- type: ndcg_at_100
value: 89.35
- type: ndcg_at_1000
value: 89.462
- type: ndcg_at_3
value: 85.319
- type: ndcg_at_5
value: 86.858
- type: precision_at_1
value: 81.19
- type: precision_at_10
value: 13.33
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.31
- type: precision_at_5
value: 24.512
- type: recall_at_1
value: 70.526
- type: recall_at_10
value: 95.166
- type: recall_at_100
value: 99.479
- type: recall_at_1000
value: 99.984
- type: recall_at_3
value: 87.124
- type: recall_at_5
value: 91.53
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.049073872893494
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.13810914528368
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.593
- type: map_at_10
value: 10.907
- type: map_at_100
value: 12.888
- type: map_at_1000
value: 13.167000000000002
- type: map_at_3
value: 7.936
- type: map_at_5
value: 9.31
- type: mrr_at_1
value: 22.7
- type: mrr_at_10
value: 32.509
- type: mrr_at_100
value: 33.69
- type: mrr_at_1000
value: 33.747
- type: mrr_at_3
value: 29.599999999999998
- type: mrr_at_5
value: 31.155
- type: ndcg_at_1
value: 22.7
- type: ndcg_at_10
value: 18.445
- type: ndcg_at_100
value: 26.241999999999997
- type: ndcg_at_1000
value: 31.409
- type: ndcg_at_3
value: 17.864
- type: ndcg_at_5
value: 15.232999999999999
- type: precision_at_1
value: 22.7
- type: precision_at_10
value: 9.43
- type: precision_at_100
value: 2.061
- type: precision_at_1000
value: 0.331
- type: precision_at_3
value: 16.467000000000002
- type: precision_at_5
value: 13.08
- type: recall_at_1
value: 4.593
- type: recall_at_10
value: 19.115
- type: recall_at_100
value: 41.82
- type: recall_at_1000
value: 67.167
- type: recall_at_3
value: 9.983
- type: recall_at_5
value: 13.218
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.94432059816452
- type: cos_sim_spearman
value: 79.19993315048852
- type: euclidean_pearson
value: 72.43261099671753
- type: euclidean_spearman
value: 71.51531114998619
- type: manhattan_pearson
value: 71.83604124130447
- type: manhattan_spearman
value: 71.24460392842295
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.25401068481673
- type: cos_sim_spearman
value: 74.5249604699309
- type: euclidean_pearson
value: 71.1324859629043
- type: euclidean_spearman
value: 58.77041705276752
- type: manhattan_pearson
value: 71.01471521586141
- type: manhattan_spearman
value: 58.69949381017865
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.85731544223766
- type: cos_sim_spearman
value: 83.15607264736185
- type: euclidean_pearson
value: 75.8803249521361
- type: euclidean_spearman
value: 76.4862168799065
- type: manhattan_pearson
value: 75.80451454386811
- type: manhattan_spearman
value: 76.35986831074699
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.40669043798857
- type: cos_sim_spearman
value: 78.08686090667834
- type: euclidean_pearson
value: 74.48574712193803
- type: euclidean_spearman
value: 70.79423012045118
- type: manhattan_pearson
value: 74.39099211477354
- type: manhattan_spearman
value: 70.73135427277684
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.03027014209859
- type: cos_sim_spearman
value: 86.91082847840946
- type: euclidean_pearson
value: 69.13187603971996
- type: euclidean_spearman
value: 70.0370035340552
- type: manhattan_pearson
value: 69.2586635812031
- type: manhattan_spearman
value: 70.18638387118486
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.41190748361883
- type: cos_sim_spearman
value: 83.64850851235231
- type: euclidean_pearson
value: 71.60523243575282
- type: euclidean_spearman
value: 72.26134033805099
- type: manhattan_pearson
value: 71.50771482066683
- type: manhattan_spearman
value: 72.13707967973161
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.42838477648627
- type: cos_sim_spearman
value: 90.15798155439076
- type: euclidean_pearson
value: 77.09619972244516
- type: euclidean_spearman
value: 75.5953488548861
- type: manhattan_pearson
value: 77.36892406451771
- type: manhattan_spearman
value: 75.76625156149356
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.76151154879307
- type: cos_sim_spearman
value: 64.8846800918359
- type: euclidean_pearson
value: 50.23302700257155
- type: euclidean_spearman
value: 58.89455187289583
- type: manhattan_pearson
value: 50.05498582284945
- type: manhattan_spearman
value: 58.75893793871576
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.72381109169437
- type: cos_sim_spearman
value: 84.59820928231167
- type: euclidean_pearson
value: 74.85450857429493
- type: euclidean_spearman
value: 73.83634052565915
- type: manhattan_pearson
value: 74.97349743979106
- type: manhattan_spearman
value: 73.9636470375881
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.96736259172798
- type: mrr
value: 94.48378781712114
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 46.344
- type: map_at_10
value: 54.962
- type: map_at_100
value: 55.772
- type: map_at_1000
value: 55.81700000000001
- type: map_at_3
value: 51.832
- type: map_at_5
value: 53.718999999999994
- type: mrr_at_1
value: 49.0
- type: mrr_at_10
value: 56.721
- type: mrr_at_100
value: 57.287
- type: mrr_at_1000
value: 57.330000000000005
- type: mrr_at_3
value: 54.056000000000004
- type: mrr_at_5
value: 55.822
- type: ndcg_at_1
value: 49.0
- type: ndcg_at_10
value: 59.757000000000005
- type: ndcg_at_100
value: 63.149
- type: ndcg_at_1000
value: 64.43100000000001
- type: ndcg_at_3
value: 54.105000000000004
- type: ndcg_at_5
value: 57.196999999999996
- type: precision_at_1
value: 49.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 1.0070000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 20.889
- type: precision_at_5
value: 14.399999999999999
- type: recall_at_1
value: 46.344
- type: recall_at_10
value: 72.722
- type: recall_at_100
value: 88.167
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 57.994
- type: recall_at_5
value: 65.506
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.83366336633664
- type: cos_sim_ap
value: 96.09329747251944
- type: cos_sim_f1
value: 91.66255550074001
- type: cos_sim_precision
value: 90.45764362220059
- type: cos_sim_recall
value: 92.9
- type: dot_accuracy
value: 99.32871287128712
- type: dot_ap
value: 63.95436644147969
- type: dot_f1
value: 60.61814556331008
- type: dot_precision
value: 60.437375745526836
- type: dot_recall
value: 60.8
- type: euclidean_accuracy
value: 99.66534653465347
- type: euclidean_ap
value: 85.85143979761818
- type: euclidean_f1
value: 81.57033805888769
- type: euclidean_precision
value: 89.68824940047962
- type: euclidean_recall
value: 74.8
- type: manhattan_accuracy
value: 99.65742574257426
- type: manhattan_ap
value: 85.55693926348405
- type: manhattan_f1
value: 81.13804004214963
- type: manhattan_precision
value: 85.74610244988864
- type: manhattan_recall
value: 77.0
- type: max_accuracy
value: 99.83366336633664
- type: max_ap
value: 96.09329747251944
- type: max_f1
value: 91.66255550074001
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 45.23573510003245
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.37478638401161
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.375920467392476
- type: mrr
value: 51.17302223919871
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.768864092288343
- type: cos_sim_spearman
value: 29.854278347043266
- type: dot_pearson
value: 20.51281723837505
- type: dot_spearman
value: 21.799102540913665
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.2
- type: map_at_10
value: 1.202
- type: map_at_100
value: 6.729
- type: map_at_1000
value: 15.928
- type: map_at_3
value: 0.492
- type: map_at_5
value: 0.712
- type: mrr_at_1
value: 76.0
- type: mrr_at_10
value: 84.75
- type: mrr_at_100
value: 84.75
- type: mrr_at_1000
value: 84.75
- type: mrr_at_3
value: 83.0
- type: mrr_at_5
value: 84.5
- type: ndcg_at_1
value: 71.0
- type: ndcg_at_10
value: 57.253
- type: ndcg_at_100
value: 44.383
- type: ndcg_at_1000
value: 38.666
- type: ndcg_at_3
value: 64.324
- type: ndcg_at_5
value: 60.791
- type: precision_at_1
value: 76.0
- type: precision_at_10
value: 59.599999999999994
- type: precision_at_100
value: 45.440000000000005
- type: precision_at_1000
value: 17.458000000000002
- type: precision_at_3
value: 69.333
- type: precision_at_5
value: 63.2
- type: recall_at_1
value: 0.2
- type: recall_at_10
value: 1.4949999999999999
- type: recall_at_100
value: 10.266
- type: recall_at_1000
value: 35.853
- type: recall_at_3
value: 0.5349999999999999
- type: recall_at_5
value: 0.8109999999999999
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.0140000000000002
- type: map_at_10
value: 8.474
- type: map_at_100
value: 14.058000000000002
- type: map_at_1000
value: 15.381
- type: map_at_3
value: 4.508
- type: map_at_5
value: 5.87
- type: mrr_at_1
value: 22.448999999999998
- type: mrr_at_10
value: 37.242
- type: mrr_at_100
value: 38.291
- type: mrr_at_1000
value: 38.311
- type: mrr_at_3
value: 32.312999999999995
- type: mrr_at_5
value: 34.762
- type: ndcg_at_1
value: 20.408
- type: ndcg_at_10
value: 20.729
- type: ndcg_at_100
value: 33.064
- type: ndcg_at_1000
value: 44.324999999999996
- type: ndcg_at_3
value: 21.251
- type: ndcg_at_5
value: 20.28
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 18.98
- type: precision_at_100
value: 7.224
- type: precision_at_1000
value: 1.471
- type: precision_at_3
value: 22.448999999999998
- type: precision_at_5
value: 20.816000000000003
- type: recall_at_1
value: 2.0140000000000002
- type: recall_at_10
value: 13.96
- type: recall_at_100
value: 44.187
- type: recall_at_1000
value: 79.328
- type: recall_at_3
value: 5.345
- type: recall_at_5
value: 7.979
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.1312
- type: ap
value: 12.606776505497608
- type: f1
value: 52.4112415600534
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 58.16072439162422
- type: f1
value: 58.29152785435414
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 40.421119289825924
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.48012159504083
- type: cos_sim_ap
value: 72.31974877212102
- type: cos_sim_f1
value: 67.96846573681019
- type: cos_sim_precision
value: 62.89562289562289
- type: cos_sim_recall
value: 73.93139841688654
- type: dot_accuracy
value: 78.52416999463551
- type: dot_ap
value: 43.65271285411479
- type: dot_f1
value: 46.94641449960599
- type: dot_precision
value: 37.456774599182644
- type: dot_recall
value: 62.875989445910285
- type: euclidean_accuracy
value: 83.90057817249806
- type: euclidean_ap
value: 65.96278727778665
- type: euclidean_f1
value: 63.35733232284957
- type: euclidean_precision
value: 60.770535497940394
- type: euclidean_recall
value: 66.17414248021109
- type: manhattan_accuracy
value: 83.96614412588663
- type: manhattan_ap
value: 66.03670273156699
- type: manhattan_f1
value: 63.49128406579917
- type: manhattan_precision
value: 59.366391184573
- type: manhattan_recall
value: 68.23218997361478
- type: max_accuracy
value: 85.48012159504083
- type: max_ap
value: 72.31974877212102
- type: max_f1
value: 67.96846573681019
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.97038848139093
- type: cos_sim_ap
value: 85.982764495556
- type: cos_sim_f1
value: 78.73283281450284
- type: cos_sim_precision
value: 75.07857791436754
- type: cos_sim_recall
value: 82.7610101632276
- type: dot_accuracy
value: 83.21108394458028
- type: dot_ap
value: 70.97956937273386
- type: dot_f1
value: 66.53083038279111
- type: dot_precision
value: 58.7551622418879
- type: dot_recall
value: 76.67847243609486
- type: euclidean_accuracy
value: 84.31520937633407
- type: euclidean_ap
value: 74.67323411319909
- type: euclidean_f1
value: 67.21935410935676
- type: euclidean_precision
value: 65.82773636430733
- type: euclidean_recall
value: 68.67108099784416
- type: manhattan_accuracy
value: 84.35013777312066
- type: manhattan_ap
value: 74.66508905354597
- type: manhattan_f1
value: 67.28264162375038
- type: manhattan_precision
value: 66.19970193740686
- type: manhattan_recall
value: 68.40160147828766
- type: max_accuracy
value: 88.97038848139093
- type: max_ap
value: 85.982764495556
- type: max_f1
value: 78.73283281450284
---
<br><br>
<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a></b>
</p>
## Intended Usage & Model Info
`jina-embedding-l-en-v1` is a language model trained on Jina AI's Linnaeus-Clean dataset.
This dataset consists of 380 million sentence pairs, including query-document pairs.
These pairs were drawn from a variety of domains and carefully selected through a thorough cleaning process.
The Linnaeus-Full dataset, from which Linnaeus-Clean is derived, originally contained 1.6 billion sentence pairs.
The model covers a range of use cases, including information retrieval, semantic textual similarity, text reranking, and more.
With 330 million parameters,
the model enables single-GPU inference while delivering better performance than our small and base models.
Additionally, we provide the following options:
- [`jina-embedding-t-en-v1`](https://huggingface.co/jinaai/jina-embedding-t-en-v1): 14 million parameters.
- [`jina-embedding-s-en-v1`](https://huggingface.co/jinaai/jina-embedding-s-en-v1): 35 million parameters.
- [`jina-embedding-b-en-v1`](https://huggingface.co/jinaai/jina-embedding-b-en-v1): 110 million parameters.
- [`jina-embedding-l-en-v1`](https://huggingface.co/jinaai/jina-embedding-l-en-v1): 330 million parameters **(you are here)**.
- `jina-embedding-1b-en-v1`: 1.2 billion parameters, 10 times bert-base (soon).
- `jina-embedding-6b-en-v1`: 6 billion parameters, 30 times bert-base (soon).
## Data & Parameters
Please check out our [technical report](https://arxiv.org/abs/2307.11224).
## Metrics
We compared the model against `all-minilm-l6-v2`/`all-mpnet-base-v2` from SBERT and `text-embedding-ada-002` from OpenAI:
|Name|Parameters|Dimension|
|------------------------------|-----|------|
|all-minilm-l6-v2|23m |384|
|all-mpnet-base-v2 |110m |768|
|ada-embedding-002|Unknown/OpenAI API |1536|
|jina-embedding-t-en-v1|14m |312|
|jina-embedding-s-en-v1|35m |512|
|jina-embedding-b-en-v1|110m |768|
|jina-embedding-l-en-v1|330m |1024|
|Name|STS12|STS13|STS14|STS15|STS16|STS17|TRECOVID|Quora|SciFact|
|------------------------------|-----|-----|-----|-----|-----|-----|--------|-----|-----|
|all-minilm-l6-v2|0.724|0.806|0.756|0.854|0.79 |0.876|0.473 |0.876|0.645 |
|all-mpnet-base-v2|0.726|**0.835**|0.78 |0.857|0.8 |**0.906**|0.513 |0.875|0.656 |
|ada-embedding-002|0.698|0.833|0.761|0.861|**0.86** |0.903|**0.685** |0.876|**0.726** |
|jina-embedding-t-en-v1|0.717|0.773|0.731|0.829|0.777|0.860|0.482 |0.840|0.522 |
|jina-embedding-s-en-v1|0.743|0.786|0.738|0.837|0.80|0.875|0.523 |0.857|0.524 |
|jina-embedding-b-en-v1|**0.751**|0.809|0.761|0.856|0.812|0.890|0.606 |0.876|0.594 |
|jina-embedding-l-en-v1|0.745|0.832|**0.781**|**0.869**|0.837|0.902|0.573 |**0.881**|0.598 |
## Usage
Use with Jina AI Finetuner:
```python
# Install first: pip install finetuner
import finetuner

model = finetuner.build_model('jinaai/jina-embedding-l-en-v1')
embeddings = finetuner.encode(
    model=model,
    data=['how is the weather today', 'What is the current weather like today?']
)
# Cosine similarity between the two sentence embeddings
print(finetuner.cos_sim(embeddings[0], embeddings[1]))
```
Use with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['how is the weather today', 'What is the current weather like today?']
model = SentenceTransformer('jinaai/jina-embedding-l-en-v1')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
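Because information retrieval is one of the intended use cases listed above, here is a minimal semantic-search sketch built on the same sentence-transformers interface. The query and corpus strings are made up purely for illustration.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Rank a tiny, illustrative corpus against a query by cosine similarity
# of the sentence embeddings.
model = SentenceTransformer('jinaai/jina-embedding-l-en-v1')

query = 'how is the weather today'
corpus = [
    'The forecast predicts light rain this afternoon.',
    'Stock markets closed higher on Friday.',
    'Current conditions are sunny with a high of 25 degrees.',
]

query_emb = model.encode(query)
corpus_emb = model.encode(corpus)

# cos_sim returns a (1, len(corpus)) similarity matrix; take the first row
# and sort documents by score, highest first.
scores = cos_sim(query_emb, corpus_emb)[0]
ranked = sorted(zip(corpus, scores.tolist()), key=lambda x: x[1], reverse=True)
for doc, score in ranked:
    print(f'{score:.3f}\t{doc}')
```
For a real corpus, the document embeddings would typically be computed once and cached rather than re-encoded for every query.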
## Fine-tuning
Please consider [Finetuner](https://github.com/jina-ai/finetuner).
## Plans
1. The development of `jina-embedding-s-en-v2` is currently underway with two main objectives: improving performance and increasing the maximum sequence length.
2. We are currently working on a bilingual embedding model that combines English with a second language. The upcoming model will be called `jina-embedding-s/b/l-de-v1`.
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```latex
@misc{günther2023jina,
title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models},
author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao},
year={2023},
eprint={2307.11224},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"BIOSSES",
"LINNAEUS",
"SCIFACT"
] | Non_BioNLP |
jstephencorey/pythia-14m-embedding | jstephencorey | null | [
"region:us"
] | 1,703,713,513,000 | 2024-02-05T20:12:28 | 0 | 0 | ---
{}
---
---
tags:
- mteb
model-index:
- name: pythia-14m_mean
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 70.73134328358208
- type: ap
value: 32.35996836729783
- type: f1
value: 64.2137087561157
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (de)
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 62.291220556745174
- type: ap
value: 76.5427302441011
- type: f1
value: 60.37703210343267
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en-ext)
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 67.57871064467767
- type: ap
value: 17.03033311712744
- type: f1
value: 54.821750631894986
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (ja)
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 62.51605995717344
- type: ap
value: 14.367489440317666
- type: f1
value: 50.48473578289779
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 57.567425000000014
- type: ap
value: 54.53026421737829
- type: f1
value: 56.60093061259046
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 29.172000000000004
- type: f1
value: 28.264998641170465
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (de)
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 25.157999999999998
- type: f1
value: 23.033533062569987
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (es)
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 26.840000000000003
- type: f1
value: 25.693413738086402
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (fr)
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 26.491999999999997
- type: f1
value: 25.6252880863665
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (ja)
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 24.448000000000004
- type: f1
value: 23.86460242225935
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 26.412000000000003
- type: f1
value: 25.779710231390755
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.761
- type: map_at_10
value: 10.267
- type: map_at_100
value: 11.065999999999999
- type: map_at_1000
value: 11.16
- type: map_at_3
value: 8.642
- type: map_at_5
value: 9.474
- type: mrr_at_1
value: 6.046
- type: mrr_at_10
value: 10.365
- type: mrr_at_100
value: 11.178
- type: mrr_at_1000
value: 11.272
- type: mrr_at_3
value: 8.713
- type: mrr_at_5
value: 9.587
- type: ndcg_at_1
value: 5.761
- type: ndcg_at_10
value: 13.055
- type: ndcg_at_100
value: 17.526
- type: ndcg_at_1000
value: 20.578
- type: ndcg_at_3
value: 9.616
- type: ndcg_at_5
value: 11.128
- type: precision_at_1
value: 5.761
- type: precision_at_10
value: 2.212
- type: precision_at_100
value: 0.44400000000000006
- type: precision_at_1000
value: 0.06999999999999999
- type: precision_at_3
value: 4.149
- type: precision_at_5
value: 3.229
- type: recall_at_1
value: 5.761
- type: recall_at_10
value: 22.119
- type: recall_at_100
value: 44.381
- type: recall_at_1000
value: 69.70100000000001
- type: recall_at_3
value: 12.447
- type: recall_at_5
value: 16.145
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 25.92658946113241
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 13.902183567893395
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 47.93210378051478
- type: mrr
value: 60.70318339708921
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 49.57650220181508
- type: cos_sim_spearman
value: 51.842145113866636
- type: euclidean_pearson
value: 41.2188173176347
- type: euclidean_spearman
value: 41.16840792962046
- type: manhattan_pearson
value: 42.73893519020435
- type: manhattan_spearman
value: 44.384746276312534
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 46.03896103896104
- type: f1
value: 44.54083818845286
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 23.113393015706908
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 12.624675113307488
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.105
- type: map_at_10
value: 13.364
- type: map_at_100
value: 13.987
- type: map_at_1000
value: 14.08
- type: map_at_3
value: 12.447
- type: map_at_5
value: 12.992999999999999
- type: mrr_at_1
value: 12.876000000000001
- type: mrr_at_10
value: 16.252
- type: mrr_at_100
value: 16.926
- type: mrr_at_1000
value: 17.004
- type: mrr_at_3
value: 15.235999999999999
- type: mrr_at_5
value: 15.744
- type: ndcg_at_1
value: 12.876000000000001
- type: ndcg_at_10
value: 15.634999999999998
- type: ndcg_at_100
value: 19.173000000000002
- type: ndcg_at_1000
value: 22.168
- type: ndcg_at_3
value: 14.116999999999999
- type: ndcg_at_5
value: 14.767
- type: precision_at_1
value: 12.876000000000001
- type: precision_at_10
value: 2.761
- type: precision_at_100
value: 0.5579999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 6.676
- type: precision_at_5
value: 4.635
- type: recall_at_1
value: 10.105
- type: recall_at_10
value: 19.767000000000003
- type: recall_at_100
value: 36.448
- type: recall_at_1000
value: 58.623000000000005
- type: recall_at_3
value: 15.087
- type: recall_at_5
value: 17.076
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.249999999999999
- type: map_at_10
value: 9.41
- type: map_at_100
value: 9.903
- type: map_at_1000
value: 9.993
- type: map_at_3
value: 8.693
- type: map_at_5
value: 9.052
- type: mrr_at_1
value: 9.299
- type: mrr_at_10
value: 11.907
- type: mrr_at_100
value: 12.424
- type: mrr_at_1000
value: 12.503
- type: mrr_at_3
value: 10.945
- type: mrr_at_5
value: 11.413
- type: ndcg_at_1
value: 9.299
- type: ndcg_at_10
value: 11.278
- type: ndcg_at_100
value: 13.904
- type: ndcg_at_1000
value: 16.642000000000003
- type: ndcg_at_3
value: 9.956
- type: ndcg_at_5
value: 10.488
- type: precision_at_1
value: 9.299
- type: precision_at_10
value: 2.166
- type: precision_at_100
value: 0.45399999999999996
- type: precision_at_1000
value: 0.089
- type: precision_at_3
value: 4.798
- type: precision_at_5
value: 3.427
- type: recall_at_1
value: 7.249999999999999
- type: recall_at_10
value: 14.285
- type: recall_at_100
value: 26.588
- type: recall_at_1000
value: 46.488
- type: recall_at_3
value: 10.309
- type: recall_at_5
value: 11.756
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.57
- type: map_at_10
value: 15.497
- type: map_at_100
value: 16.036
- type: map_at_1000
value: 16.122
- type: map_at_3
value: 14.309
- type: map_at_5
value: 14.895
- type: mrr_at_1
value: 13.354
- type: mrr_at_10
value: 17.408
- type: mrr_at_100
value: 17.936
- type: mrr_at_1000
value: 18.015
- type: mrr_at_3
value: 16.123
- type: mrr_at_5
value: 16.735
- type: ndcg_at_1
value: 13.354
- type: ndcg_at_10
value: 18.071
- type: ndcg_at_100
value: 21.017
- type: ndcg_at_1000
value: 23.669999999999998
- type: ndcg_at_3
value: 15.644
- type: ndcg_at_5
value: 16.618
- type: precision_at_1
value: 13.354
- type: precision_at_10
value: 2.94
- type: precision_at_100
value: 0.481
- type: precision_at_1000
value: 0.076
- type: precision_at_3
value: 7.001
- type: precision_at_5
value: 4.765
- type: recall_at_1
value: 11.57
- type: recall_at_10
value: 24.147
- type: recall_at_100
value: 38.045
- type: recall_at_1000
value: 58.648
- type: recall_at_3
value: 17.419999999999998
- type: recall_at_5
value: 19.875999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.463
- type: map_at_10
value: 6.091
- type: map_at_100
value: 6.548
- type: map_at_1000
value: 6.622
- type: map_at_3
value: 5.461
- type: map_at_5
value: 5.768
- type: mrr_at_1
value: 4.746
- type: mrr_at_10
value: 6.431000000000001
- type: mrr_at_100
value: 6.941
- type: mrr_at_1000
value: 7.016
- type: mrr_at_3
value: 5.763
- type: mrr_at_5
value: 6.101999999999999
- type: ndcg_at_1
value: 4.746
- type: ndcg_at_10
value: 7.19
- type: ndcg_at_100
value: 9.604
- type: ndcg_at_1000
value: 12.086
- type: ndcg_at_3
value: 5.88
- type: ndcg_at_5
value: 6.429
- type: precision_at_1
value: 4.746
- type: precision_at_10
value: 1.141
- type: precision_at_100
value: 0.249
- type: precision_at_1000
value: 0.049
- type: precision_at_3
value: 2.448
- type: precision_at_5
value: 1.7850000000000001
- type: recall_at_1
value: 4.463
- type: recall_at_10
value: 10.33
- type: recall_at_100
value: 21.578
- type: recall_at_1000
value: 41.404
- type: recall_at_3
value: 6.816999999999999
- type: recall_at_5
value: 8.06
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.521
- type: map_at_10
value: 2.439
- type: map_at_100
value: 2.785
- type: map_at_1000
value: 2.858
- type: map_at_3
value: 2.091
- type: map_at_5
value: 2.2560000000000002
- type: mrr_at_1
value: 2.114
- type: mrr_at_10
value: 3.216
- type: mrr_at_100
value: 3.6319999999999997
- type: mrr_at_1000
value: 3.712
- type: mrr_at_3
value: 2.778
- type: mrr_at_5
value: 2.971
- type: ndcg_at_1
value: 2.114
- type: ndcg_at_10
value: 3.1910000000000003
- type: ndcg_at_100
value: 5.165
- type: ndcg_at_1000
value: 7.607
- type: ndcg_at_3
value: 2.456
- type: ndcg_at_5
value: 2.7439999999999998
- type: precision_at_1
value: 2.114
- type: precision_at_10
value: 0.634
- type: precision_at_100
value: 0.189
- type: precision_at_1000
value: 0.049
- type: precision_at_3
value: 1.202
- type: precision_at_5
value: 0.8959999999999999
- type: recall_at_1
value: 1.521
- type: recall_at_10
value: 4.8
- type: recall_at_100
value: 13.877
- type: recall_at_1000
value: 32.1
- type: recall_at_3
value: 2.806
- type: recall_at_5
value: 3.5520000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.449999999999999
- type: map_at_10
value: 10.065
- type: map_at_100
value: 10.507
- type: map_at_1000
value: 10.599
- type: map_at_3
value: 9.017
- type: map_at_5
value: 9.603
- type: mrr_at_1
value: 9.336
- type: mrr_at_10
value: 12.589
- type: mrr_at_100
value: 13.086
- type: mrr_at_1000
value: 13.161000000000001
- type: mrr_at_3
value: 11.373
- type: mrr_at_5
value: 12.084999999999999
- type: ndcg_at_1
value: 9.336
- type: ndcg_at_10
value: 12.299
- type: ndcg_at_100
value: 14.780999999999999
- type: ndcg_at_1000
value: 17.632
- type: ndcg_at_3
value: 10.302
- type: ndcg_at_5
value: 11.247
- type: precision_at_1
value: 9.336
- type: precision_at_10
value: 2.271
- type: precision_at_100
value: 0.42300000000000004
- type: precision_at_1000
value: 0.08099999999999999
- type: precision_at_3
value: 4.909
- type: precision_at_5
value: 3.5999999999999996
- type: recall_at_1
value: 7.449999999999999
- type: recall_at_10
value: 16.891000000000002
- type: recall_at_100
value: 28.050000000000004
- type: recall_at_1000
value: 49.267
- type: recall_at_3
value: 11.187999999999999
- type: recall_at_5
value: 13.587
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.734
- type: map_at_10
value: 7.045999999999999
- type: map_at_100
value: 7.564
- type: map_at_1000
value: 7.6499999999999995
- type: map_at_3
value: 6.21
- type: map_at_5
value: 6.617000000000001
- type: mrr_at_1
value: 5.936
- type: mrr_at_10
value: 8.624
- type: mrr_at_100
value: 9.193
- type: mrr_at_1000
value: 9.28
- type: mrr_at_3
value: 7.725
- type: mrr_at_5
value: 8.147
- type: ndcg_at_1
value: 5.936
- type: ndcg_at_10
value: 8.81
- type: ndcg_at_100
value: 11.694
- type: ndcg_at_1000
value: 14.526
- type: ndcg_at_3
value: 7.140000000000001
- type: ndcg_at_5
value: 7.8020000000000005
- type: precision_at_1
value: 5.936
- type: precision_at_10
value: 1.701
- type: precision_at_100
value: 0.366
- type: precision_at_1000
value: 0.07200000000000001
- type: precision_at_3
value: 3.463
- type: precision_at_5
value: 2.557
- type: recall_at_1
value: 4.734
- type: recall_at_10
value: 12.733
- type: recall_at_100
value: 25.982
- type: recall_at_1000
value: 47.233999999999995
- type: recall_at_3
value: 8.018
- type: recall_at_5
value: 9.762
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.293
- type: map_at_10
value: 6.146999999999999
- type: map_at_100
value: 6.487
- type: map_at_1000
value: 6.544999999999999
- type: map_at_3
value: 5.6930000000000005
- type: map_at_5
value: 5.869
- type: mrr_at_1
value: 5.061
- type: mrr_at_10
value: 7.1690000000000005
- type: mrr_at_100
value: 7.542
- type: mrr_at_1000
value: 7.5969999999999995
- type: mrr_at_3
value: 6.646000000000001
- type: mrr_at_5
value: 6.8229999999999995
- type: ndcg_at_1
value: 5.061
- type: ndcg_at_10
value: 7.396
- type: ndcg_at_100
value: 9.41
- type: ndcg_at_1000
value: 11.386000000000001
- type: ndcg_at_3
value: 6.454
- type: ndcg_at_5
value: 6.718
- type: precision_at_1
value: 5.061
- type: precision_at_10
value: 1.319
- type: precision_at_100
value: 0.262
- type: precision_at_1000
value: 0.047
- type: precision_at_3
value: 3.0669999999999997
- type: precision_at_5
value: 1.994
- type: recall_at_1
value: 4.293
- type: recall_at_10
value: 10.221
- type: recall_at_100
value: 19.744999999999997
- type: recall_at_1000
value: 35.399
- type: recall_at_3
value: 7.507999999999999
- type: recall_at_5
value: 8.275
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.519
- type: map_at_10
value: 4.768
- type: map_at_100
value: 5.034000000000001
- type: map_at_1000
value: 5.087
- type: map_at_3
value: 4.308
- type: map_at_5
value: 4.565
- type: mrr_at_1
value: 4.474
- type: mrr_at_10
value: 6.045
- type: mrr_at_100
value: 6.361999999999999
- type: mrr_at_1000
value: 6.417000000000001
- type: mrr_at_3
value: 5.483
- type: mrr_at_5
value: 5.81
- type: ndcg_at_1
value: 4.474
- type: ndcg_at_10
value: 5.799
- type: ndcg_at_100
value: 7.344
- type: ndcg_at_1000
value: 9.141
- type: ndcg_at_3
value: 4.893
- type: ndcg_at_5
value: 5.309
- type: precision_at_1
value: 4.474
- type: precision_at_10
value: 1.06
- type: precision_at_100
value: 0.217
- type: precision_at_1000
value: 0.045
- type: precision_at_3
value: 2.306
- type: precision_at_5
value: 1.7000000000000002
- type: recall_at_1
value: 3.519
- type: recall_at_10
value: 7.75
- type: recall_at_100
value: 15.049999999999999
- type: recall_at_1000
value: 28.779
- type: recall_at_3
value: 5.18
- type: recall_at_5
value: 6.245
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.098
- type: map_at_10
value: 7.918
- type: map_at_100
value: 8.229000000000001
- type: map_at_1000
value: 8.293000000000001
- type: map_at_3
value: 7.138999999999999
- type: map_at_5
value: 7.646
- type: mrr_at_1
value: 7.090000000000001
- type: mrr_at_10
value: 9.293
- type: mrr_at_100
value: 9.669
- type: mrr_at_1000
value: 9.734
- type: mrr_at_3
value: 8.364
- type: mrr_at_5
value: 8.956999999999999
- type: ndcg_at_1
value: 7.090000000000001
- type: ndcg_at_10
value: 9.411999999999999
- type: ndcg_at_100
value: 11.318999999999999
- type: ndcg_at_1000
value: 13.478000000000002
- type: ndcg_at_3
value: 7.837
- type: ndcg_at_5
value: 8.73
- type: precision_at_1
value: 7.090000000000001
- type: precision_at_10
value: 1.558
- type: precision_at_100
value: 0.28400000000000003
- type: precision_at_1000
value: 0.053
- type: precision_at_3
value: 3.42
- type: precision_at_5
value: 2.5749999999999997
- type: recall_at_1
value: 6.098
- type: recall_at_10
value: 12.764000000000001
- type: recall_at_100
value: 21.747
- type: recall_at_1000
value: 38.279999999999994
- type: recall_at_3
value: 8.476
- type: recall_at_5
value: 10.707
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.607
- type: map_at_10
value: 10.835
- type: map_at_100
value: 11.285
- type: map_at_1000
value: 11.383000000000001
- type: map_at_3
value: 10.111
- type: map_at_5
value: 10.334999999999999
- type: mrr_at_1
value: 10.671999999999999
- type: mrr_at_10
value: 13.269
- type: mrr_at_100
value: 13.729
- type: mrr_at_1000
value: 13.813
- type: mrr_at_3
value: 12.385
- type: mrr_at_5
value: 12.701
- type: ndcg_at_1
value: 10.671999999999999
- type: ndcg_at_10
value: 12.728
- type: ndcg_at_100
value: 15.312999999999999
- type: ndcg_at_1000
value: 18.160999999999998
- type: ndcg_at_3
value: 11.355
- type: ndcg_at_5
value: 11.605
- type: precision_at_1
value: 10.671999999999999
- type: precision_at_10
value: 2.154
- type: precision_at_100
value: 0.455
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 4.941
- type: precision_at_5
value: 3.2809999999999997
- type: recall_at_1
value: 8.607
- type: recall_at_10
value: 16.398
- type: recall_at_100
value: 28.92
- type: recall_at_1000
value: 49.761
- type: recall_at_3
value: 11.844000000000001
- type: recall_at_5
value: 12.792
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.826
- type: map_at_10
value: 5.6419999999999995
- type: map_at_100
value: 5.943
- type: map_at_1000
value: 6.005
- type: map_at_3
value: 5.1049999999999995
- type: map_at_5
value: 5.437
- type: mrr_at_1
value: 4.436
- type: mrr_at_10
value: 6.413
- type: mrr_at_100
value: 6.752
- type: mrr_at_1000
value: 6.819999999999999
- type: mrr_at_3
value: 5.884
- type: mrr_at_5
value: 6.18
- type: ndcg_at_1
value: 4.436
- type: ndcg_at_10
value: 6.7989999999999995
- type: ndcg_at_100
value: 8.619
- type: ndcg_at_1000
value: 10.842
- type: ndcg_at_3
value: 5.739
- type: ndcg_at_5
value: 6.292000000000001
- type: precision_at_1
value: 4.436
- type: precision_at_10
value: 1.109
- type: precision_at_100
value: 0.214
- type: precision_at_1000
value: 0.043
- type: precision_at_3
value: 2.588
- type: precision_at_5
value: 1.848
- type: recall_at_1
value: 3.826
- type: recall_at_10
value: 9.655
- type: recall_at_100
value: 18.611
- type: recall_at_1000
value: 36.733
- type: recall_at_3
value: 6.784
- type: recall_at_5
value: 8.17
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.09
- type: map_at_10
value: 3.469
- type: map_at_100
value: 3.93
- type: map_at_1000
value: 4.018
- type: map_at_3
value: 2.8209999999999997
- type: map_at_5
value: 3.144
- type: mrr_at_1
value: 4.756
- type: mrr_at_10
value: 7.853000000000001
- type: mrr_at_100
value: 8.547
- type: mrr_at_1000
value: 8.631
- type: mrr_at_3
value: 6.569
- type: mrr_at_5
value: 7.249999999999999
- type: ndcg_at_1
value: 4.756
- type: ndcg_at_10
value: 5.494000000000001
- type: ndcg_at_100
value: 8.275
- type: ndcg_at_1000
value: 10.892
- type: ndcg_at_3
value: 4.091
- type: ndcg_at_5
value: 4.588
- type: precision_at_1
value: 4.756
- type: precision_at_10
value: 1.8370000000000002
- type: precision_at_100
value: 0.475
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 3.018
- type: precision_at_5
value: 2.528
- type: recall_at_1
value: 2.09
- type: recall_at_10
value: 7.127
- type: recall_at_100
value: 17.483999999999998
- type: recall_at_1000
value: 33.353
- type: recall_at_3
value: 3.742
- type: recall_at_5
value: 5.041
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.573
- type: map_at_10
value: 1.282
- type: map_at_100
value: 1.625
- type: map_at_1000
value: 1.71
- type: map_at_3
value: 1.0
- type: map_at_5
value: 1.135
- type: mrr_at_1
value: 7.000000000000001
- type: mrr_at_10
value: 11.084
- type: mrr_at_100
value: 11.634
- type: mrr_at_1000
value: 11.715
- type: mrr_at_3
value: 9.792
- type: mrr_at_5
value: 10.404
- type: ndcg_at_1
value: 4.375
- type: ndcg_at_10
value: 3.7800000000000002
- type: ndcg_at_100
value: 4.353
- type: ndcg_at_1000
value: 6.087
- type: ndcg_at_3
value: 4.258
- type: ndcg_at_5
value: 3.988
- type: precision_at_1
value: 7.000000000000001
- type: precision_at_10
value: 3.35
- type: precision_at_100
value: 1.057
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 5.75
- type: precision_at_5
value: 4.6
- type: recall_at_1
value: 0.573
- type: recall_at_10
value: 2.464
- type: recall_at_100
value: 5.6770000000000005
- type: recall_at_1000
value: 12.516
- type: recall_at_3
value: 1.405
- type: recall_at_5
value: 1.807
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 23.279999999999998
- type: f1
value: 19.87865985032945
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.145
- type: map_at_10
value: 4.721
- type: map_at_100
value: 5.086
- type: map_at_1000
value: 5.142
- type: map_at_3
value: 4.107
- type: map_at_5
value: 4.45
- type: mrr_at_1
value: 3.27
- type: mrr_at_10
value: 4.958
- type: mrr_at_100
value: 5.35
- type: mrr_at_1000
value: 5.409
- type: mrr_at_3
value: 4.303
- type: mrr_at_5
value: 4.6739999999999995
- type: ndcg_at_1
value: 3.27
- type: ndcg_at_10
value: 5.768
- type: ndcg_at_100
value: 7.854
- type: ndcg_at_1000
value: 9.729000000000001
- type: ndcg_at_3
value: 4.476
- type: ndcg_at_5
value: 5.102
- type: precision_at_1
value: 3.27
- type: precision_at_10
value: 0.942
- type: precision_at_100
value: 0.20600000000000002
- type: precision_at_1000
value: 0.038
- type: precision_at_3
value: 1.8849999999999998
- type: precision_at_5
value: 1.455
- type: recall_at_1
value: 3.145
- type: recall_at_10
value: 8.889
- type: recall_at_100
value: 19.092000000000002
- type: recall_at_1000
value: 34.35
- type: recall_at_3
value: 5.353
- type: recall_at_5
value: 6.836
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.166
- type: map_at_10
value: 2.283
- type: map_at_100
value: 2.564
- type: map_at_1000
value: 2.6519999999999997
- type: map_at_3
value: 1.867
- type: map_at_5
value: 2.0500000000000003
- type: mrr_at_1
value: 2.932
- type: mrr_at_10
value: 4.852
- type: mrr_at_100
value: 5.306
- type: mrr_at_1000
value: 5.4
- type: mrr_at_3
value: 4.141
- type: mrr_at_5
value: 4.457
- type: ndcg_at_1
value: 2.932
- type: ndcg_at_10
value: 3.5709999999999997
- type: ndcg_at_100
value: 5.489
- type: ndcg_at_1000
value: 8.309999999999999
- type: ndcg_at_3
value: 2.773
- type: ndcg_at_5
value: 2.979
- type: precision_at_1
value: 2.932
- type: precision_at_10
value: 1.049
- type: precision_at_100
value: 0.306
- type: precision_at_1000
value: 0.077
- type: precision_at_3
value: 1.8519999999999999
- type: precision_at_5
value: 1.389
- type: recall_at_1
value: 1.166
- type: recall_at_10
value: 5.178
- type: recall_at_100
value: 13.056999999999999
- type: recall_at_1000
value: 31.708
- type: recall_at_3
value: 2.714
- type: recall_at_5
value: 3.4909999999999997
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.138
- type: map_at_10
value: 8.212
- type: map_at_100
value: 8.548
- type: map_at_1000
value: 8.604000000000001
- type: map_at_3
value: 7.555000000000001
- type: map_at_5
value: 7.881
- type: mrr_at_1
value: 12.275
- type: mrr_at_10
value: 15.49
- type: mrr_at_100
value: 15.978
- type: mrr_at_1000
value: 16.043
- type: mrr_at_3
value: 14.488000000000001
- type: mrr_at_5
value: 14.975
- type: ndcg_at_1
value: 12.275
- type: ndcg_at_10
value: 11.078000000000001
- type: ndcg_at_100
value: 13.081999999999999
- type: ndcg_at_1000
value: 14.906
- type: ndcg_at_3
value: 9.574
- type: ndcg_at_5
value: 10.206999999999999
- type: precision_at_1
value: 12.275
- type: precision_at_10
value: 2.488
- type: precision_at_100
value: 0.41200000000000003
- type: precision_at_1000
value: 0.066
- type: precision_at_3
value: 5.991
- type: precision_at_5
value: 4.0969999999999995
- type: recall_at_1
value: 6.138
- type: recall_at_10
value: 12.438
- type: recall_at_100
value: 20.601
- type: recall_at_1000
value: 32.984
- type: recall_at_3
value: 8.987
- type: recall_at_5
value: 10.242999999999999
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 56.96359999999999
- type: ap
value: 54.16760114570921
- type: f1
value: 56.193845361069116
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 1.34
- type: map_at_10
value: 2.2190000000000003
- type: map_at_100
value: 2.427
- type: map_at_1000
value: 2.461
- type: map_at_3
value: 1.8610000000000002
- type: map_at_5
value: 2.0340000000000003
- type: mrr_at_1
value: 1.375
- type: mrr_at_10
value: 2.284
- type: mrr_at_100
value: 2.5
- type: mrr_at_1000
value: 2.535
- type: mrr_at_3
value: 1.913
- type: mrr_at_5
value: 2.094
- type: ndcg_at_1
value: 1.375
- type: ndcg_at_10
value: 2.838
- type: ndcg_at_100
value: 4.043
- type: ndcg_at_1000
value: 5.205
- type: ndcg_at_3
value: 2.0629999999999997
- type: ndcg_at_5
value: 2.387
- type: precision_at_1
value: 1.375
- type: precision_at_10
value: 0.496
- type: precision_at_100
value: 0.11399999999999999
- type: precision_at_1000
value: 0.022000000000000002
- type: precision_at_3
value: 0.898
- type: precision_at_5
value: 0.705
- type: recall_at_1
value: 1.34
- type: recall_at_10
value: 4.787
- type: recall_at_100
value: 10.759
- type: recall_at_1000
value: 20.362
- type: recall_at_3
value: 2.603
- type: recall_at_5
value: 3.398
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 64.39808481532147
- type: f1
value: 63.468270818712625
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (de)
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 53.961679346294744
- type: f1
value: 51.6707117653683
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (es)
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 57.018012008005336
- type: f1
value: 54.23413458037234
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (fr)
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 48.84434700908236
- type: f1
value: 46.48494180527987
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (hi)
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 39.7669415561133
- type: f1
value: 35.50974325529877
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (th)
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 42.589511754068724
- type: f1
value: 40.47244422785889
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 34.01276789785682
- type: f1
value: 21.256775922291286
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (de)
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 33.285432516201745
- type: f1
value: 19.841703666811565
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (es)
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 32.121414276184126
- type: f1
value: 19.34706868150749
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (fr)
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 26.088318196053866
- type: f1
value: 17.22608011891254
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (hi)
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 15.320903549659375
- type: f1
value: 9.62002916015258
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (th)
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 16.426763110307412
- type: f1
value: 11.023799171137183
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (af)
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 27.347007397444518
- type: f1
value: 25.503551916252842
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (am)
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 10.655682582380631
- type: f1
value: 9.141696317946996
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ar)
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 17.347007397444518
- type: f1
value: 15.345346511499534
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (az)
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 20.39004707464694
- type: f1
value: 21.129515472610237
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (bn)
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 14.082044384667114
- type: f1
value: 12.169922201279885
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (cy)
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 27.108271687962336
- type: f1
value: 25.449222444030063
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (da)
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 27.780766644250164
- type: f1
value: 26.96237025531764
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (de)
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 21.768661735036986
- type: f1
value: 22.377462868662263
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (el)
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 21.967047747141898
- type: f1
value: 22.427583602797057
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 33.221250840618694
- type: f1
value: 32.627621011904495
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (es)
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 27.047747141896426
- type: f1
value: 25.244455827652786
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fa)
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 18.850033624747812
- type: f1
value: 16.532690247057452
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fi)
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.934767989240083
- type: f1
value: 24.126974912341858
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (fr)
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.59179556153329
- type: f1
value: 23.97686173045838
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (he)
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 17.683254875588432
- type: f1
value: 15.217082232778534
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hi)
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 14.277067921990588
- type: f1
value: 13.06156794974721
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hu)
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.817081371889717
- type: f1
value: 24.79443526877249
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (hy)
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 12.326832548755885
- type: f1
value: 10.850963544530288
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (id)
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 28.244788164088764
- type: f1
value: 27.442212153664336
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (is)
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 26.0390047074647
- type: f1
value: 24.29180485465988
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (it)
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 26.099529253530594
- type: f1
value: 26.47963496597501
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ja)
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.383322125084057
- type: f1
value: 27.24527274982159
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (jv)
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 26.970410221923334
- type: f1
value: 24.710215925904627
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ka)
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 14.159381304640215
- type: f1
value: 12.262797179154113
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (km)
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 12.078009414929387
- type: f1
value: 10.85388698579932
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (kn)
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 10.144586415601884
- type: f1
value: 8.629316498328535
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ko)
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 23.799596503026226
- type: f1
value: 21.4839774342838
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (lv)
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 27.047747141896433
- type: f1
value: 25.86288514660441
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ml)
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 10.82380632145259
- type: f1
value: 9.568319030257811
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (mn)
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 14.435104236718225
- type: f1
value: 14.861584951252121
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ms)
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 26.83254875588433
- type: f1
value: 26.084749439191967
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (my)
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 13.792871553463351
- type: f1
value: 11.496310101617802
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nb)
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.292535305985208
- type: f1
value: 24.098118477282508
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (nl)
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 26.24075319435104
- type: f1
value: 25.259998680849815
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pl)
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 24.909213180901148
- type: f1
value: 23.684726718537476
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (pt)
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 27.310020174848688
- type: f1
value: 26.28728556240871
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ro)
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 21.930060524546064
- type: f1
value: 21.993435727293242
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ru)
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 25.773369199731
- type: f1
value: 23.857739120942632
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sl)
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 29.256893073301953
- type: f1
value: 26.6824726822139
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sq)
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 17.673167451244115
- type: f1
value: 18.74169530134767
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sv)
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 26.190316072629454
- type: f1
value: 26.000111409348474
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (sw)
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 28.903833221250842
- type: f1
value: 27.166958835143152
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ta)
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 13.517148621385342
- type: f1
value: 10.124568781522633
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (te)
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 12.414256893073302
- type: f1
value: 10.72905194223371
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (th)
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 17.330195023537325
- type: f1
value: 17.262535319973026
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tl)
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 22.310020174848688
- type: f1
value: 22.770175966437712
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (tr)
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 23.066577000672496
- type: f1
value: 22.50576183714979
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (ur)
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 12.501681237390722
- type: f1
value: 11.803223778323549
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (vi)
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 22.76059179556153
- type: f1
value: 21.55631039694901
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-CN)
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 34.92602555480834
- type: f1
value: 33.727974171773425
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (zh-TW)
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 28.365837256220576
- type: f1
value: 28.510022067424707
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (af)
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.02891728312038
- type: f1
value: 31.370502912090657
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (am)
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 19.330867518493612
- type: f1
value: 15.90055477686407
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ar)
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 23.880295897780766
- type: f1
value: 21.695586897546278
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (az)
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.853396099529256
- type: f1
value: 26.709933198206503
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (bn)
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 18.382649630127773
- type: f1
value: 16.772353498005437
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (cy)
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 35.40685944855414
- type: f1
value: 31.488014251203257
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (da)
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 34.4754539340955
- type: f1
value: 31.226537880607296
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (de)
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 30.0
- type: f1
value: 28.08153376359271
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (el)
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 26.879623402824482
- type: f1
value: 24.773267596830472
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.77605917955615
- type: f1
value: 41.60519309254586
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (es)
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.807666442501684
- type: f1
value: 30.250350082554473
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fa)
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 24.515803631472764
- type: f1
value: 21.726717236406927
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fi)
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 33.315400134498994
- type: f1
value: 29.353644700383935
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (fr)
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.42434431741762
- type: f1
value: 29.87668403274445
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (he)
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 24.40147948890384
- type: f1
value: 21.429656257015555
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hi)
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 21.435776731674512
- type: f1
value: 18.344740043506686
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hu)
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 31.533288500336248
- type: f1
value: 29.185785763469298
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (hy)
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 19.983187626092803
- type: f1
value: 17.088318438817677
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (id)
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.73369199731002
- type: f1
value: 31.37876294000571
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (is)
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 31.503026227303295
- type: f1
value: 28.944584739759726
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (it)
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 31.116341627437794
- type: f1
value: 29.410419968584232
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ja)
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 31.136516476126424
- type: f1
value: 29.6719090248999
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (jv)
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 34.3813046402152
- type: f1
value: 31.481374035431443
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ka)
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 18.3591123066577
- type: f1
value: 16.158113168004945
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (km)
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 19.250168123739073
- type: f1
value: 16.728590019294526
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (kn)
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 16.04236718224613
- type: f1
value: 14.304518042661318
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ko)
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.95427034297243
- type: f1
value: 26.14194694048272
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (lv)
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.286482851378615
- type: f1
value: 29.426003364232837
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ml)
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 18.762609280430397
- type: f1
value: 15.764539982254005
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (mn)
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 21.570275722932077
- type: f1
value: 18.98517402487539
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ms)
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 35.95158036314728
- type: f1
value: 33.45853327725729
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (my)
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 20.03026227303295
- type: f1
value: 16.864192363299615
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nb)
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.89172831203766
- type: f1
value: 29.90554132334421
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (nl)
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.36381977135171
- type: f1
value: 29.57501043505274
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pl)
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 31.856086079354405
- type: f1
value: 30.242758819443548
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (pt)
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 31.694687289845326
- type: f1
value: 29.495870419371684
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ro)
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 27.01412239408204
- type: f1
value: 26.44782204477556
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ru)
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 30.554808338937463
- type: f1
value: 27.93696283146029
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sl)
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 35.813718897108274
- type: f1
value: 33.021495683147286
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sq)
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 22.992602555480836
- type: f1
value: 21.524928515996447
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sv)
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 34.7074646940148
- type: f1
value: 31.54759971873104
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (sw)
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.240753194351036
- type: f1
value: 33.34397881816082
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ta)
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 19.741089441829185
- type: f1
value: 16.129268723975766
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (te)
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 17.54203093476799
- type: f1
value: 15.537381383894061
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (th)
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 22.817753866846
- type: f1
value: 20.72245485990428
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tl)
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.58439811701412
- type: f1
value: 26.88190194852028
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (tr)
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.60457296570275
- type: f1
value: 26.98989368733863
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (ur)
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 23.345662407531943
- type: f1
value: 19.75032390408514
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (vi)
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.71082716879624
- type: f1
value: 26.675920460240782
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-CN)
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.09549428379288
- type: f1
value: 41.275350430825675
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (zh-TW)
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 37.24277067921991
- type: f1
value: 35.65629114113254
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 22.08508717069763
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 16.58582885790446
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 26.730268595233923
- type: mrr
value: 27.065185919114704
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.2
- type: map_at_10
value: 1.6400000000000001
- type: map_at_100
value: 1.9789999999999999
- type: map_at_1000
value: 2.554
- type: map_at_3
value: 1.4449999999999998
- type: map_at_5
value: 1.533
- type: mrr_at_1
value: 6.811
- type: mrr_at_10
value: 11.068999999999999
- type: mrr_at_100
value: 12.454
- type: mrr_at_1000
value: 12.590000000000002
- type: mrr_at_3
value: 9.751999999999999
- type: mrr_at_5
value: 10.31
- type: ndcg_at_1
value: 6.3469999999999995
- type: ndcg_at_10
value: 4.941
- type: ndcg_at_100
value: 6.524000000000001
- type: ndcg_at_1000
value: 15.918
- type: ndcg_at_3
value: 5.959
- type: ndcg_at_5
value: 5.395
- type: precision_at_1
value: 6.811
- type: precision_at_10
value: 3.375
- type: precision_at_100
value: 2.0709999999999997
- type: precision_at_1000
value: 1.313
- type: precision_at_3
value: 5.47
- type: precision_at_5
value: 4.396
- type: recall_at_1
value: 1.2
- type: recall_at_10
value: 2.5909999999999997
- type: recall_at_100
value: 9.443999999999999
- type: recall_at_1000
value: 41.542
- type: recall_at_3
value: 1.702
- type: recall_at_5
value: 1.9879999999999998
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.214
- type: map_at_10
value: 2.067
- type: map_at_100
value: 2.2399999999999998
- type: map_at_1000
value: 2.2689999999999997
- type: map_at_3
value: 1.691
- type: map_at_5
value: 1.916
- type: mrr_at_1
value: 1.506
- type: mrr_at_10
value: 2.413
- type: mrr_at_100
value: 2.587
- type: mrr_at_1000
value: 2.616
- type: mrr_at_3
value: 2.023
- type: mrr_at_5
value: 2.246
- type: ndcg_at_1
value: 1.506
- type: ndcg_at_10
value: 2.703
- type: ndcg_at_100
value: 3.66
- type: ndcg_at_1000
value: 4.6
- type: ndcg_at_3
value: 1.9300000000000002
- type: ndcg_at_5
value: 2.33
- type: precision_at_1
value: 1.506
- type: precision_at_10
value: 0.539
- type: precision_at_100
value: 0.11
- type: precision_at_1000
value: 0.02
- type: precision_at_3
value: 0.9369999999999999
- type: precision_at_5
value: 0.7939999999999999
- type: recall_at_1
value: 1.214
- type: recall_at_10
value: 4.34
- type: recall_at_100
value: 8.905000000000001
- type: recall_at_1000
value: 16.416
- type: recall_at_3
value: 2.3009999999999997
- type: recall_at_5
value: 3.2489999999999997
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 45.708
- type: map_at_10
value: 55.131
- type: map_at_100
value: 55.935
- type: map_at_1000
value: 55.993
- type: map_at_3
value: 52.749
- type: map_at_5
value: 54.166000000000004
- type: mrr_at_1
value: 52.44
- type: mrr_at_10
value: 59.99
- type: mrr_at_100
value: 60.492999999999995
- type: mrr_at_1000
value: 60.522
- type: mrr_at_3
value: 58.285
- type: mrr_at_5
value: 59.305
- type: ndcg_at_1
value: 52.43
- type: ndcg_at_10
value: 59.873
- type: ndcg_at_100
value: 63.086
- type: ndcg_at_1000
value: 64.291
- type: ndcg_at_3
value: 56.291000000000004
- type: ndcg_at_5
value: 58.071
- type: precision_at_1
value: 52.43
- type: precision_at_10
value: 8.973
- type: precision_at_100
value: 1.161
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 24.177
- type: precision_at_5
value: 16.073999999999998
- type: recall_at_1
value: 45.708
- type: recall_at_10
value: 69.195
- type: recall_at_100
value: 82.812
- type: recall_at_1000
value: 91.136
- type: recall_at_3
value: 58.938
- type: recall_at_5
value: 63.787000000000006
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 13.142048230676806
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 26.06687178917052
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.46499999999999997
- type: map_at_10
value: 0.906
- type: map_at_100
value: 1.127
- type: map_at_1000
value: 1.203
- type: map_at_3
value: 0.72
- type: map_at_5
value: 0.814
- type: mrr_at_1
value: 2.3
- type: mrr_at_10
value: 3.733
- type: mrr_at_100
value: 4.295999999999999
- type: mrr_at_1000
value: 4.412
- type: mrr_at_3
value: 3.183
- type: mrr_at_5
value: 3.458
- type: ndcg_at_1
value: 2.3
- type: ndcg_at_10
value: 1.797
- type: ndcg_at_100
value: 3.376
- type: ndcg_at_1000
value: 6.143
- type: ndcg_at_3
value: 1.763
- type: ndcg_at_5
value: 1.5070000000000001
- type: precision_at_1
value: 2.3
- type: precision_at_10
value: 0.91
- type: precision_at_100
value: 0.32399999999999995
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 1.633
- type: precision_at_5
value: 1.3
- type: recall_at_1
value: 0.46499999999999997
- type: recall_at_10
value: 1.8499999999999999
- type: recall_at_100
value: 6.625
- type: recall_at_1000
value: 20.587
- type: recall_at_3
value: 0.9900000000000001
- type: recall_at_5
value: 1.315
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 60.78961481918511
- type: cos_sim_spearman
value: 54.92014630234372
- type: euclidean_pearson
value: 54.91456364340953
- type: euclidean_spearman
value: 50.95537043206628
- type: manhattan_pearson
value: 55.0450005071106
- type: manhattan_spearman
value: 51.227579527791654
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 43.73124494569395
- type: cos_sim_spearman
value: 43.07629933550637
- type: euclidean_pearson
value: 37.2529484210563
- type: euclidean_spearman
value: 36.68421330216546
- type: manhattan_pearson
value: 37.41673219009712
- type: manhattan_spearman
value: 36.92073705702668
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 57.17534157059787
- type: cos_sim_spearman
value: 56.86679858348438
- type: euclidean_pearson
value: 54.51552371857776
- type: euclidean_spearman
value: 53.80989851917749
- type: manhattan_pearson
value: 54.44486043632584
- type: manhattan_spearman
value: 53.83487353949481
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 52.319034960820375
- type: cos_sim_spearman
value: 50.89512224974754
- type: euclidean_pearson
value: 49.19308209408045
- type: euclidean_spearman
value: 47.45736923614355
- type: manhattan_pearson
value: 48.82127080055118
- type: manhattan_spearman
value: 47.20185686489298
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 61.57602956458427
- type: cos_sim_spearman
value: 62.894640061838956
- type: euclidean_pearson
value: 53.86893407586029
- type: euclidean_spearman
value: 54.68528520514299
- type: manhattan_pearson
value: 53.689614981956815
- type: manhattan_spearman
value: 54.51172839699876
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 56.2305694109318
- type: cos_sim_spearman
value: 57.885939000786045
- type: euclidean_pearson
value: 50.486043353701994
- type: euclidean_spearman
value: 50.4463227974027
- type: manhattan_pearson
value: 50.73317560427465
- type: manhattan_spearman
value: 50.81397877006027
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ko-ko)
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 55.52162058025664
- type: cos_sim_spearman
value: 59.02220327783535
- type: euclidean_pearson
value: 55.66332330866701
- type: euclidean_spearman
value: 56.829076266662206
- type: manhattan_pearson
value: 55.39181385186973
- type: manhattan_spearman
value: 56.607432176121144
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (ar-ar)
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 46.312186899914906
- type: cos_sim_spearman
value: 48.07172073934163
- type: euclidean_pearson
value: 46.957276350776695
- type: euclidean_spearman
value: 43.98800593212707
- type: manhattan_pearson
value: 46.910805787619914
- type: manhattan_spearman
value: 43.96662723946553
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-ar)
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 16.222172523403835
- type: cos_sim_spearman
value: 17.230258645779042
- type: euclidean_pearson
value: -6.781460243147299
- type: euclidean_spearman
value: -6.884123336780775
- type: manhattan_pearson
value: -4.369061881907372
- type: manhattan_spearman
value: -4.235845433380353
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-de)
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 7.462476431657987
- type: cos_sim_spearman
value: 5.875270645234161
- type: euclidean_pearson
value: -10.79494346180473
- type: euclidean_spearman
value: -11.704529023304776
- type: manhattan_pearson
value: -11.465867974964997
- type: manhattan_spearman
value: -12.428424608287173
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 61.46601840758559
- type: cos_sim_spearman
value: 65.69667638887147
- type: euclidean_pearson
value: 49.531065525619866
- type: euclidean_spearman
value: 53.880480167479725
- type: manhattan_pearson
value: 50.25462221374689
- type: manhattan_spearman
value: 54.22205494276401
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-tr)
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: -12.769479370624031
- type: cos_sim_spearman
value: -12.161427312728382
- type: euclidean_pearson
value: -27.950593491756536
- type: euclidean_spearman
value: -24.925281959398585
- type: manhattan_pearson
value: -25.98778888167475
- type: manhattan_spearman
value: -22.861942388867234
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-en)
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 2.1575763564561727
- type: cos_sim_spearman
value: 1.182204089411577
- type: euclidean_pearson
value: -10.389249806317189
- type: euclidean_spearman
value: -16.078659904264605
- type: manhattan_pearson
value: -9.674301846448607
- type: manhattan_spearman
value: -16.976576817518577
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (es-es)
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 66.16718583059163
- type: cos_sim_spearman
value: 69.95156267898052
- type: euclidean_pearson
value: 64.93174777029739
- type: euclidean_spearman
value: 66.21292533974568
- type: manhattan_pearson
value: 65.2578109632889
- type: manhattan_spearman
value: 66.21830865759128
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (fr-en)
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 0.1540829683540524
- type: cos_sim_spearman
value: -2.4072834011003987
- type: euclidean_pearson
value: -18.951775877513473
- type: euclidean_spearman
value: -18.393605606817527
- type: manhattan_pearson
value: -19.609633839454542
- type: manhattan_spearman
value: -19.276064769117912
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (it-en)
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: -4.22497246932717
- type: cos_sim_spearman
value: -5.747420352346977
- type: euclidean_pearson
value: -16.86351349130112
- type: euclidean_spearman
value: -16.555536618547382
- type: manhattan_pearson
value: -17.45445643482646
- type: manhattan_spearman
value: -17.97322953856309
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (nl-en)
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 8.559184021676034
- type: cos_sim_spearman
value: 5.600273352595882
- type: euclidean_pearson
value: -10.76482859283058
- type: euclidean_spearman
value: -9.575202768285926
- type: manhattan_pearson
value: -9.48508597350615
- type: manhattan_spearman
value: -9.33387861352172
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 30.260087169228978
- type: cos_sim_spearman
value: 43.264174903196015
- type: euclidean_pearson
value: 35.07785877281954
- type: euclidean_spearman
value: 43.41294719372452
- type: manhattan_pearson
value: 36.74996284702431
- type: manhattan_spearman
value: 43.53522851890142
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de)
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 5.58694979115026
- type: cos_sim_spearman
value: 32.80692337371332
- type: euclidean_pearson
value: 10.53180875461474
- type: euclidean_spearman
value: 31.105269938654033
- type: manhattan_pearson
value: 10.559778015974826
- type: manhattan_spearman
value: 31.452204563072044
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es)
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 10.593783873928478
- type: cos_sim_spearman
value: 50.397542574042006
- type: euclidean_pearson
value: 28.122179063209714
- type: euclidean_spearman
value: 50.72847867996529
- type: manhattan_pearson
value: 28.730690148465005
- type: manhattan_spearman
value: 51.019761292483366
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl)
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: -1.3049499265017876
- type: cos_sim_spearman
value: 16.347130048706084
- type: euclidean_pearson
value: 0.5710147274110128
- type: euclidean_spearman
value: 16.589843077857605
- type: manhattan_pearson
value: 1.1226404198336415
- type: manhattan_spearman
value: 16.410620108636557
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (tr)
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: -10.96861909019159
- type: cos_sim_spearman
value: 24.536979219880724
- type: euclidean_pearson
value: -1.3040190807315306
- type: euclidean_spearman
value: 25.061584673761928
- type: manhattan_pearson
value: -0.06525719745037804
- type: manhattan_spearman
value: 25.979295538386893
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ar)
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 1.0599417065503314
- type: cos_sim_spearman
value: 52.055853787103345
- type: euclidean_pearson
value: 23.666828441081776
- type: euclidean_spearman
value: 52.38656753170069
- type: manhattan_pearson
value: 23.398080463967215
- type: manhattan_spearman
value: 52.23849717509109
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (ru)
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: -2.847646040977239
- type: cos_sim_spearman
value: 40.5826838357407
- type: euclidean_pearson
value: 9.242304983683113
- type: euclidean_spearman
value: 40.35906851022345
- type: manhattan_pearson
value: 9.645663412799504
- type: manhattan_spearman
value: 40.78106154950966
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh)
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 17.761397832130992
- type: cos_sim_spearman
value: 59.98756452345925
- type: euclidean_pearson
value: 37.03125109036693
- type: euclidean_spearman
value: 59.58469212715707
- type: manhattan_pearson
value: 36.828102137170724
- type: manhattan_spearman
value: 59.07036501478588
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr)
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 22.281212883400205
- type: cos_sim_spearman
value: 48.27687537627578
- type: euclidean_pearson
value: 30.531395629285324
- type: euclidean_spearman
value: 50.349143748970384
- type: manhattan_pearson
value: 30.48762081986554
- type: manhattan_spearman
value: 50.66037165529169
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-en)
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 15.76679673990358
- type: cos_sim_spearman
value: 19.123349126370442
- type: euclidean_pearson
value: 19.21389203087116
- type: euclidean_spearman
value: 23.63276413160338
- type: manhattan_pearson
value: 18.789263824907053
- type: manhattan_spearman
value: 19.962703178974692
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-en)
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 11.024970397289941
- type: cos_sim_spearman
value: 13.530951900755017
- type: euclidean_pearson
value: 13.473514585343645
- type: euclidean_spearman
value: 16.754702023734914
- type: manhattan_pearson
value: 13.72847275970385
- type: manhattan_spearman
value: 16.673001637012348
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (it)
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 33.32761589409043
- type: cos_sim_spearman
value: 54.14305778960692
- type: euclidean_pearson
value: 45.30173241170555
- type: euclidean_spearman
value: 54.77422257007743
- type: manhattan_pearson
value: 45.41890064000217
- type: manhattan_spearman
value: 54.533788920795544
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (pl-en)
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 20.045210048995486
- type: cos_sim_spearman
value: 17.597101329633823
- type: euclidean_pearson
value: 32.531726142346145
- type: euclidean_spearman
value: 27.244772040848105
- type: manhattan_pearson
value: 32.74618458514601
- type: manhattan_spearman
value: 25.81220754539242
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (zh-en)
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: -13.832846350193021
- type: cos_sim_spearman
value: -8.406778050457863
- type: euclidean_pearson
value: -6.557254855697437
- type: euclidean_spearman
value: -3.5112770921588563
- type: manhattan_pearson
value: -6.493730738275641
- type: manhattan_spearman
value: -2.5922348401468365
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (es-it)
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 26.357929743436664
- type: cos_sim_spearman
value: 37.3417709718339
- type: euclidean_pearson
value: 30.930792572341293
- type: euclidean_spearman
value: 36.061866364725795
- type: manhattan_pearson
value: 31.56982745863155
- type: manhattan_spearman
value: 37.18529502311113
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-fr)
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 9.310102041071547
- type: cos_sim_spearman
value: 10.907002693108673
- type: euclidean_pearson
value: 7.361793742296021
- type: euclidean_spearman
value: 9.53967881391466
- type: manhattan_pearson
value: 8.017048631719996
- type: manhattan_spearman
value: 13.537860190039725
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (de-pl)
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: -5.534456407419709
- type: cos_sim_spearman
value: 17.552638994787724
- type: euclidean_pearson
value: -10.136558594355556
- type: euclidean_spearman
value: 11.055083156366303
- type: manhattan_pearson
value: -11.799223055640773
- type: manhattan_spearman
value: 1.416528760982869
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (fr-pl)
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.64639760720344
- type: cos_sim_spearman
value: 39.440531887330785
- type: euclidean_pearson
value: 37.75527464173489
- type: euclidean_spearman
value: 39.440531887330785
- type: manhattan_pearson
value: 32.324715276369474
- type: manhattan_spearman
value: 28.17180849095055
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 44.667456983937
- type: cos_sim_spearman
value: 46.04327333618551
- type: euclidean_pearson
value: 44.583522824155104
- type: euclidean_spearman
value: 44.77184813864239
- type: manhattan_pearson
value: 44.54496373721756
- type: manhattan_spearman
value: 44.830873857115996
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 49.756063724243
- type: mrr
value: 75.29077585450135
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.194
- type: map_at_10
value: 18.756999999999998
- type: map_at_100
value: 19.743
- type: map_at_1000
value: 19.865
- type: map_at_3
value: 16.986
- type: map_at_5
value: 18.024
- type: mrr_at_1
value: 15.0
- type: mrr_at_10
value: 19.961000000000002
- type: mrr_at_100
value: 20.875
- type: mrr_at_1000
value: 20.982
- type: mrr_at_3
value: 18.056
- type: mrr_at_5
value: 19.406000000000002
- type: ndcg_at_1
value: 15.0
- type: ndcg_at_10
value: 21.775
- type: ndcg_at_100
value: 26.8
- type: ndcg_at_1000
value: 30.468
- type: ndcg_at_3
value: 18.199
- type: ndcg_at_5
value: 20.111
- type: precision_at_1
value: 15.0
- type: precision_at_10
value: 3.4000000000000004
- type: precision_at_100
value: 0.607
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 7.444000000000001
- type: precision_at_5
value: 5.6000000000000005
- type: recall_at_1
value: 14.194
- type: recall_at_10
value: 30.0
- type: recall_at_100
value: 53.911
- type: recall_at_1000
value: 83.289
- type: recall_at_3
value: 20.556
- type: recall_at_5
value: 24.972
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.35544554455446
- type: cos_sim_ap
value: 62.596006705300724
- type: cos_sim_f1
value: 60.80283353010627
- type: cos_sim_precision
value: 74.20749279538906
- type: cos_sim_recall
value: 51.5
- type: dot_accuracy
value: 99.13564356435643
- type: dot_ap
value: 43.87589686325114
- type: dot_f1
value: 46.99663623258049
- type: dot_precision
value: 45.235892691951896
- type: dot_recall
value: 48.9
- type: euclidean_accuracy
value: 99.2
- type: euclidean_ap
value: 43.44660755386079
- type: euclidean_f1
value: 45.9016393442623
- type: euclidean_precision
value: 52.79583875162549
- type: euclidean_recall
value: 40.6
- type: manhattan_accuracy
value: 99.2
- type: manhattan_ap
value: 43.11790011749347
- type: manhattan_f1
value: 45.11023176936122
- type: manhattan_precision
value: 51.88556566970091
- type: manhattan_recall
value: 39.900000000000006
- type: max_accuracy
value: 99.35544554455446
- type: max_ap
value: 62.596006705300724
- type: max_f1
value: 60.80283353010627
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 25.71674282500873
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 25.465780711520985
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 35.35656209427094
- type: mrr
value: 35.10693860877685
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.074
- type: map_at_10
value: 0.47400000000000003
- type: map_at_100
value: 1.825
- type: map_at_1000
value: 4.056
- type: map_at_3
value: 0.199
- type: map_at_5
value: 0.301
- type: mrr_at_1
value: 34.0
- type: mrr_at_10
value: 46.06
- type: mrr_at_100
value: 47.506
- type: mrr_at_1000
value: 47.522999999999996
- type: mrr_at_3
value: 44.0
- type: mrr_at_5
value: 44.4
- type: ndcg_at_1
value: 32.0
- type: ndcg_at_10
value: 28.633999999999997
- type: ndcg_at_100
value: 18.547
- type: ndcg_at_1000
value: 16.142
- type: ndcg_at_3
value: 32.48
- type: ndcg_at_5
value: 31.163999999999998
- type: precision_at_1
value: 34.0
- type: precision_at_10
value: 30.4
- type: precision_at_100
value: 18.54
- type: precision_at_1000
value: 7.942
- type: precision_at_3
value: 35.333
- type: precision_at_5
value: 34.0
- type: recall_at_1
value: 0.074
- type: recall_at_10
value: 0.641
- type: recall_at_100
value: 3.675
- type: recall_at_1000
value: 15.706000000000001
- type: recall_at_3
value: 0.231
- type: recall_at_5
value: 0.367
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.6799999999999999
- type: map_at_10
value: 2.1420000000000003
- type: map_at_100
value: 2.888
- type: map_at_1000
value: 3.3779999999999997
- type: map_at_3
value: 1.486
- type: map_at_5
value: 1.7579999999999998
- type: mrr_at_1
value: 12.245000000000001
- type: mrr_at_10
value: 22.12
- type: mrr_at_100
value: 23.407
- type: mrr_at_1000
value: 23.483999999999998
- type: mrr_at_3
value: 19.048000000000002
- type: mrr_at_5
value: 20.986
- type: ndcg_at_1
value: 10.204
- type: ndcg_at_10
value: 7.374
- type: ndcg_at_100
value: 10.524000000000001
- type: ndcg_at_1000
value: 18.4
- type: ndcg_at_3
value: 9.913
- type: ndcg_at_5
value: 8.938
- type: precision_at_1
value: 12.245000000000001
- type: precision_at_10
value: 7.142999999999999
- type: precision_at_100
value: 2.4490000000000003
- type: precision_at_1000
value: 0.731
- type: precision_at_3
value: 11.565
- type: precision_at_5
value: 9.796000000000001
- type: recall_at_1
value: 0.6799999999999999
- type: recall_at_10
value: 4.038
- type: recall_at_100
value: 14.151
- type: recall_at_1000
value: 40.111999999999995
- type: recall_at_3
value: 1.921
- type: recall_at_5
value: 2.604
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 54.625600000000006
- type: ap
value: 9.425323874806459
- type: f1
value: 42.38724794017267
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 42.8494623655914
- type: f1
value: 42.66062148844617
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 12.464890895237952
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 79.97854205161829
- type: cos_sim_ap
value: 47.45175747605773
- type: cos_sim_f1
value: 46.55775962660444
- type: cos_sim_precision
value: 41.73640167364017
- type: cos_sim_recall
value: 52.638522427440634
- type: dot_accuracy
value: 77.76718126005842
- type: dot_ap
value: 35.97737653101504
- type: dot_f1
value: 41.1975475754439
- type: dot_precision
value: 29.50165355228646
- type: dot_recall
value: 68.25857519788919
- type: euclidean_accuracy
value: 79.34076414138403
- type: euclidean_ap
value: 45.309577778755134
- type: euclidean_f1
value: 45.09938313913639
- type: euclidean_precision
value: 39.76631748589847
- type: euclidean_recall
value: 52.0844327176781
- type: manhattan_accuracy
value: 79.31692197651546
- type: manhattan_ap
value: 45.2433373222626
- type: manhattan_f1
value: 45.04624986069319
- type: manhattan_precision
value: 38.99286127725256
- type: manhattan_recall
value: 53.324538258575195
- type: max_accuracy
value: 79.97854205161829
- type: max_ap
value: 47.45175747605773
- type: max_f1
value: 46.55775962660444
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 81.76737687740133
- type: cos_sim_ap
value: 64.59241956109807
- type: cos_sim_f1
value: 57.83203629255339
- type: cos_sim_precision
value: 55.50442477876106
- type: cos_sim_recall
value: 60.363412380659064
- type: dot_accuracy
value: 78.96922420149805
- type: dot_ap
value: 56.11775087282065
- type: dot_f1
value: 52.92134831460675
- type: dot_precision
value: 51.524212368728115
- type: dot_recall
value: 54.39636587619341
- type: euclidean_accuracy
value: 80.8611790274382
- type: euclidean_ap
value: 61.28070098354092
- type: euclidean_f1
value: 54.58334971882497
- type: euclidean_precision
value: 55.783297162607504
- type: euclidean_recall
value: 53.43393902063443
- type: manhattan_accuracy
value: 80.72534637326814
- type: manhattan_ap
value: 61.18048430787254
- type: manhattan_f1
value: 54.50978912822061
- type: manhattan_precision
value: 53.435396790178245
- type: manhattan_recall
value: 55.6282722513089
- type: max_accuracy
value: 81.76737687740133
- type: max_ap
value: 64.59241956109807
- type: max_f1
value: 57.83203629255339 | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
LoneStriker/Phi-3-medium-4k-instruct-8.0bpw-h8-exl2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"code",
"conversational",
"custom_code",
"multilingual",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | 1,716,312,414,000 | 2024-05-21T17:37:44 | 9 | 0 | ---
language:
- multilingual
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
## Model Summary
The Phi-3-Medium-4K-Instruct is a 14B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, Medium version, in two variants, [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which refers to the context length (in tokens) that each can support.
The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization to improve instruction following and safety.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-4K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)
| | Short Context | Long Context |
| ------- | ------------- | ------------ |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)|
## Intended Uses
**Primary use cases**
The model is intended for broad commercial and research use in English. It is suited for general-purpose AI systems and applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI-powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3-Medium-4K-Instruct has been integrated into the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3-Medium-4K-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai).
### Tokenizer
Phi-3-Medium-4K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
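As a reference, the following is a minimal sketch of inspecting the tokenizer and registering extra tokens for fine-tuning; the token names below are purely illustrative, and resizing the embedding matrix is only needed if you add tokens beyond the pre-allocated placeholders:
```python
from transformers import AutoTokenizer

# Load the tokenizer shipped with the checkpoint.
tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-medium-4k-instruct",
    trust_remote_code=True,
)

print(len(tokenizer))  # current vocabulary size, placeholder tokens included

# Register additional special tokens for downstream fine-tuning
# (these names are hypothetical examples, not part of the released tokenizer).
num_added = tokenizer.add_tokens(
    ["<|my_tool_call|>", "<|my_tool_result|>"], special_tokens=True
)
print(f"Added {num_added} tokens; new size: {len(tokenizer)}")

# If the new size exceeds the model's embedding table, resize before fine-tuning:
# model.resize_token_embeddings(len(tokenizer))
```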
### Chat Format
Given the nature of the training data, the Phi-3-Medium-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question using a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
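If the checkpoint's tokenizer config ships a chat template (recent `transformers` releases expose it through `apply_chat_template`), the same layout can be produced programmatically instead of by hand. A minimal sketch, assuming the bundled template matches the format shown above:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/Phi-3-medium-4k-instruct",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "I am going to Paris, what should I see?"},
    {"role": "assistant", "content": "Paris is known for the Eiffel Tower, the Louvre, and Notre-Dame."},
    {"role": "user", "content": "What is so great about #1?"},
]

# Render the messages into the <|user|>/<|assistant|> ... <|end|> layout and
# append the generation prompt so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```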
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model_id = "microsoft/Phi-3-medium-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*
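As a purely illustrative check (reusing the `tokenizer` from the sample above), you can inspect whether your encoding call already prepends the BOS token:
```python
# Encode a short chat-format prompt and inspect the first token id.
ids = tokenizer("<|user|>\nHello<|end|>\n<|assistant|>").input_ids
print(ids[0] == tokenizer.bos_token_id)  # True if BOS is already being added for you
```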
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based on Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
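As a rough illustration of the RAG pattern mentioned above, the sketch below retrieves the most relevant document from a tiny in-memory store and injects it into the prompt. The embedding model, the example documents, and the reuse of the `pipe` object from the code sample earlier in this card are illustrative assumptions, not part of Phi-3 itself.

```python
# Minimal RAG sketch: ground the answer in retrieved context before generation.
# Assumes `pipe` is the text-generation pipeline built earlier in this card.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
]  # placeholder knowledge base
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(docs, convert_to_tensor=True)

def answer_with_context(question: str) -> str:
    # Pick the single most similar document and pass it as context in the prompt.
    query_embedding = embedder.encode(question, convert_to_tensor=True)
    best = int(util.cos_sim(query_embedding, doc_embeddings).argmax())
    messages = [{
        "role": "user",
        "content": f"Answer using only this context:\n{docs[best]}\n\nQuestion: {question}",
    }]
    result = pipe(messages, max_new_tokens=200, do_sample=False, return_full_text=False)
    return result[0]["generated_text"]
```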
## Training
### Model
* Architecture: Phi-3-Medium-4K-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 42 days
* Training data: 4.8T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with a cutoff date of October 2023. Future versions of the tuned models may be released as we improve them.
* Release dates: The model weights were released on May 21, 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty, and helpfulness.
We focus on the quality of data that could potentially improve the model's reasoning ability, and we filter the publicly available documents to contain the appropriate level of knowledge. For example, the result of a Premier League game on a particular day might be good training data for frontier models, but we remove such information to leave more model capacity for reasoning in the smaller models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
## Benchmarks
We report the results for Phi-3-Medium-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106 (Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft-internal tool for evaluating language models, and in particular we did not optimize the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change the prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
|Benchmark|Phi-3-Medium-4K-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct<br>70b|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
|AGI Eval<br>5-shot|50.2|50.1|54.0|56.9|48.4|49.0|59.6|
|MMLU<br>5-shot|78.0|73.8|76.2|80.2|71.4|66.7|84.0|
|BigBench Hard<br>3-shot|81.4|74.1|81.8|80.4|68.3|75.6|87.7|
|ANLI<br>7-shot|55.8|63.4|65.2|68.3|58.1|64.2|71.7|
|HellaSwag<br>5-shot|82.4|78.0|79.0|82.6|78.8|76.2|88.3|
|ARC Challenge<br>10-shot|91.6|86.9|91.3|93.0|87.4|88.3|95.6|
|ARC Easy<br>10-shot|97.7|95.7|96.9|98.2|96.3|96.1|98.8|
|BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3|
|CommonsenseQA<br>10-shot|82.8|82.0|82.0|84.4|79.6|81.8|86.7|
|MedQA<br>2-shot|69.9|59.2|67.9|78.5|63.4|58.2|83.7|
|OpenBookQA<br>10-shot|87.4|86.8|88.6|91.8|86.0|86.4|93.4|
|PIQA<br>5-shot|87.9|86.4|85.0|85.3|86.6|86.2|90.1|
|Social IQA<br>5-shot|80.2|75.3|78.2|81.1|68.3|75.4|81.7|
|TruthfulQA (MC2)<br>10-shot|75.1|57.8|67.4|81.9|67.7|72.6|85.2|
|WinoGrande<br>5-shot|81.5|77.0|75.3|83.3|68.8|72.2|86.7|
|TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3|
|GSM8K Chain of Thought<br>8-shot|91.0|78.3|83.8|93.5|78.1|80.4|94.2|
|HumanEval<br>0-shot|62.2|61.6|39.6|78.7|62.2|64.4|79.9|
|MBPP<br>3-shot|75.2|68.9|70.7|81.3|77.8|73.2|86.7|
|Average|78.5|75.0|76.3|82.5|74.3|75.4|85.2|
We take a closer look at different categories across 80 public benchmark datasets in the table below:
|Benchmark|Phi-3-Medium-4K-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct<br>70b|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
|Popular aggregated benchmark|75.4|69.9|73.4|76.3|67.0|67.5|80.5|
|Reasoning|84.1|79.3|81.5|86.7|78.3|80.4|89.3|
|Language understanding|73.9|75.6|78.1|76.9|68.7|76.2|80.7|
|Code generation|66.1|68.6|60.0|69.3|70.4|66.7|76.1|
|Math|52.8|45.3|52.5|59.7|52.8|50.9|67.1|
|Factual knowledge|48.3|60.3|60.6|52.4|63.4|54.6|45.9|
|Multilingual|62.9|67.8|69.8|62.0|67.0|73.4|78.2|
|Robustness|66.5|57.9|65.5|78.7|69.3|69.7|84.6|
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types (a loading sketch with an eager-attention fallback follows the list):
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
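The snippet below is a minimal loading sketch under a few assumptions: a recent `transformers` release, `accelerate` installed for `device_map="auto"`, and that the only failure mode handled is the flash-attn package being unavailable (unsupported GPUs can still fail at runtime).

```python
# Sketch: request flash attention when available, otherwise fall back to eager attention.
import torch
from transformers import AutoModelForCausalLM

model_id = "microsoft/Phi-3-medium-4k-instruct"
common = dict(torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
try:
    model = AutoModelForCausalLM.from_pretrained(
        model_id, attn_implementation="flash_attention_2", **common
    )
except (ImportError, ValueError):
    # flash-attn not installed (or rejected by this transformers version): use eager attention.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, attn_implementation="eager", **common
    )
```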
To run the model with optimized inference on GPU, CPU, and mobile, use the **ONNX** models: [4K](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda)
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3 Medium models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 Medium across a range of devices (CPU, GPU, and mobile).
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| [
"MEDQA"
] | Non_BioNLP |
Lajavaness/bilingual-embedding-small | Lajavaness | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bilingual",
"feature-extraction",
"sentence-similarity",
"transformers",
"sentence-embedding",
"mteb",
"custom_code",
"fr",
"en",
"arxiv:2010.08240",
"arxiv:1911.02116",
"arxiv:1908.10084",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,721,186,027,000 | 2024-11-20T14:45:37 | 6,982 | 4 | ---
language:
- fr
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- sentence-embedding
- mteb
model-index:
- name: bilingual-embedding-small
results:
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 63.19030822769444
- type: v_measures
value:
- 0.5938891912573394
- 0.6171518411959
- 0.6042518292029612
- 0.6626602879382325
- 0.6471224639325329
- type: v_measure
value: 41.32807908087869
- type: v_measures
value:
- 0.3351458197856525
- 0.48472823318531566
- 0.4631757871803168
- 0.43580166532679027
- 0.4219041689415661
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 65393d0d7a08a10b4e348135e824f385d420b0fd
metrics:
- type: map
value: 68.43888603876276
- type: mrr
value: 69.59097513501659
- type: nAUC_map_diff1
value: 47.92767021121887
- type: nAUC_map_max
value: 10.206900586093957
- type: nAUC_mrr_diff1
value: 47.823670000503014
- type: nAUC_mrr_max
value: 10.266197221615979
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: fcf295ea64c750f41fadbaa37b9b861558e1bfbd
metrics:
- type: map_at_1
value: 23.23
- type: map_at_10
value: 33.513999999999996
- type: map_at_100
value: 34.554
- type: map_at_1000
value: 34.621
- type: map_at_20
value: 34.129
- type: map_at_3
value: 30.526999999999997
- type: map_at_5
value: 32.107
- type: mrr_at_1
value: 23.229706390328154
- type: mrr_at_10
value: 33.51421786331119
- type: mrr_at_100
value: 34.55441665729269
- type: mrr_at_1000
value: 34.62084787184653
- type: mrr_at_20
value: 34.12901601558586
- type: mrr_at_3
value: 30.52677029360969
- type: mrr_at_5
value: 32.107081174438754
- type: nauc_map_at_1000_diff1
value: 31.371888101756962
- type: nauc_map_at_1000_max
value: 28.482571238049648
- type: nauc_map_at_100_diff1
value: 31.36352656865708
- type: nauc_map_at_100_max
value: 28.51143558254042
- type: nauc_map_at_10_diff1
value: 31.26317124194119
- type: nauc_map_at_10_max
value: 28.356439091020903
- type: nauc_map_at_1_diff1
value: 36.87861012283668
- type: nauc_map_at_1_max
value: 25.592704128025584
- type: nauc_map_at_20_diff1
value: 31.27934733015461
- type: nauc_map_at_20_max
value: 28.471156752297954
- type: nauc_map_at_3_diff1
value: 31.613137860497627
- type: nauc_map_at_3_max
value: 27.518268339115743
- type: nauc_map_at_5_diff1
value: 31.356295694985565
- type: nauc_map_at_5_max
value: 27.79553638754489
- type: nauc_mrr_at_1000_diff1
value: 31.371889078421372
- type: nauc_mrr_at_1000_max
value: 28.482578347594856
- type: nauc_mrr_at_100_diff1
value: 31.36352656865708
- type: nauc_mrr_at_100_max
value: 28.51143558254042
- type: nauc_mrr_at_10_diff1
value: 31.26317124194119
- type: nauc_mrr_at_10_max
value: 28.356439091020903
- type: nauc_mrr_at_1_diff1
value: 36.87861012283668
- type: nauc_mrr_at_1_max
value: 25.592704128025584
- type: nauc_mrr_at_20_diff1
value: 31.27934733015461
- type: nauc_mrr_at_20_max
value: 28.471156752297954
- type: nauc_mrr_at_3_diff1
value: 31.613137860497627
- type: nauc_mrr_at_3_max
value: 27.518268339115743
- type: nauc_mrr_at_5_diff1
value: 31.356295694985565
- type: nauc_mrr_at_5_max
value: 27.79553638754489
- type: nauc_ndcg_at_1000_diff1
value: 30.418606855093337
- type: nauc_ndcg_at_1000_max
value: 29.993105440430234
- type: nauc_ndcg_at_100_diff1
value: 30.131330243160843
- type: nauc_ndcg_at_100_max
value: 30.820165762770422
- type: nauc_ndcg_at_10_diff1
value: 29.510008265344545
- type: nauc_ndcg_at_10_max
value: 29.94961535617982
- type: nauc_ndcg_at_1_diff1
value: 36.87861012283668
- type: nauc_ndcg_at_1_max
value: 25.592704128025584
- type: nauc_ndcg_at_20_diff1
value: 29.52438230390851
- type: nauc_ndcg_at_20_max
value: 30.504655157655904
- type: nauc_ndcg_at_3_diff1
value: 30.18136510240507
- type: nauc_ndcg_at_3_max
value: 28.099090120422275
- type: nauc_ndcg_at_5_diff1
value: 29.762075942245303
- type: nauc_ndcg_at_5_max
value: 28.61500294452224
- type: nauc_precision_at_1000_diff1
value: 27.306371732512996
- type: nauc_precision_at_1000_max
value: 65.78374115284707
- type: nauc_precision_at_100_diff1
value: 25.3948170473858
- type: nauc_precision_at_100_max
value: 47.29752571335181
- type: nauc_precision_at_10_diff1
value: 24.310996780059035
- type: nauc_precision_at_10_max
value: 35.20411354359985
- type: nauc_precision_at_1_diff1
value: 36.87861012283668
- type: nauc_precision_at_1_max
value: 25.592704128025584
- type: nauc_precision_at_20_diff1
value: 23.583394574577937
- type: nauc_precision_at_20_max
value: 38.697643796192324
- type: nauc_precision_at_3_diff1
value: 26.407752776386506
- type: nauc_precision_at_3_max
value: 29.64769320764332
- type: nauc_precision_at_5_diff1
value: 25.45743969076595
- type: nauc_precision_at_5_max
value: 30.919847931025647
- type: nauc_recall_at_1000_diff1
value: 27.306371732511476
- type: nauc_recall_at_1000_max
value: 65.7837411528459
- type: nauc_recall_at_100_diff1
value: 25.39481704738587
- type: nauc_recall_at_100_max
value: 47.29752571335173
- type: nauc_recall_at_10_diff1
value: 24.310996780059064
- type: nauc_recall_at_10_max
value: 35.20411354359981
- type: nauc_recall_at_1_diff1
value: 36.87861012283668
- type: nauc_recall_at_1_max
value: 25.592704128025584
- type: nauc_recall_at_20_diff1
value: 23.583394574578005
- type: nauc_recall_at_20_max
value: 38.69764379619235
- type: nauc_recall_at_3_diff1
value: 26.407752776386513
- type: nauc_recall_at_3_max
value: 29.647693207643332
- type: nauc_recall_at_5_diff1
value: 25.457439690765938
- type: nauc_recall_at_5_max
value: 30.91984793102568
- type: ndcg_at_1
value: 23.23
- type: ndcg_at_10
value: 39.215
- type: ndcg_at_100
value: 44.566
- type: ndcg_at_1000
value: 46.409
- type: ndcg_at_20
value: 41.467
- type: ndcg_at_3
value: 32.993
- type: ndcg_at_5
value: 35.839
- type: precision_at_1
value: 23.23
- type: precision_at_10
value: 5.743
- type: precision_at_100
value: 0.831
- type: precision_at_1000
value: 0.098
- type: precision_at_20
value: 3.318
- type: precision_at_3
value: 13.385
- type: precision_at_5
value: 9.413
- type: recall_at_1
value: 23.23
- type: recall_at_10
value: 57.42699999999999
- type: recall_at_100
value: 83.11699999999999
- type: recall_at_1000
value: 97.75500000000001
- type: recall_at_20
value: 66.364
- type: recall_at_3
value: 40.155
- type: recall_at_5
value: 47.064
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.102000000000004
- type: f1
value: 36.48213245522153
- type: f1_weighted
value: 36.48213245522153
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 5.405
- type: map_at_10
value: 8.372
- type: map_at_100
value: 9.522
- type: map_at_1000
value: 9.645
- type: map_at_20
value: 8.987
- type: map_at_3
value: 7.132
- type: map_at_5
value: 7.763000000000001
- type: mrr_at_1
value: 5.405405405405405
- type: mrr_at_10
value: 8.37247962247962
- type: mrr_at_100
value: 9.522369675165548
- type: mrr_at_1000
value: 9.644865518194182
- type: mrr_at_20
value: 8.987200953145225
- type: mrr_at_3
value: 7.132132132132132
- type: mrr_at_5
value: 7.762762762762764
- type: nauc_map_at_1000_diff1
value: 12.103894408912778
- type: nauc_map_at_1000_max
value: 26.77460445002228
- type: nauc_map_at_100_diff1
value: 12.199535893254412
- type: nauc_map_at_100_max
value: 26.791136142909995
- type: nauc_map_at_10_diff1
value: 11.762615047468374
- type: nauc_map_at_10_max
value: 26.54661601271767
- type: nauc_map_at_1_diff1
value: 19.75768475065795
- type: nauc_map_at_1_max
value: 36.45294032166726
- type: nauc_map_at_20_diff1
value: 12.627299133728561
- type: nauc_map_at_20_max
value: 26.431834723382625
- type: nauc_map_at_3_diff1
value: 11.406093688135979
- type: nauc_map_at_3_max
value: 26.206852419799336
- type: nauc_map_at_5_diff1
value: 10.933715346866054
- type: nauc_map_at_5_max
value: 25.15168912848224
- type: nauc_mrr_at_1000_diff1
value: 12.103894408912778
- type: nauc_mrr_at_1000_max
value: 26.77460445002228
- type: nauc_mrr_at_100_diff1
value: 12.199535893254412
- type: nauc_mrr_at_100_max
value: 26.791136142909995
- type: nauc_mrr_at_10_diff1
value: 11.762615047468374
- type: nauc_mrr_at_10_max
value: 26.54661601271767
- type: nauc_mrr_at_1_diff1
value: 19.75768475065795
- type: nauc_mrr_at_1_max
value: 36.45294032166726
- type: nauc_mrr_at_20_diff1
value: 12.627299133728561
- type: nauc_mrr_at_20_max
value: 26.431834723382625
- type: nauc_mrr_at_3_diff1
value: 11.406093688135979
- type: nauc_mrr_at_3_max
value: 26.206852419799336
- type: nauc_mrr_at_5_diff1
value: 10.933715346866054
- type: nauc_mrr_at_5_max
value: 25.15168912848224
- type: nauc_ndcg_at_1000_diff1
value: 8.119711442397051
- type: nauc_ndcg_at_1000_max
value: 25.821500954959493
- type: nauc_ndcg_at_100_diff1
value: 10.610584456957277
- type: nauc_ndcg_at_100_max
value: 27.81373505272856
- type: nauc_ndcg_at_10_diff1
value: 10.667531959142947
- type: nauc_ndcg_at_10_max
value: 25.428088817882212
- type: nauc_ndcg_at_1_diff1
value: 19.75768475065795
- type: nauc_ndcg_at_1_max
value: 36.45294032166726
- type: nauc_ndcg_at_20_diff1
value: 13.52659589943601
- type: nauc_ndcg_at_20_max
value: 25.543352357923972
- type: nauc_ndcg_at_3_diff1
value: 9.220701633954755
- type: nauc_ndcg_at_3_max
value: 23.41404735216586
- type: nauc_ndcg_at_5_diff1
value: 8.904201880131358
- type: nauc_ndcg_at_5_max
value: 22.27268813727672
- type: nauc_precision_at_1000_diff1
value: -13.379595578660972
- type: nauc_precision_at_1000_max
value: 19.07407039098987
- type: nauc_precision_at_100_diff1
value: 7.161231404563548
- type: nauc_precision_at_100_max
value: 32.1446712851372
- type: nauc_precision_at_10_diff1
value: 9.32876742632238
- type: nauc_precision_at_10_max
value: 24.401763374615022
- type: nauc_precision_at_1_diff1
value: 19.75768475065795
- type: nauc_precision_at_1_max
value: 36.45294032166726
- type: nauc_precision_at_20_diff1
value: 16.344981963229685
- type: nauc_precision_at_20_max
value: 25.273014482618493
- type: nauc_precision_at_3_diff1
value: 4.604729400599949
- type: nauc_precision_at_3_max
value: 17.491915784171987
- type: nauc_precision_at_5_diff1
value: 5.152774776096578
- type: nauc_precision_at_5_max
value: 16.848544787508555
- type: nauc_recall_at_1000_diff1
value: -13.379595578660883
- type: nauc_recall_at_1000_max
value: 19.07407039098995
- type: nauc_recall_at_100_diff1
value: 7.161231404563502
- type: nauc_recall_at_100_max
value: 32.144671285137136
- type: nauc_recall_at_10_diff1
value: 9.328767426322395
- type: nauc_recall_at_10_max
value: 24.40176337461501
- type: nauc_recall_at_1_diff1
value: 19.75768475065795
- type: nauc_recall_at_1_max
value: 36.45294032166726
- type: nauc_recall_at_20_diff1
value: 16.34498196322963
- type: nauc_recall_at_20_max
value: 25.27301448261847
- type: nauc_recall_at_3_diff1
value: 4.604729400599932
- type: nauc_recall_at_3_max
value: 17.49191578417196
- type: nauc_recall_at_5_diff1
value: 5.152774776096596
- type: nauc_recall_at_5_max
value: 16.848544787508573
- type: ndcg_at_1
value: 5.405
- type: ndcg_at_10
value: 10.51
- type: ndcg_at_100
value: 17.012
- type: ndcg_at_1000
value: 20.686
- type: ndcg_at_20
value: 12.849
- type: ndcg_at_3
value: 7.835
- type: ndcg_at_5
value: 8.959
- type: precision_at_1
value: 5.405
- type: precision_at_10
value: 1.757
- type: precision_at_100
value: 0.5
- type: precision_at_1000
value: 0.08
- type: precision_at_20
value: 1.351
- type: precision_at_3
value: 3.3029999999999995
- type: precision_at_5
value: 2.5229999999999997
- type: recall_at_1
value: 5.405
- type: recall_at_10
value: 17.568
- type: recall_at_100
value: 50.0
- type: recall_at_1000
value: 79.73
- type: recall_at_20
value: 27.027
- type: recall_at_3
value: 9.91
- type: recall_at_5
value: 12.613
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 23.773322308353517
- type: v_measures
value:
- 0.28321300949906897
- 0.26004472642751963
- 0.25956951558284086
- 0.24123195304666292
- 0.22207486085944725
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: reciTAL/mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 44.10635133884183
- type: v_measures
value:
- 0.4352021194557699
- 0.4616076200837005
- 0.4407517544208635
- 0.4387026615402928
- 0.40575306284000634
- type: v_measure
value: 43.6574557237274
- type: v_measures
value:
- 0.431013360873005
- 0.45456972088535785
- 0.4474907746228345
- 0.4318507494303067
- 0.3857692351737129
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.08518634512997
- type: f1
value: 86.01437763316983
- type: f1_weighted
value: 86.03483392539235
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 61.23081741309113
- type: f1
value: 44.76981337966934
- type: f1_weighted
value: 64.86577219367403
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: mteb/masakhanews
config: fra
split: test
revision: 18193f187b92da67168c655c9973a165ed9593dd
metrics:
- type: accuracy
value: 78.34123222748815
- type: f1
value: 74.19808376161188
- type: f1_weighted
value: 78.465165305135
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 58.6260598345145
- type: v_measures
value:
- 1.0
- 0.052551289167589395
- 0.4126124221990744
- 0.690570263898874
- 0.7755690164601873
- type: v_measure
value: 53.47058992788083
- type: v_measures
value:
- 1.0
- 0.06287268264858063
- 0.6712730568122484
- 0.2169401386066275
- 0.7224436183265853
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 65.48755884330868
- type: f1
value: 63.42516904610099
- type: f1_weighted
value: 65.62227422445625
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 69.27370544720914
- type: f1
value: 68.92639289886843
- type: f1_weighted
value: 69.39025426049528
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 13.718
- type: map_at_10
value: 21.404
- type: map_at_100
value: 22.371
- type: map_at_1000
value: 22.493
- type: map_at_20
value: 21.972
- type: map_at_3
value: 19.192
- type: map_at_5
value: 20.366999999999997
- type: mrr_at_1
value: 13.718263718263717
- type: mrr_at_10
value: 21.40396890396889
- type: mrr_at_100
value: 22.370673563176524
- type: mrr_at_1000
value: 22.49316476646458
- type: mrr_at_20
value: 21.971667600361627
- type: mrr_at_3
value: 19.191919191919162
- type: mrr_at_5
value: 20.36718536718532
- type: nauc_map_at_1000_diff1
value: 18.66186772676608
- type: nauc_map_at_1000_max
value: 32.45794220535016
- type: nauc_map_at_100_diff1
value: 18.620993318615394
- type: nauc_map_at_100_max
value: 32.44820369787587
- type: nauc_map_at_10_diff1
value: 18.884049513804037
- type: nauc_map_at_10_max
value: 32.77552431144882
- type: nauc_map_at_1_diff1
value: 25.49080673895181
- type: nauc_map_at_1_max
value: 29.25311987317655
- type: nauc_map_at_20_diff1
value: 18.704330878419075
- type: nauc_map_at_20_max
value: 32.55837988078994
- type: nauc_map_at_3_diff1
value: 20.095857563209314
- type: nauc_map_at_3_max
value: 32.93191322461617
- type: nauc_map_at_5_diff1
value: 19.281813313113396
- type: nauc_map_at_5_max
value: 32.786844756856475
- type: nauc_mrr_at_1000_diff1
value: 18.66186763232669
- type: nauc_mrr_at_1000_max
value: 32.45794216874754
- type: nauc_mrr_at_100_diff1
value: 18.620993318615394
- type: nauc_mrr_at_100_max
value: 32.44820369787587
- type: nauc_mrr_at_10_diff1
value: 18.884049513804037
- type: nauc_mrr_at_10_max
value: 32.77552431144882
- type: nauc_mrr_at_1_diff1
value: 25.49080673895181
- type: nauc_mrr_at_1_max
value: 29.25311987317655
- type: nauc_mrr_at_20_diff1
value: 18.704330878419075
- type: nauc_mrr_at_20_max
value: 32.55837988078994
- type: nauc_mrr_at_3_diff1
value: 20.095857563209314
- type: nauc_mrr_at_3_max
value: 32.93191322461617
- type: nauc_mrr_at_5_diff1
value: 19.281813313113396
- type: nauc_mrr_at_5_max
value: 32.786844756856475
- type: nauc_ndcg_at_1000_diff1
value: 16.291266125191186
- type: nauc_ndcg_at_1000_max
value: 32.03483412880716
- type: nauc_ndcg_at_100_diff1
value: 15.08155959648069
- type: nauc_ndcg_at_100_max
value: 31.74628993952365
- type: nauc_ndcg_at_10_diff1
value: 16.457288503185854
- type: nauc_ndcg_at_10_max
value: 33.34322548472455
- type: nauc_ndcg_at_1_diff1
value: 25.49080673895181
- type: nauc_ndcg_at_1_max
value: 29.25311987317655
- type: nauc_ndcg_at_20_diff1
value: 15.847885101378232
- type: nauc_ndcg_at_20_max
value: 32.63179959589915
- type: nauc_ndcg_at_3_diff1
value: 18.77312834653236
- type: nauc_ndcg_at_3_max
value: 33.76341797807492
- type: nauc_ndcg_at_5_diff1
value: 17.46839695168085
- type: nauc_ndcg_at_5_max
value: 33.52006824258854
- type: nauc_precision_at_1000_diff1
value: -1.1586826170737583
- type: nauc_precision_at_1000_max
value: 19.93551888234813
- type: nauc_precision_at_100_diff1
value: 3.0380716626456072
- type: nauc_precision_at_100_max
value: 27.2930149786862
- type: nauc_precision_at_10_diff1
value: 10.566658459623403
- type: nauc_precision_at_10_max
value: 34.3880626271458
- type: nauc_precision_at_1_diff1
value: 25.49080673895181
- type: nauc_precision_at_1_max
value: 29.25311987317655
- type: nauc_precision_at_20_diff1
value: 8.479138077976014
- type: nauc_precision_at_20_max
value: 32.113399922346744
- type: nauc_precision_at_3_diff1
value: 15.622074491261472
- type: nauc_precision_at_3_max
value: 35.7250108281599
- type: nauc_precision_at_5_diff1
value: 13.229128818765382
- type: nauc_precision_at_5_max
value: 35.14850024280549
- type: nauc_recall_at_1000_diff1
value: -1.1586826170740634
- type: nauc_recall_at_1000_max
value: 19.93551888234777
- type: nauc_recall_at_100_diff1
value: 3.038071662645654
- type: nauc_recall_at_100_max
value: 27.293014978686276
- type: nauc_recall_at_10_diff1
value: 10.56665845962341
- type: nauc_recall_at_10_max
value: 34.38806262714581
- type: nauc_recall_at_1_diff1
value: 25.49080673895181
- type: nauc_recall_at_1_max
value: 29.25311987317655
- type: nauc_recall_at_20_diff1
value: 8.47913807797598
- type: nauc_recall_at_20_max
value: 32.113399922346744
- type: nauc_recall_at_3_diff1
value: 15.622074491261479
- type: nauc_recall_at_3_max
value: 35.725010828159924
- type: nauc_recall_at_5_diff1
value: 13.229128818765403
- type: nauc_recall_at_5_max
value: 35.14850024280548
- type: ndcg_at_1
value: 13.718
- type: ndcg_at_10
value: 25.576
- type: ndcg_at_100
value: 30.537999999999997
- type: ndcg_at_1000
value: 34.364
- type: ndcg_at_20
value: 27.619
- type: ndcg_at_3
value: 20.924
- type: ndcg_at_5
value: 23.046
- type: precision_at_1
value: 13.718
- type: precision_at_10
value: 3.894
- type: precision_at_100
value: 0.628
- type: precision_at_1000
value: 0.094
- type: precision_at_20
value: 2.348
- type: precision_at_3
value: 8.64
- type: precision_at_5
value: 6.216
- type: recall_at_1
value: 13.718
- type: recall_at_10
value: 38.943
- type: recall_at_100
value: 62.775999999999996
- type: recall_at_1000
value: 94.10300000000001
- type: recall_at_20
value: 46.97
- type: recall_at_3
value: 25.921
- type: recall_at_5
value: 31.080999999999996
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 81.81198910081744
- type: cos_sim_ap
value: 92.92034333454589
- type: cos_sim_f1
value: 87.20651653090562
- type: cos_sim_precision
value: 84.25925925925927
- type: cos_sim_recall
value: 90.3674280039722
- type: dot_accuracy
value: 81.06267029992752
- type: dot_ap
value: 92.19923182286357
- type: dot_f1
value: 87.23307587460246
- type: dot_precision
value: 80.40200995025126
- type: dot_recall
value: 95.33267130089378
- type: euclidean_accuracy
value: 81.06267029992752
- type: euclidean_ap
value: 92.58456772515233
- type: euclidean_f1
value: 86.94835680751173
- type: euclidean_precision
value: 82.45770258236865
- type: euclidean_recall
value: 91.9563058589871
- type: manhattan_accuracy
value: 80.92643051771117
- type: manhattan_ap
value: 92.47972548332238
- type: manhattan_f1
value: 86.88372093023257
- type: manhattan_precision
value: 81.71478565179353
- type: manhattan_recall
value: 92.75074478649454
- type: max_accuracy
value: 81.81198910081744
- type: max_ap
value: 92.92034333454589
- type: max_f1
value: 87.23307587460246
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: google-research-datasets/paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 60.550000000000004
- type: cos_sim_ap
value: 58.1865824487652
- type: cos_sim_f1
value: 62.491349480968864
- type: cos_sim_precision
value: 45.44539506794162
- type: cos_sim_recall
value: 100.0
- type: dot_accuracy
value: 56.49999999999999
- type: dot_ap
value: 49.511525626044474
- type: dot_f1
value: 62.76595744680852
- type: dot_precision
value: 46.165884194053206
- type: dot_recall
value: 98.00664451827242
- type: euclidean_accuracy
value: 60.199999999999996
- type: euclidean_ap
value: 58.003058708335246
- type: euclidean_f1
value: 62.491349480968864
- type: euclidean_precision
value: 45.44539506794162
- type: euclidean_recall
value: 100.0
- type: manhattan_accuracy
value: 60.199999999999996
- type: manhattan_ap
value: 58.02420001567834
- type: manhattan_f1
value: 62.491349480968864
- type: manhattan_precision
value: 45.44539506794162
- type: manhattan_recall
value: 100.0
- type: max_accuracy
value: 60.550000000000004
- type: max_ap
value: 58.1865824487652
- type: max_f1
value: 62.76595744680852
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.35988610550564
- type: cos_sim_spearman
value: 74.30702501405389
- type: euclidean_pearson
value: 77.98265846914386
- type: euclidean_spearman
value: 74.28309779423242
- type: manhattan_pearson
value: 77.91611618952486
- type: manhattan_spearman
value: 74.09543847416339
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cos_sim_pearson
value: 82.87930229178569
- type: cos_sim_spearman
value: 82.91122500126046
- type: euclidean_pearson
value: 82.30161381658885
- type: euclidean_spearman
value: 82.80157531184477
- type: manhattan_pearson
value: 82.59746592491155
- type: manhattan_spearman
value: 82.91620907805208
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: mteb/stsb_multi_mt
config: fr
split: test
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cos_sim_pearson
value: 83.44206580073515
- type: cos_sim_spearman
value: 83.29855460437528
- type: euclidean_pearson
value: 82.28885833656986
- type: euclidean_spearman
value: 83.17545506016941
- type: manhattan_pearson
value: 82.16568250036501
- type: manhattan_spearman
value: 83.0743139221437
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 31.280485627024635
- type: cos_sim_spearman
value: 32.33005962500831
- type: dot_pearson
value: 30.158348138782753
- type: dot_spearman
value: 30.392045689426418
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: daf0863838cd9e3ba50544cdce3ac2b338a1b0ad
metrics:
- type: map
value: 81.67777777777776
- type: mrr
value: 81.67777777777776
- type: nAUC_map_diff1
value: 59.89472485574524
- type: nAUC_map_max
value: 8.215249384307162
- type: nAUC_mrr_diff1
value: 59.89472485574524
- type: nAUC_mrr_max
value: 8.215249384307162
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 19661ccdca4dfc2d15122d776b61685f48c68ca9
metrics:
- type: map_at_1
value: 62.0
- type: map_at_10
value: 72.887
- type: map_at_100
value: 73.181
- type: map_at_1000
value: 73.181
- type: map_at_20
value: 73.16
- type: map_at_3
value: 70.667
- type: map_at_5
value: 71.56700000000001
- type: mrr_at_1
value: 62.0
- type: mrr_at_10
value: 72.88690476190479
- type: mrr_at_100
value: 73.18055555555557
- type: mrr_at_1000
value: 73.18055555555557
- type: mrr_at_20
value: 73.15972222222224
- type: mrr_at_3
value: 70.66666666666667
- type: mrr_at_5
value: 71.56666666666666
- type: nauc_map_at_1000_diff1
value: 47.52352042312832
- type: nauc_map_at_1000_max
value: 12.229977029802052
- type: nauc_map_at_100_diff1
value: 47.52352042312832
- type: nauc_map_at_100_max
value: 12.229977029802052
- type: nauc_map_at_10_diff1
value: 47.83118981173179
- type: nauc_map_at_10_max
value: 12.67122414331949
- type: nauc_map_at_1_diff1
value: 48.29708026951358
- type: nauc_map_at_1_max
value: 5.016460019075176
- type: nauc_map_at_20_diff1
value: 47.5126416742559
- type: nauc_map_at_20_max
value: 12.23002184861472
- type: nauc_map_at_3_diff1
value: 48.18168651330906
- type: nauc_map_at_3_max
value: 14.063513453945578
- type: nauc_map_at_5_diff1
value: 46.8656518414084
- type: nauc_map_at_5_max
value: 13.22896127813873
- type: nauc_mrr_at_1000_diff1
value: 47.52352042312832
- type: nauc_mrr_at_1000_max
value: 12.229977029802052
- type: nauc_mrr_at_100_diff1
value: 47.52352042312832
- type: nauc_mrr_at_100_max
value: 12.229977029802052
- type: nauc_mrr_at_10_diff1
value: 47.83118981173179
- type: nauc_mrr_at_10_max
value: 12.67122414331949
- type: nauc_mrr_at_1_diff1
value: 48.29708026951358
- type: nauc_mrr_at_1_max
value: 5.016460019075176
- type: nauc_mrr_at_20_diff1
value: 47.5126416742559
- type: nauc_mrr_at_20_max
value: 12.23002184861472
- type: nauc_mrr_at_3_diff1
value: 48.18168651330906
- type: nauc_mrr_at_3_max
value: 14.063513453945578
- type: nauc_mrr_at_5_diff1
value: 46.8656518414084
- type: nauc_mrr_at_5_max
value: 13.22896127813873
- type: nauc_ndcg_at_1000_diff1
value: 47.56455972451391
- type: nauc_ndcg_at_1000_max
value: 12.900901768894494
- type: nauc_ndcg_at_100_diff1
value: 47.56455972451391
- type: nauc_ndcg_at_100_max
value: 12.900901768894494
- type: nauc_ndcg_at_10_diff1
value: 48.92225620164975
- type: nauc_ndcg_at_10_max
value: 14.848602834576374
- type: nauc_ndcg_at_1_diff1
value: 48.29708026951358
- type: nauc_ndcg_at_1_max
value: 5.016460019075176
- type: nauc_ndcg_at_20_diff1
value: 47.44500349427683
- type: nauc_ndcg_at_20_max
value: 12.894569953616672
- type: nauc_ndcg_at_3_diff1
value: 48.79515966817958
- type: nauc_ndcg_at_3_max
value: 17.067858878871014
- type: nauc_ndcg_at_5_diff1
value: 46.2582129725611
- type: nauc_ndcg_at_5_max
value: 15.802131944100553
- type: nauc_precision_at_1000_diff1
value: nan
- type: nauc_precision_at_1000_max
value: nan
- type: nauc_precision_at_100_diff1
value: nan
- type: nauc_precision_at_100_max
value: nan
- type: nauc_precision_at_10_diff1
value: 67.1335200746968
- type: nauc_precision_at_10_max
value: 41.521942110178045
- type: nauc_precision_at_1_diff1
value: 48.29708026951358
- type: nauc_precision_at_1_max
value: 5.016460019075176
- type: nauc_precision_at_20_diff1
value: 35.80765639589114
- type: nauc_precision_at_20_max
value: 12.278244631185926
- type: nauc_precision_at_3_diff1
value: 51.516580229451506
- type: nauc_precision_at_3_max
value: 28.765257478128753
- type: nauc_precision_at_5_diff1
value: 43.146762121705116
- type: nauc_precision_at_5_max
value: 27.715587373901627
- type: nauc_recall_at_1000_diff1
value: nan
- type: nauc_recall_at_1000_max
value: nan
- type: nauc_recall_at_100_diff1
value: nan
- type: nauc_recall_at_100_max
value: nan
- type: nauc_recall_at_10_diff1
value: 67.13352007469638
- type: nauc_recall_at_10_max
value: 41.52194211017754
- type: nauc_recall_at_1_diff1
value: 48.29708026951358
- type: nauc_recall_at_1_max
value: 5.016460019075176
- type: nauc_recall_at_20_diff1
value: 35.80765639589109
- type: nauc_recall_at_20_max
value: 12.278244631185359
- type: nauc_recall_at_3_diff1
value: 51.516580229451556
- type: nauc_recall_at_3_max
value: 28.765257478128735
- type: nauc_recall_at_5_diff1
value: 43.14676212170519
- type: nauc_recall_at_5_max
value: 27.7155873739018
- type: ndcg_at_1
value: 62.0
- type: ndcg_at_10
value: 78.188
- type: ndcg_at_100
value: 79.372
- type: ndcg_at_1000
value: 79.372
- type: ndcg_at_20
value: 79.194
- type: ndcg_at_3
value: 73.333
- type: ndcg_at_5
value: 74.968
- type: precision_at_1
value: 62.0
- type: precision_at_10
value: 9.5
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.95
- type: precision_at_3
value: 27.0
- type: precision_at_5
value: 17.0
- type: recall_at_1
value: 62.0
- type: recall_at_10
value: 95.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 99.0
- type: recall_at_3
value: 81.0
- type: recall_at_5
value: 85.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 36.611
- type: map_at_10
value: 57.187
- type: map_at_100
value: 58.631
- type: map_at_1000
value: 58.709999999999994
- type: map_at_20
value: 58.08
- type: map_at_3
value: 50.998
- type: map_at_5
value: 55.191
- type: mrr_at_1
value: 57.810413885180246
- type: mrr_at_10
value: 65.8401148621442
- type: mrr_at_100
value: 66.40333125160906
- type: mrr_at_1000
value: 66.42402394693958
- type: mrr_at_20
value: 66.18532893351842
- type: mrr_at_3
value: 63.81842456608808
- type: mrr_at_5
value: 65.18691588785039
- type: nauc_map_at_1000_diff1
value: 50.788831583223235
- type: nauc_map_at_1000_max
value: 51.55624948390649
- type: nauc_map_at_100_diff1
value: 50.7629709789859
- type: nauc_map_at_100_max
value: 51.554970702491374
- type: nauc_map_at_10_diff1
value: 50.597059943822785
- type: nauc_map_at_10_max
value: 50.88242396839643
- type: nauc_map_at_1_diff1
value: 60.321467202422596
- type: nauc_map_at_1_max
value: 35.442708774490455
- type: nauc_map_at_20_diff1
value: 50.65058875526523
- type: nauc_map_at_20_max
value: 51.32644359237018
- type: nauc_map_at_3_diff1
value: 51.80849131095309
- type: nauc_map_at_3_max
value: 46.17402861801263
- type: nauc_map_at_5_diff1
value: 50.507875139443456
- type: nauc_map_at_5_max
value: 49.47151715153637
- type: nauc_mrr_at_1000_diff1
value: 58.704380971676926
- type: nauc_mrr_at_1000_max
value: 59.554838611287494
- type: nauc_mrr_at_100_diff1
value: 58.699898563786
- type: nauc_mrr_at_100_max
value: 59.55774727939887
- type: nauc_mrr_at_10_diff1
value: 58.73927285559378
- type: nauc_mrr_at_10_max
value: 59.479293253354605
- type: nauc_mrr_at_1_diff1
value: 61.67387773779846
- type: nauc_mrr_at_1_max
value: 59.51259333152851
- type: nauc_mrr_at_20_diff1
value: 58.66891615345236
- type: nauc_mrr_at_20_max
value: 59.58138583451017
- type: nauc_mrr_at_3_diff1
value: 58.51184610727805
- type: nauc_mrr_at_3_max
value: 59.23400060136551
- type: nauc_mrr_at_5_diff1
value: 58.47244190154927
- type: nauc_mrr_at_5_max
value: 59.331044981327196
- type: nauc_ndcg_at_1000_diff1
value: 52.37179722848664
- type: nauc_ndcg_at_1000_max
value: 54.72666617792271
- type: nauc_ndcg_at_100_diff1
value: 51.93605170636807
- type: nauc_ndcg_at_100_max
value: 54.79165999040737
- type: nauc_ndcg_at_10_diff1
value: 51.405480630090835
- type: nauc_ndcg_at_10_max
value: 53.04193527385732
- type: nauc_ndcg_at_1_diff1
value: 61.67387773779846
- type: nauc_ndcg_at_1_max
value: 59.51259333152851
- type: nauc_ndcg_at_20_diff1
value: 51.293469681563096
- type: nauc_ndcg_at_20_max
value: 54.08435882900078
- type: nauc_ndcg_at_3_diff1
value: 51.58388244693231
- type: nauc_ndcg_at_3_max
value: 51.74775013382323
- type: nauc_ndcg_at_5_diff1
value: 50.82307910981021
- type: nauc_ndcg_at_5_max
value: 51.420799224894
- type: nauc_precision_at_1000_diff1
value: -16.663205684819612
- type: nauc_precision_at_1000_max
value: 12.234886940913926
- type: nauc_precision_at_100_diff1
value: -11.830123517342091
- type: nauc_precision_at_100_max
value: 19.147184681617514
- type: nauc_precision_at_10_diff1
value: -0.128354517220691
- type: nauc_precision_at_10_max
value: 31.00617539775257
- type: nauc_precision_at_1_diff1
value: 61.67387773779846
- type: nauc_precision_at_1_max
value: 59.51259333152851
- type: nauc_precision_at_20_diff1
value: -4.838065494986492
- type: nauc_precision_at_20_max
value: 26.59319852551229
- type: nauc_precision_at_3_diff1
value: 12.133336207725199
- type: nauc_precision_at_3_max
value: 39.377184679653084
- type: nauc_precision_at_5_diff1
value: 3.6946214253817242
- type: nauc_precision_at_5_max
value: 34.46699361026347
- type: nauc_recall_at_1000_diff1
value: 42.80775305857285
- type: nauc_recall_at_1000_max
value: 56.52032475068802
- type: nauc_recall_at_100_diff1
value: 37.39345422008765
- type: nauc_recall_at_100_max
value: 50.846199839766236
- type: nauc_recall_at_10_diff1
value: 42.80186951683253
- type: nauc_recall_at_10_max
value: 45.84205807317027
- type: nauc_recall_at_1_diff1
value: 60.321467202422596
- type: nauc_recall_at_1_max
value: 35.442708774490455
- type: nauc_recall_at_20_diff1
value: 39.799538893141424
- type: nauc_recall_at_20_max
value: 48.35852294352722
- type: nauc_recall_at_3_diff1
value: 45.955979843159135
- type: nauc_recall_at_3_max
value: 42.31051973839205
- type: nauc_recall_at_5_diff1
value: 42.5632345738307
- type: nauc_recall_at_5_max
value: 44.4648694495511
- type: ndcg_at_1
value: 57.809999999999995
- type: ndcg_at_10
value: 63.495999999999995
- type: ndcg_at_100
value: 68.394
- type: ndcg_at_1000
value: 69.663
- type: ndcg_at_20
value: 65.67399999999999
- type: ndcg_at_3
value: 58.23199999999999
- type: ndcg_at_5
value: 60.431999999999995
- type: precision_at_1
value: 57.809999999999995
- type: precision_at_10
value: 14.753
- type: precision_at_100
value: 1.8929999999999998
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_20
value: 8.164
- type: precision_at_3
value: 35.737
- type: precision_at_5
value: 26.061
- type: recall_at_1
value: 36.611
- type: recall_at_10
value: 72.501
- type: recall_at_100
value: 91.40899999999999
- type: recall_at_1000
value: 99.544
- type: recall_at_20
value: 79.475
- type: recall_at_3
value: 55.96600000000001
- type: recall_at_5
value: 64.976
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.28358208955224
- type: ap
value: 37.19063914095112
- type: ap_weighted
value: 37.19063914095112
- type: f1
value: 68.28593926963595
- type: f1_weighted
value: 76.64216663284145
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 82.19695
- type: ap
value: 76.84400739904562
- type: ap_weighted
value: 76.84400739904562
- type: f1
value: 82.13083090108348
- type: f1_weighted
value: 82.13083090108348
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.216
- type: f1
value: 39.88981487562277
- type: f1_weighted
value: 39.88981487562277
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 28.307
- type: map_at_10
value: 44.415
- type: map_at_100
value: 45.24
- type: map_at_1000
value: 45.245999999999995
- type: map_at_20
value: 45.048
- type: map_at_3
value: 39.343
- type: map_at_5
value: 42.156
- type: mrr_at_1
value: 28.591749644381224
- type: mrr_at_10
value: 44.53744157691528
- type: mrr_at_100
value: 45.36249919705719
- type: mrr_at_1000
value: 45.36795843267093
- type: mrr_at_20
value: 45.17017744908004
- type: mrr_at_3
value: 39.50924608819345
- type: mrr_at_5
value: 42.29374110953058
- type: nauc_map_at_1000_diff1
value: 3.81697701184427
- type: nauc_map_at_1000_max
value: -5.3494391339512966
- type: nauc_map_at_100_diff1
value: 3.8255923068950737
- type: nauc_map_at_100_max
value: -5.338796585423051
- type: nauc_map_at_10_diff1
value: 3.807599213819479
- type: nauc_map_at_10_max
value: -5.313800854145031
- type: nauc_map_at_1_diff1
value: 5.156690676517333
- type: nauc_map_at_1_max
value: -9.64584413837327
- type: nauc_map_at_20_diff1
value: 3.7941985981544244
- type: nauc_map_at_20_max
value: -5.200991165900242
- type: nauc_map_at_3_diff1
value: 3.042950933986489
- type: nauc_map_at_3_max
value: -5.953385411481654
- type: nauc_map_at_5_diff1
value: 3.0549453605943433
- type: nauc_map_at_5_max
value: -5.787888510997178
- type: nauc_mrr_at_1000_diff1
value: 2.815942782079056
- type: nauc_mrr_at_1000_max
value: -6.045251506633342
- type: nauc_mrr_at_100_diff1
value: 2.8247136693036206
- type: nauc_mrr_at_100_max
value: -6.034513630311149
- type: nauc_mrr_at_10_diff1
value: 2.842321554294615
- type: nauc_mrr_at_10_max
value: -5.983994994110801
- type: nauc_mrr_at_1_diff1
value: 4.289447405708845
- type: nauc_mrr_at_1_max
value: -10.158513246070529
- type: nauc_mrr_at_20_diff1
value: 2.802223509089013
- type: nauc_mrr_at_20_max
value: -5.889383549567283
- type: nauc_mrr_at_3_diff1
value: 1.9507572567994225
- type: nauc_mrr_at_3_max
value: -6.579817119302078
- type: nauc_mrr_at_5_diff1
value: 2.0636113696159306
- type: nauc_mrr_at_5_max
value: -6.47814796715319
- type: nauc_ndcg_at_1000_diff1
value: 4.054109302322553
- type: nauc_ndcg_at_1000_max
value: -4.194276048637998
- type: nauc_ndcg_at_100_diff1
value: 4.3606449596207995
- type: nauc_ndcg_at_100_max
value: -3.802885863375761
- type: nauc_ndcg_at_10_diff1
value: 4.374146895999117
- type: nauc_ndcg_at_10_max
value: -3.007138296243735
- type: nauc_ndcg_at_1_diff1
value: 5.156690676517333
- type: nauc_ndcg_at_1_max
value: -9.64584413837327
- type: nauc_ndcg_at_20_diff1
value: 4.283769209560412
- type: nauc_ndcg_at_20_max
value: -2.5570972005509245
- type: nauc_ndcg_at_3_diff1
value: 2.4019132290785628
- type: nauc_ndcg_at_3_max
value: -4.772614514375251
- type: nauc_ndcg_at_5_diff1
value: 2.2604685552347488
- type: nauc_ndcg_at_5_max
value: -4.5287849384277346
- type: nauc_precision_at_1000_diff1
value: 26.832693994163886
- type: nauc_precision_at_1000_max
value: 28.13719829218545
- type: nauc_precision_at_100_diff1
value: 49.25187779934308
- type: nauc_precision_at_100_max
value: 54.90462014878204
- type: nauc_precision_at_10_diff1
value: 9.375044420325825
- type: nauc_precision_at_10_max
value: 11.118715229369158
- type: nauc_precision_at_1_diff1
value: 5.156690676517333
- type: nauc_precision_at_1_max
value: -9.64584413837327
- type: nauc_precision_at_20_diff1
value: 12.648487139563313
- type: nauc_precision_at_20_max
value: 29.17269939791144
- type: nauc_precision_at_3_diff1
value: 0.5381479007985195
- type: nauc_precision_at_3_max
value: -1.319607327988569
- type: nauc_precision_at_5_diff1
value: -0.530675691789191
- type: nauc_precision_at_5_max
value: -0.3449755187285182
- type: nauc_recall_at_1000_diff1
value: 26.83269399415972
- type: nauc_recall_at_1000_max
value: 28.137198292180138
- type: nauc_recall_at_100_diff1
value: 49.25187779934272
- type: nauc_recall_at_100_max
value: 54.90462014878089
- type: nauc_recall_at_10_diff1
value: 9.375044420325978
- type: nauc_recall_at_10_max
value: 11.118715229369167
- type: nauc_recall_at_1_diff1
value: 5.156690676517333
- type: nauc_recall_at_1_max
value: -9.64584413837327
- type: nauc_recall_at_20_diff1
value: 12.648487139563178
- type: nauc_recall_at_20_max
value: 29.172699397911256
- type: nauc_recall_at_3_diff1
value: 0.5381479007985096
- type: nauc_recall_at_3_max
value: -1.3196073279885299
- type: nauc_recall_at_5_diff1
value: -0.5306756917892376
- type: nauc_recall_at_5_max
value: -0.34497551872854154
- type: ndcg_at_1
value: 28.307
- type: ndcg_at_10
value: 53.593999999999994
- type: ndcg_at_100
value: 57.13399999999999
- type: ndcg_at_1000
value: 57.28
- type: ndcg_at_20
value: 55.861000000000004
- type: ndcg_at_3
value: 43.091
- type: ndcg_at_5
value: 48.16
- type: precision_at_1
value: 28.307
- type: precision_at_10
value: 8.3
- type: precision_at_100
value: 0.985
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.595
- type: precision_at_3
value: 17.994
- type: precision_at_5
value: 13.257
- type: recall_at_1
value: 28.307
- type: recall_at_10
value: 83.001
- type: recall_at_100
value: 98.506
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 91.892
- type: recall_at_3
value: 53.983000000000004
- type: recall_at_5
value: 66.28699999999999
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 41.75510333108447
- type: v_measures
value:
- 0.38932105262642
- 0.41167658391196155
- 0.4152007083702598
- 0.43751533882806676
- 0.41353841462129437
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 31.628031176398057
- type: v_measures
value:
- 0.3124730271530551
- 0.30410053196374376
- 0.31038902598125107
- 0.3037853444036682
- 0.3061080414991767
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 56.78685528593245
- type: mrr
value: 70.13113925163786
- type: nAUC_map_diff1
value: 2.9860496068519695
- type: nAUC_map_max
value: 22.582369735674774
- type: nAUC_mrr_diff1
value: 10.846967439812445
- type: nAUC_mrr_max
value: 35.29439227015077
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.88368776567987
- type: cos_sim_spearman
value: 83.98625103310174
- type: euclidean_pearson
value: 84.15851334353565
- type: euclidean_spearman
value: 83.50611961105386
- type: manhattan_pearson
value: 84.26852097545078
- type: manhattan_spearman
value: 83.74287199356931
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.31493506493506
- type: f1
value: 80.18539252802539
- type: f1_weighted
value: 80.1853925280254
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 36.01213557884716
- type: v_measures
value:
- 0.35149213783659844
- 0.3504551848301787
- 0.3777396210177721
- 0.36713470804377507
- 0.35699360527484775
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.77320940838855
- type: v_measures
value:
- 0.30066854059482007
- 0.27912691518289856
- 0.28109177448868566
- 0.27788082204726
- 0.28174202201956644
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 28.931
- type: map_at_10
value: 39.226
- type: map_at_100
value: 40.641
- type: map_at_1000
value: 40.758
- type: map_at_20
value: 39.947
- type: map_at_3
value: 35.893
- type: map_at_5
value: 37.911
- type: mrr_at_1
value: 36.33762517882689
- type: mrr_at_10
value: 45.437813656697756
- type: mrr_at_100
value: 46.263112207849225
- type: mrr_at_1000
value: 46.31120643750262
- type: mrr_at_20
value: 45.89328313995794
- type: mrr_at_3
value: 42.89461134954697
- type: mrr_at_5
value: 44.49690033381019
- type: nauc_map_at_1000_diff1
value: 51.149900773445665
- type: nauc_map_at_1000_max
value: 38.68580130673067
- type: nauc_map_at_100_diff1
value: 51.10002536433903
- type: nauc_map_at_100_max
value: 38.641317822870484
- type: nauc_map_at_10_diff1
value: 51.124389332061504
- type: nauc_map_at_10_max
value: 38.318568221563254
- type: nauc_map_at_1_diff1
value: 56.90092514723948
- type: nauc_map_at_1_max
value: 37.61485298818892
- type: nauc_map_at_20_diff1
value: 51.181676535641515
- type: nauc_map_at_20_max
value: 38.50630258148947
- type: nauc_map_at_3_diff1
value: 52.9080719662819
- type: nauc_map_at_3_max
value: 37.65490829785428
- type: nauc_map_at_5_diff1
value: 51.88563997044587
- type: nauc_map_at_5_max
value: 38.162982469441104
- type: nauc_mrr_at_1000_diff1
value: 52.02314497512314
- type: nauc_mrr_at_1000_max
value: 42.51237380812326
- type: nauc_mrr_at_100_diff1
value: 52.00022544992019
- type: nauc_mrr_at_100_max
value: 42.47931426167529
- type: nauc_mrr_at_10_diff1
value: 51.91527284768196
- type: nauc_mrr_at_10_max
value: 42.39017221462642
- type: nauc_mrr_at_1_diff1
value: 57.748140308636906
- type: nauc_mrr_at_1_max
value: 45.151335057931625
- type: nauc_mrr_at_20_diff1
value: 52.014517489654786
- type: nauc_mrr_at_20_max
value: 42.502037133226224
- type: nauc_mrr_at_3_diff1
value: 53.44263059806559
- type: nauc_mrr_at_3_max
value: 42.54366394954965
- type: nauc_mrr_at_5_diff1
value: 52.40067352297368
- type: nauc_mrr_at_5_max
value: 42.39770466495629
- type: nauc_ndcg_at_1000_diff1
value: 49.303067288367096
- type: nauc_ndcg_at_1000_max
value: 40.15083357935891
- type: nauc_ndcg_at_100_diff1
value: 48.06078219853983
- type: nauc_ndcg_at_100_max
value: 39.099873422335584
- type: nauc_ndcg_at_10_diff1
value: 48.427405777556764
- type: nauc_ndcg_at_10_max
value: 38.8466159356305
- type: nauc_ndcg_at_1_diff1
value: 57.748140308636906
- type: nauc_ndcg_at_1_max
value: 45.151335057931625
- type: nauc_ndcg_at_20_diff1
value: 48.400275143008884
- type: nauc_ndcg_at_20_max
value: 38.987281654803155
- type: nauc_ndcg_at_3_diff1
value: 51.94028236848058
- type: nauc_ndcg_at_3_max
value: 39.22267932164834
- type: nauc_ndcg_at_5_diff1
value: 50.228342110462435
- type: nauc_ndcg_at_5_max
value: 39.25835142473454
- type: nauc_precision_at_1000_diff1
value: -6.148682329597722
- type: nauc_precision_at_1000_max
value: 1.1132760594569802
- type: nauc_precision_at_100_diff1
value: -0.42183455399296765
- type: nauc_precision_at_100_max
value: 12.337898495315343
- type: nauc_precision_at_10_diff1
value: 18.94429698742333
- type: nauc_precision_at_10_max
value: 28.777738237731203
- type: nauc_precision_at_1_diff1
value: 57.748140308636906
- type: nauc_precision_at_1_max
value: 45.151335057931625
- type: nauc_precision_at_20_diff1
value: 12.915885854552354
- type: nauc_precision_at_20_max
value: 24.01402704364973
- type: nauc_precision_at_3_diff1
value: 36.634218047630384
- type: nauc_precision_at_3_max
value: 36.27512688680148
- type: nauc_precision_at_5_diff1
value: 28.272819211308992
- type: nauc_precision_at_5_max
value: 33.34907639932695
- type: nauc_recall_at_1000_diff1
value: 26.52022379258474
- type: nauc_recall_at_1000_max
value: 49.10217378309213
- type: nauc_recall_at_100_diff1
value: 25.383923002033832
- type: nauc_recall_at_100_max
value: 29.224125741020877
- type: nauc_recall_at_10_diff1
value: 36.465429616129015
- type: nauc_recall_at_10_max
value: 33.39232875391991
- type: nauc_recall_at_1_diff1
value: 56.90092514723948
- type: nauc_recall_at_1_max
value: 37.61485298818892
- type: nauc_recall_at_20_diff1
value: 34.97381075257172
- type: nauc_recall_at_20_max
value: 33.453578222267346
- type: nauc_recall_at_3_diff1
value: 47.268820296829134
- type: nauc_recall_at_3_max
value: 35.21361112290018
- type: nauc_recall_at_5_diff1
value: 42.36929492536004
- type: nauc_recall_at_5_max
value: 34.972452567095665
- type: ndcg_at_1
value: 36.338
- type: ndcg_at_10
value: 45.07
- type: ndcg_at_100
value: 50.619
- type: ndcg_at_1000
value: 52.729000000000006
- type: ndcg_at_20
value: 47.027
- type: ndcg_at_3
value: 40.388000000000005
- type: ndcg_at_5
value: 42.811
- type: precision_at_1
value: 36.338
- type: precision_at_10
value: 8.541
- type: precision_at_100
value: 1.391
- type: precision_at_1000
value: 0.184
- type: precision_at_20
value: 5.007000000000001
- type: precision_at_3
value: 19.409000000000002
- type: precision_at_5
value: 14.163
- type: recall_at_1
value: 28.931
- type: recall_at_10
value: 55.701
- type: recall_at_100
value: 79.389
- type: recall_at_1000
value: 93.366
- type: recall_at_20
value: 62.833000000000006
- type: recall_at_3
value: 42.007
- type: recall_at_5
value: 48.84
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 23.638
- type: map_at_10
value: 32.277
- type: map_at_100
value: 33.363
- type: map_at_1000
value: 33.488
- type: map_at_20
value: 32.857
- type: map_at_3
value: 29.748
- type: map_at_5
value: 31.179000000000002
- type: mrr_at_1
value: 30.254777070063692
- type: mrr_at_10
value: 37.817384490951355
- type: mrr_at_100
value: 38.525912264467145
- type: mrr_at_1000
value: 38.58069194468667
- type: mrr_at_20
value: 38.20930815682446
- type: mrr_at_3
value: 35.700636942675146
- type: mrr_at_5
value: 36.926751592356666
- type: nauc_map_at_1000_diff1
value: 45.223392226929235
- type: nauc_map_at_1000_max
value: 38.24038259272163
- type: nauc_map_at_100_diff1
value: 45.23190429504784
- type: nauc_map_at_100_max
value: 38.19794744902846
- type: nauc_map_at_10_diff1
value: 45.515992352450176
- type: nauc_map_at_10_max
value: 37.548960747017844
- type: nauc_map_at_1_diff1
value: 52.291404507813056
- type: nauc_map_at_1_max
value: 33.46767953286993
- type: nauc_map_at_20_diff1
value: 45.332400816656936
- type: nauc_map_at_20_max
value: 37.878067742926675
- type: nauc_map_at_3_diff1
value: 46.829233538381764
- type: nauc_map_at_3_max
value: 36.69901435795047
- type: nauc_map_at_5_diff1
value: 46.12298460254266
- type: nauc_map_at_5_max
value: 37.34969360011008
- type: nauc_mrr_at_1000_diff1
value: 41.898365322188674
- type: nauc_mrr_at_1000_max
value: 39.304566277957704
- type: nauc_mrr_at_100_diff1
value: 41.88883697764852
- type: nauc_mrr_at_100_max
value: 39.30077276431053
- type: nauc_mrr_at_10_diff1
value: 42.062104921386506
- type: nauc_mrr_at_10_max
value: 39.30528366258507
- type: nauc_mrr_at_1_diff1
value: 47.92599437007114
- type: nauc_mrr_at_1_max
value: 39.11863678363455
- type: nauc_mrr_at_20_diff1
value: 41.88168571216021
- type: nauc_mrr_at_20_max
value: 39.26248573846707
- type: nauc_mrr_at_3_diff1
value: 43.07190580570743
- type: nauc_mrr_at_3_max
value: 39.87788973395513
- type: nauc_mrr_at_5_diff1
value: 42.49866565630987
- type: nauc_mrr_at_5_max
value: 39.54834907714328
- type: nauc_ndcg_at_1000_diff1
value: 41.51353648334291
- type: nauc_ndcg_at_1000_max
value: 39.603326878012986
- type: nauc_ndcg_at_100_diff1
value: 41.30454895265097
- type: nauc_ndcg_at_100_max
value: 39.313602966554505
- type: nauc_ndcg_at_10_diff1
value: 42.02099052567711
- type: nauc_ndcg_at_10_max
value: 38.534861088136715
- type: nauc_ndcg_at_1_diff1
value: 47.92599437007114
- type: nauc_ndcg_at_1_max
value: 39.11863678363455
- type: nauc_ndcg_at_20_diff1
value: 41.663145625518375
- type: nauc_ndcg_at_20_max
value: 38.752693813154075
- type: nauc_ndcg_at_3_diff1
value: 43.68575961185724
- type: nauc_ndcg_at_3_max
value: 39.40226210725685
- type: nauc_ndcg_at_5_diff1
value: 43.00140726081697
- type: nauc_ndcg_at_5_max
value: 39.21485362612467
- type: nauc_precision_at_1000_diff1
value: -2.790275135023392
- type: nauc_precision_at_1000_max
value: 17.818318660525463
- type: nauc_precision_at_100_diff1
value: 2.0554939129182417
- type: nauc_precision_at_100_max
value: 29.753860102457935
- type: nauc_precision_at_10_diff1
value: 15.094160126474254
- type: nauc_precision_at_10_max
value: 37.972874196449126
- type: nauc_precision_at_1_diff1
value: 47.92599437007114
- type: nauc_precision_at_1_max
value: 39.11863678363455
- type: nauc_precision_at_20_diff1
value: 10.746873592106713
- type: nauc_precision_at_20_max
value: 36.96468826692449
- type: nauc_precision_at_3_diff1
value: 28.944521315560483
- type: nauc_precision_at_3_max
value: 42.03983245575044
- type: nauc_precision_at_5_diff1
value: 23.828098284010075
- type: nauc_precision_at_5_max
value: 41.76526648017447
- type: nauc_recall_at_1000_diff1
value: 26.537966542990997
- type: nauc_recall_at_1000_max
value: 41.86346125540241
- type: nauc_recall_at_100_diff1
value: 28.044584247129283
- type: nauc_recall_at_100_max
value: 37.42247127416711
- type: nauc_recall_at_10_diff1
value: 33.434563672243115
- type: nauc_recall_at_10_max
value: 34.63428918279095
- type: nauc_recall_at_1_diff1
value: 52.291404507813056
- type: nauc_recall_at_1_max
value: 33.46767953286993
- type: nauc_recall_at_20_diff1
value: 31.189066205007187
- type: nauc_recall_at_20_max
value: 35.3704318509917
- type: nauc_recall_at_3_diff1
value: 39.67602671214362
- type: nauc_recall_at_3_max
value: 35.6485218636747
- type: nauc_recall_at_5_diff1
value: 36.71118621793804
- type: nauc_recall_at_5_max
value: 35.81341336007971
- type: ndcg_at_1
value: 30.255
- type: ndcg_at_10
value: 37.376
- type: ndcg_at_100
value: 41.678
- type: ndcg_at_1000
value: 44.079
- type: ndcg_at_20
value: 38.942
- type: ndcg_at_3
value: 33.641
- type: ndcg_at_5
value: 35.346
- type: precision_at_1
value: 30.255
- type: precision_at_10
value: 7.102
- type: precision_at_100
value: 1.184
- type: precision_at_1000
value: 0.166
- type: precision_at_20
value: 4.185
- type: precision_at_3
value: 16.348
- type: precision_at_5
value: 11.591999999999999
- type: recall_at_1
value: 23.638
- type: recall_at_10
value: 46.524
- type: recall_at_100
value: 65.118
- type: recall_at_1000
value: 81.133
- type: recall_at_20
value: 52.331
- type: recall_at_3
value: 35.254999999999995
- type: recall_at_5
value: 40.174
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 35.667
- type: map_at_10
value: 47.397
- type: map_at_100
value: 48.366
- type: map_at_1000
value: 48.433
- type: map_at_20
value: 47.963
- type: map_at_3
value: 44.211
- type: map_at_5
value: 46.037
- type: mrr_at_1
value: 40.87774294670846
- type: mrr_at_10
value: 50.565880479673616
- type: mrr_at_100
value: 51.271230622181626
- type: mrr_at_1000
value: 51.306805744714836
- type: mrr_at_20
value: 51.012075318045525
- type: mrr_at_3
value: 47.9832810867294
- type: mrr_at_5
value: 49.525600835945724
- type: nauc_map_at_1000_diff1
value: 50.33288681013869
- type: nauc_map_at_1000_max
value: 37.44437438806084
- type: nauc_map_at_100_diff1
value: 50.317492085630064
- type: nauc_map_at_100_max
value: 37.426681891363835
- type: nauc_map_at_10_diff1
value: 50.24182139242321
- type: nauc_map_at_10_max
value: 36.91039477771677
- type: nauc_map_at_1_diff1
value: 56.01200063592147
- type: nauc_map_at_1_max
value: 31.767342114075202
- type: nauc_map_at_20_diff1
value: 50.21631708851613
- type: nauc_map_at_20_max
value: 37.28818324793643
- type: nauc_map_at_3_diff1
value: 51.111793089491364
- type: nauc_map_at_3_max
value: 36.16516417187456
- type: nauc_map_at_5_diff1
value: 50.47567188188865
- type: nauc_map_at_5_max
value: 36.72222550501132
- type: nauc_mrr_at_1000_diff1
value: 49.29372112096636
- type: nauc_mrr_at_1000_max
value: 39.248284382084236
- type: nauc_mrr_at_100_diff1
value: 49.28279373491327
- type: nauc_mrr_at_100_max
value: 39.26004837053389
- type: nauc_mrr_at_10_diff1
value: 49.123704806290434
- type: nauc_mrr_at_10_max
value: 39.05266034946078
- type: nauc_mrr_at_1_diff1
value: 53.88859746474265
- type: nauc_mrr_at_1_max
value: 37.056204568674275
- type: nauc_mrr_at_20_diff1
value: 49.18403232554298
- type: nauc_mrr_at_20_max
value: 39.26689196401381
- type: nauc_mrr_at_3_diff1
value: 49.59424894836517
- type: nauc_mrr_at_3_max
value: 38.95714592509984
- type: nauc_mrr_at_5_diff1
value: 49.257845318012954
- type: nauc_mrr_at_5_max
value: 39.30070104826491
- type: nauc_ndcg_at_1000_diff1
value: 48.91743661336846
- type: nauc_ndcg_at_1000_max
value: 39.39031133623686
- type: nauc_ndcg_at_100_diff1
value: 48.61511346835115
- type: nauc_ndcg_at_100_max
value: 39.459340998985724
- type: nauc_ndcg_at_10_diff1
value: 48.06588542038947
- type: nauc_ndcg_at_10_max
value: 38.157829321231
- type: nauc_ndcg_at_1_diff1
value: 53.88859746474265
- type: nauc_ndcg_at_1_max
value: 37.056204568674275
- type: nauc_ndcg_at_20_diff1
value: 48.05115075637084
- type: nauc_ndcg_at_20_max
value: 39.2235027218884
- type: nauc_ndcg_at_3_diff1
value: 49.30878740373676
- type: nauc_ndcg_at_3_max
value: 37.84032746584941
- type: nauc_ndcg_at_5_diff1
value: 48.47712228032605
- type: nauc_ndcg_at_5_max
value: 38.38589466282407
- type: nauc_precision_at_1000_diff1
value: -7.243262652381105
- type: nauc_precision_at_1000_max
value: 18.453365469588427
- type: nauc_precision_at_100_diff1
value: -2.0153970546194753
- type: nauc_precision_at_100_max
value: 24.22667501786602
- type: nauc_precision_at_10_diff1
value: 14.979334560516222
- type: nauc_precision_at_10_max
value: 33.13307837324579
- type: nauc_precision_at_1_diff1
value: 53.88859746474265
- type: nauc_precision_at_1_max
value: 37.056204568674275
- type: nauc_precision_at_20_diff1
value: 8.379765029951027
- type: nauc_precision_at_20_max
value: 32.28271665269386
- type: nauc_precision_at_3_diff1
value: 31.16831547354767
- type: nauc_precision_at_3_max
value: 38.10801385749373
- type: nauc_precision_at_5_diff1
value: 23.32241470046817
- type: nauc_precision_at_5_max
value: 37.2000516679205
- type: nauc_recall_at_1000_diff1
value: 40.03022783413472
- type: nauc_recall_at_1000_max
value: 49.77189630896353
- type: nauc_recall_at_100_diff1
value: 39.485228558001154
- type: nauc_recall_at_100_max
value: 44.84364760927468
- type: nauc_recall_at_10_diff1
value: 39.911638774960096
- type: nauc_recall_at_10_max
value: 37.00135324546857
- type: nauc_recall_at_1_diff1
value: 56.01200063592147
- type: nauc_recall_at_1_max
value: 31.767342114075202
- type: nauc_recall_at_20_diff1
value: 38.604788301520685
- type: nauc_recall_at_20_max
value: 42.21099902041599
- type: nauc_recall_at_3_diff1
value: 44.913068402378755
- type: nauc_recall_at_3_max
value: 36.35063250643407
- type: nauc_recall_at_5_diff1
value: 42.15428494957372
- type: nauc_recall_at_5_max
value: 38.11256932308573
- type: ndcg_at_1
value: 40.878
- type: ndcg_at_10
value: 53.062
- type: ndcg_at_100
value: 57.160999999999994
- type: ndcg_at_1000
value: 58.538999999999994
- type: ndcg_at_20
value: 54.821
- type: ndcg_at_3
value: 47.544
- type: ndcg_at_5
value: 50.305
- type: precision_at_1
value: 40.878
- type: precision_at_10
value: 8.564
- type: precision_at_100
value: 1.155
- type: precision_at_1000
value: 0.133
- type: precision_at_20
value: 4.79
- type: precision_at_3
value: 21.108
- type: precision_at_5
value: 14.658
- type: recall_at_1
value: 35.667
- type: recall_at_10
value: 66.766
- type: recall_at_100
value: 84.553
- type: recall_at_1000
value: 94.346
- type: recall_at_20
value: 73.272
- type: recall_at_3
value: 52.139
- type: recall_at_5
value: 58.816
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 20.618
- type: map_at_10
value: 27.47
- type: map_at_100
value: 28.505000000000003
- type: map_at_1000
value: 28.594
- type: map_at_20
value: 28.057
- type: map_at_3
value: 24.918000000000003
- type: map_at_5
value: 26.229999999999997
- type: mrr_at_1
value: 22.372881355932204
- type: mrr_at_10
value: 29.201910142588083
- type: mrr_at_100
value: 30.158796422923974
- type: mrr_at_1000
value: 30.226705533923436
- type: mrr_at_20
value: 29.748232470158786
- type: mrr_at_3
value: 26.723163841807924
- type: mrr_at_5
value: 27.97175141242938
- type: nauc_map_at_1000_diff1
value: 42.518789058108034
- type: nauc_map_at_1000_max
value: 21.82284691419632
- type: nauc_map_at_100_diff1
value: 42.513945119173414
- type: nauc_map_at_100_max
value: 21.824969680261194
- type: nauc_map_at_10_diff1
value: 42.21492283731788
- type: nauc_map_at_10_max
value: 21.305888026069674
- type: nauc_map_at_1_diff1
value: 47.62145355083881
- type: nauc_map_at_1_max
value: 22.35827304798013
- type: nauc_map_at_20_diff1
value: 42.31982757448588
- type: nauc_map_at_20_max
value: 21.594656622891048
- type: nauc_map_at_3_diff1
value: 44.10607386887907
- type: nauc_map_at_3_max
value: 21.690680453680425
- type: nauc_map_at_5_diff1
value: 43.24911634980367
- type: nauc_map_at_5_max
value: 21.736719675567752
- type: nauc_mrr_at_1000_diff1
value: 41.98881956610886
- type: nauc_mrr_at_1000_max
value: 23.673388697614747
- type: nauc_mrr_at_100_diff1
value: 41.975077881853366
- type: nauc_mrr_at_100_max
value: 23.680855488904697
- type: nauc_mrr_at_10_diff1
value: 41.753512191582516
- type: nauc_mrr_at_10_max
value: 23.286885884623786
- type: nauc_mrr_at_1_diff1
value: 48.01121917180329
- type: nauc_mrr_at_1_max
value: 25.91040117459629
- type: nauc_mrr_at_20_diff1
value: 41.837798974871795
- type: nauc_mrr_at_20_max
value: 23.53887919859698
- type: nauc_mrr_at_3_diff1
value: 43.74425619417245
- type: nauc_mrr_at_3_max
value: 23.80181072142051
- type: nauc_mrr_at_5_diff1
value: 42.77128789582419
- type: nauc_mrr_at_5_max
value: 23.78994160229315
- type: nauc_ndcg_at_1000_diff1
value: 40.4038817214834
- type: nauc_ndcg_at_1000_max
value: 22.308549183052513
- type: nauc_ndcg_at_100_diff1
value: 40.55678288183828
- type: nauc_ndcg_at_100_max
value: 22.609367205269443
- type: nauc_ndcg_at_10_diff1
value: 38.83098871853759
- type: nauc_ndcg_at_10_max
value: 20.68362628733941
- type: nauc_ndcg_at_1_diff1
value: 48.01121917180329
- type: nauc_ndcg_at_1_max
value: 25.91040117459629
- type: nauc_ndcg_at_20_diff1
value: 39.061663618713894
- type: nauc_ndcg_at_20_max
value: 21.476419663219456
- type: nauc_ndcg_at_3_diff1
value: 42.736087127795955
- type: nauc_ndcg_at_3_max
value: 21.742127165660058
- type: nauc_ndcg_at_5_diff1
value: 41.186966297966734
- type: nauc_ndcg_at_5_max
value: 21.759401429767212
- type: nauc_precision_at_1000_diff1
value: 6.654559938649311
- type: nauc_precision_at_1000_max
value: 16.806910891601543
- type: nauc_precision_at_100_diff1
value: 25.864492780814064
- type: nauc_precision_at_100_max
value: 25.263440890575012
- type: nauc_precision_at_10_diff1
value: 28.182469153166974
- type: nauc_precision_at_10_max
value: 21.10173854858086
- type: nauc_precision_at_1_diff1
value: 48.01121917180329
- type: nauc_precision_at_1_max
value: 25.91040117459629
- type: nauc_precision_at_20_diff1
value: 26.16409861031152
- type: nauc_precision_at_20_max
value: 22.589571974868473
- type: nauc_precision_at_3_diff1
value: 39.49309649649902
- type: nauc_precision_at_3_max
value: 23.66194846956826
- type: nauc_precision_at_5_diff1
value: 35.47688709673743
- type: nauc_precision_at_5_max
value: 23.5888265356714
- type: nauc_recall_at_1000_diff1
value: 28.057334771322758
- type: nauc_recall_at_1000_max
value: 17.48633214718912
- type: nauc_recall_at_100_diff1
value: 35.67263027900714
- type: nauc_recall_at_100_max
value: 23.115839579250103
- type: nauc_recall_at_10_diff1
value: 28.261498615045998
- type: nauc_recall_at_10_max
value: 16.20575819609654
- type: nauc_recall_at_1_diff1
value: 47.62145355083881
- type: nauc_recall_at_1_max
value: 22.35827304798013
- type: nauc_recall_at_20_diff1
value: 28.255566253430192
- type: nauc_recall_at_20_max
value: 18.257219460506295
- type: nauc_recall_at_3_diff1
value: 39.30800774709927
- type: nauc_recall_at_3_max
value: 19.810995082445473
- type: nauc_recall_at_5_diff1
value: 35.27158910591411
- type: nauc_recall_at_5_max
value: 19.678077623550937
- type: ndcg_at_1
value: 22.373
- type: ndcg_at_10
value: 31.918000000000003
- type: ndcg_at_100
value: 36.992000000000004
- type: ndcg_at_1000
value: 39.513
- type: ndcg_at_20
value: 33.983999999999995
- type: ndcg_at_3
value: 26.832
- type: ndcg_at_5
value: 29.078
- type: precision_at_1
value: 22.373
- type: precision_at_10
value: 5.04
- type: precision_at_100
value: 0.7979999999999999
- type: precision_at_1000
value: 0.106
- type: precision_at_20
value: 3.0
- type: precision_at_3
value: 11.299
- type: precision_at_5
value: 8.045
- type: recall_at_1
value: 20.618
- type: recall_at_10
value: 44.202000000000005
- type: recall_at_100
value: 67.242
- type: recall_at_1000
value: 86.69200000000001
- type: recall_at_20
value: 52.03
- type: recall_at_3
value: 30.386000000000003
- type: recall_at_5
value: 35.858000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 12.731
- type: map_at_10
value: 19.054
- type: map_at_100
value: 20.313
- type: map_at_1000
value: 20.443
- type: map_at_20
value: 19.77
- type: map_at_3
value: 16.596
- type: map_at_5
value: 18.013
- type: mrr_at_1
value: 15.796019900497512
- type: mrr_at_10
value: 22.789327173655526
- type: mrr_at_100
value: 23.862539649948847
- type: mrr_at_1000
value: 23.946663050199312
- type: mrr_at_20
value: 23.427143696525313
- type: mrr_at_3
value: 20.27363184079603
- type: mrr_at_5
value: 21.68532338308458
- type: nauc_map_at_1000_diff1
value: 27.516068477843454
- type: nauc_map_at_1000_max
value: 17.80187067129459
- type: nauc_map_at_100_diff1
value: 27.49088157159594
- type: nauc_map_at_100_max
value: 17.78426299327126
- type: nauc_map_at_10_diff1
value: 27.322013804309574
- type: nauc_map_at_10_max
value: 17.001651979041277
- type: nauc_map_at_1_diff1
value: 32.676819886304166
- type: nauc_map_at_1_max
value: 15.203042726400561
- type: nauc_map_at_20_diff1
value: 27.44288664011662
- type: nauc_map_at_20_max
value: 17.908350138714276
- type: nauc_map_at_3_diff1
value: 28.50114932717826
- type: nauc_map_at_3_max
value: 17.780823694386235
- type: nauc_map_at_5_diff1
value: 27.86215762055489
- type: nauc_map_at_5_max
value: 17.50773539133613
- type: nauc_mrr_at_1000_diff1
value: 29.947843223207236
- type: nauc_mrr_at_1000_max
value: 19.62172810233295
- type: nauc_mrr_at_100_diff1
value: 29.9288142137001
- type: nauc_mrr_at_100_max
value: 19.629003114636255
- type: nauc_mrr_at_10_diff1
value: 29.97657648240847
- type: nauc_mrr_at_10_max
value: 19.194295823726197
- type: nauc_mrr_at_1_diff1
value: 35.00554412354239
- type: nauc_mrr_at_1_max
value: 17.759999184794772
- type: nauc_mrr_at_20_diff1
value: 29.96168512518019
- type: nauc_mrr_at_20_max
value: 19.812693338679974
- type: nauc_mrr_at_3_diff1
value: 31.869293054331997
- type: nauc_mrr_at_3_max
value: 19.72221933712261
- type: nauc_mrr_at_5_diff1
value: 30.633662242516408
- type: nauc_mrr_at_5_max
value: 19.633065520422832
- type: nauc_ndcg_at_1000_diff1
value: 26.41309716877246
- type: nauc_ndcg_at_1000_max
value: 19.407030290375477
- type: nauc_ndcg_at_100_diff1
value: 26.033991008430068
- type: nauc_ndcg_at_100_max
value: 19.18116285140471
- type: nauc_ndcg_at_10_diff1
value: 25.58417445038125
- type: nauc_ndcg_at_10_max
value: 17.264882794530223
- type: nauc_ndcg_at_1_diff1
value: 35.00554412354239
- type: nauc_ndcg_at_1_max
value: 17.759999184794772
- type: nauc_ndcg_at_20_diff1
value: 25.93407473459688
- type: nauc_ndcg_at_20_max
value: 19.950029090611025
- type: nauc_ndcg_at_3_diff1
value: 28.72061546564716
- type: nauc_ndcg_at_3_max
value: 19.386795976250635
- type: nauc_ndcg_at_5_diff1
value: 27.154487593736675
- type: nauc_ndcg_at_5_max
value: 18.600609597997746
- type: nauc_precision_at_1000_diff1
value: 5.41924757448531
- type: nauc_precision_at_1000_max
value: 6.545740061131494
- type: nauc_precision_at_100_diff1
value: 14.592825976137824
- type: nauc_precision_at_100_max
value: 14.125640563802245
- type: nauc_precision_at_10_diff1
value: 21.4491651411123
- type: nauc_precision_at_10_max
value: 16.9551658679841
- type: nauc_precision_at_1_diff1
value: 35.00554412354239
- type: nauc_precision_at_1_max
value: 17.759999184794772
- type: nauc_precision_at_20_diff1
value: 19.92971906917106
- type: nauc_precision_at_20_max
value: 23.22690053316326
- type: nauc_precision_at_3_diff1
value: 27.57959149246176
- type: nauc_precision_at_3_max
value: 22.093284431161333
- type: nauc_precision_at_5_diff1
value: 25.25496908645805
- type: nauc_precision_at_5_max
value: 20.458763176343208
- type: nauc_recall_at_1000_diff1
value: 16.984282437643287
- type: nauc_recall_at_1000_max
value: 24.737697260268117
- type: nauc_recall_at_100_diff1
value: 17.950878545274918
- type: nauc_recall_at_100_max
value: 19.837467082624126
- type: nauc_recall_at_10_diff1
value: 18.161945687752247
- type: nauc_recall_at_10_max
value: 14.97915128596929
- type: nauc_recall_at_1_diff1
value: 32.676819886304166
- type: nauc_recall_at_1_max
value: 15.203042726400561
- type: nauc_recall_at_20_diff1
value: 19.155358112421517
- type: nauc_recall_at_20_max
value: 22.374680334603898
- type: nauc_recall_at_3_diff1
value: 24.842029532917927
- type: nauc_recall_at_3_max
value: 20.135627867318494
- type: nauc_recall_at_5_diff1
value: 22.00729745995486
- type: nauc_recall_at_5_max
value: 18.21612524182701
- type: ndcg_at_1
value: 15.796
- type: ndcg_at_10
value: 23.528
- type: ndcg_at_100
value: 29.537000000000003
- type: ndcg_at_1000
value: 32.719
- type: ndcg_at_20
value: 25.935000000000002
- type: ndcg_at_3
value: 18.908
- type: ndcg_at_5
value: 21.154
- type: precision_at_1
value: 15.796
- type: precision_at_10
value: 4.515000000000001
- type: precision_at_100
value: 0.8789999999999999
- type: precision_at_1000
value: 0.129
- type: precision_at_20
value: 2.942
- type: precision_at_3
value: 9.163
- type: precision_at_5
value: 7.04
- type: recall_at_1
value: 12.731
- type: recall_at_10
value: 33.797
- type: recall_at_100
value: 59.914
- type: recall_at_1000
value: 82.718
- type: recall_at_20
value: 42.347
- type: recall_at_3
value: 20.923
- type: recall_at_5
value: 26.71
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 23.567
- type: map_at_10
value: 32.628
- type: map_at_100
value: 33.908
- type: map_at_1000
value: 34.039
- type: map_at_20
value: 33.363
- type: map_at_3
value: 29.726999999999997
- type: map_at_5
value: 31.347
- type: mrr_at_1
value: 29.25890279114533
- type: mrr_at_10
value: 38.079960890355494
- type: mrr_at_100
value: 38.97514464689193
- type: mrr_at_1000
value: 39.038125614768354
- type: mrr_at_20
value: 38.620932209488565
- type: mrr_at_3
value: 35.65928777670837
- type: mrr_at_5
value: 37.00673724735321
- type: nauc_map_at_1000_diff1
value: 44.243962711843174
- type: nauc_map_at_1000_max
value: 26.557934640504595
- type: nauc_map_at_100_diff1
value: 44.24191729725104
- type: nauc_map_at_100_max
value: 26.520477564732385
- type: nauc_map_at_10_diff1
value: 44.33869154968317
- type: nauc_map_at_10_max
value: 26.044484632871434
- type: nauc_map_at_1_diff1
value: 50.15813855147419
- type: nauc_map_at_1_max
value: 26.303389987904445
- type: nauc_map_at_20_diff1
value: 44.3113665446356
- type: nauc_map_at_20_max
value: 26.237662556133813
- type: nauc_map_at_3_diff1
value: 45.85173282565928
- type: nauc_map_at_3_max
value: 26.32504035565671
- type: nauc_map_at_5_diff1
value: 44.78643814548486
- type: nauc_map_at_5_max
value: 26.334504875414634
- type: nauc_mrr_at_1000_diff1
value: 43.626249945153624
- type: nauc_mrr_at_1000_max
value: 29.330289291530644
- type: nauc_mrr_at_100_diff1
value: 43.613407635792015
- type: nauc_mrr_at_100_max
value: 29.319268635273986
- type: nauc_mrr_at_10_diff1
value: 43.63724190566422
- type: nauc_mrr_at_10_max
value: 29.108055344568847
- type: nauc_mrr_at_1_diff1
value: 48.217336788734755
- type: nauc_mrr_at_1_max
value: 30.672813296466302
- type: nauc_mrr_at_20_diff1
value: 43.649017818875166
- type: nauc_mrr_at_20_max
value: 29.261304940945127
- type: nauc_mrr_at_3_diff1
value: 44.675792519491715
- type: nauc_mrr_at_3_max
value: 29.675911336957483
- type: nauc_mrr_at_5_diff1
value: 43.64775996596029
- type: nauc_mrr_at_5_max
value: 29.45182353499564
- type: nauc_ndcg_at_1000_diff1
value: 41.87489199354678
- type: nauc_ndcg_at_1000_max
value: 27.93893077509421
- type: nauc_ndcg_at_100_diff1
value: 41.670343791634906
- type: nauc_ndcg_at_100_max
value: 27.313715056723876
- type: nauc_ndcg_at_10_diff1
value: 41.85016751613856
- type: nauc_ndcg_at_10_max
value: 25.643066472480765
- type: nauc_ndcg_at_1_diff1
value: 48.217336788734755
- type: nauc_ndcg_at_1_max
value: 30.672813296466302
- type: nauc_ndcg_at_20_diff1
value: 41.97037963181627
- type: nauc_ndcg_at_20_max
value: 26.33944171406708
- type: nauc_ndcg_at_3_diff1
value: 44.06711834714099
- type: nauc_ndcg_at_3_max
value: 27.34491521639161
- type: nauc_ndcg_at_5_diff1
value: 42.4168573468611
- type: nauc_ndcg_at_5_max
value: 26.65793931965115
- type: nauc_precision_at_1000_diff1
value: -9.551422528655461
- type: nauc_precision_at_1000_max
value: 8.34835764204442
- type: nauc_precision_at_100_diff1
value: 2.2233830685766245
- type: nauc_precision_at_100_max
value: 18.020691836598584
- type: nauc_precision_at_10_diff1
value: 19.325743761791916
- type: nauc_precision_at_10_max
value: 23.679007985508786
- type: nauc_precision_at_1_diff1
value: 48.217336788734755
- type: nauc_precision_at_1_max
value: 30.672813296466302
- type: nauc_precision_at_20_diff1
value: 13.87527519424572
- type: nauc_precision_at_20_max
value: 22.302645068156657
- type: nauc_precision_at_3_diff1
value: 33.05090446279134
- type: nauc_precision_at_3_max
value: 29.389174313703947
- type: nauc_precision_at_5_diff1
value: 25.75562225572127
- type: nauc_precision_at_5_max
value: 27.147828437597372
- type: nauc_recall_at_1000_diff1
value: 19.621088665598236
- type: nauc_recall_at_1000_max
value: 30.43205196145353
- type: nauc_recall_at_100_diff1
value: 27.23232029826097
- type: nauc_recall_at_100_max
value: 22.14067215503966
- type: nauc_recall_at_10_diff1
value: 33.10443747704974
- type: nauc_recall_at_10_max
value: 19.41308822202282
- type: nauc_recall_at_1_diff1
value: 50.15813855147419
- type: nauc_recall_at_1_max
value: 26.303389987904445
- type: nauc_recall_at_20_diff1
value: 32.276483197865936
- type: nauc_recall_at_20_max
value: 20.72725151323571
- type: nauc_recall_at_3_diff1
value: 40.22031270566891
- type: nauc_recall_at_3_max
value: 23.9301104444151
- type: nauc_recall_at_5_diff1
value: 35.98209271954092
- type: nauc_recall_at_5_max
value: 22.83878482624863
- type: ndcg_at_1
value: 29.259
- type: ndcg_at_10
value: 38.207
- type: ndcg_at_100
value: 43.711
- type: ndcg_at_1000
value: 46.341
- type: ndcg_at_20
value: 40.498
- type: ndcg_at_3
value: 33.532000000000004
- type: ndcg_at_5
value: 35.69
- type: precision_at_1
value: 29.259
- type: precision_at_10
value: 7.007
- type: precision_at_100
value: 1.1560000000000001
- type: precision_at_1000
value: 0.158
- type: precision_at_20
value: 4.244
- type: precision_at_3
value: 16.041
- type: precision_at_5
value: 11.511000000000001
- type: recall_at_1
value: 23.567
- type: recall_at_10
value: 49.523
- type: recall_at_100
value: 72.562
- type: recall_at_1000
value: 90.178
- type: recall_at_20
value: 57.621
- type: recall_at_3
value: 36.282
- type: recall_at_5
value: 41.921
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 22.266
- type: map_at_10
value: 30.25
- type: map_at_100
value: 31.581
- type: map_at_1000
value: 31.704
- type: map_at_20
value: 30.952
- type: map_at_3
value: 27.466
- type: map_at_5
value: 29.072
- type: mrr_at_1
value: 27.397260273972602
- type: mrr_at_10
value: 35.2599296948612
- type: mrr_at_100
value: 36.24881323819273
- type: mrr_at_1000
value: 36.31077886612844
- type: mrr_at_20
value: 35.848406062858004
- type: mrr_at_3
value: 32.591324200913256
- type: mrr_at_5
value: 34.235159817351594
- type: nauc_map_at_1000_diff1
value: 49.041338317944216
- type: nauc_map_at_1000_max
value: 38.50873723942883
- type: nauc_map_at_100_diff1
value: 49.01701126534856
- type: nauc_map_at_100_max
value: 38.49295698329094
- type: nauc_map_at_10_diff1
value: 49.095813348188166
- type: nauc_map_at_10_max
value: 37.90864503064915
- type: nauc_map_at_1_diff1
value: 55.75650937633808
- type: nauc_map_at_1_max
value: 36.45803206536568
- type: nauc_map_at_20_diff1
value: 48.88486278259804
- type: nauc_map_at_20_max
value: 38.234576284979276
- type: nauc_map_at_3_diff1
value: 50.800510951074344
- type: nauc_map_at_3_max
value: 36.75190407865029
- type: nauc_map_at_5_diff1
value: 49.60838604964711
- type: nauc_map_at_5_max
value: 37.32035047604114
- type: nauc_mrr_at_1000_diff1
value: 49.13411044876944
- type: nauc_mrr_at_1000_max
value: 38.97006615081024
- type: nauc_mrr_at_100_diff1
value: 49.11706960503639
- type: nauc_mrr_at_100_max
value: 38.96559788105358
- type: nauc_mrr_at_10_diff1
value: 49.092123992814116
- type: nauc_mrr_at_10_max
value: 38.94728645893312
- type: nauc_mrr_at_1_diff1
value: 55.47287529444724
- type: nauc_mrr_at_1_max
value: 40.293546568686224
- type: nauc_mrr_at_20_diff1
value: 48.96467402915927
- type: nauc_mrr_at_20_max
value: 38.86612256286537
- type: nauc_mrr_at_3_diff1
value: 50.69348233488136
- type: nauc_mrr_at_3_max
value: 39.07374242862782
- type: nauc_mrr_at_5_diff1
value: 49.48713462272688
- type: nauc_mrr_at_5_max
value: 38.903556289495874
- type: nauc_ndcg_at_1000_diff1
value: 46.865532935814144
- type: nauc_ndcg_at_1000_max
value: 39.54745630513795
- type: nauc_ndcg_at_100_diff1
value: 46.320278315069814
- type: nauc_ndcg_at_100_max
value: 39.38111071082402
- type: nauc_ndcg_at_10_diff1
value: 46.21493444038667
- type: nauc_ndcg_at_10_max
value: 38.21912668950852
- type: nauc_ndcg_at_1_diff1
value: 55.47287529444724
- type: nauc_ndcg_at_1_max
value: 40.293546568686224
- type: nauc_ndcg_at_20_diff1
value: 45.64094089105446
- type: nauc_ndcg_at_20_max
value: 38.59596868552488
- type: nauc_ndcg_at_3_diff1
value: 49.016415433673835
- type: nauc_ndcg_at_3_max
value: 37.89533426315243
- type: nauc_ndcg_at_5_diff1
value: 47.20788719798163
- type: nauc_ndcg_at_5_max
value: 37.682560665048904
- type: nauc_precision_at_1000_diff1
value: -7.359826953607673
- type: nauc_precision_at_1000_max
value: 5.6412152804640066
- type: nauc_precision_at_100_diff1
value: 4.466458911297046
- type: nauc_precision_at_100_max
value: 24.578741906158726
- type: nauc_precision_at_10_diff1
value: 22.359709568967506
- type: nauc_precision_at_10_max
value: 36.47969015950308
- type: nauc_precision_at_1_diff1
value: 55.47287529444724
- type: nauc_precision_at_1_max
value: 40.293546568686224
- type: nauc_precision_at_20_diff1
value: 14.120893949469135
- type: nauc_precision_at_20_max
value: 34.249667264582534
- type: nauc_precision_at_3_diff1
value: 37.7007086171551
- type: nauc_precision_at_3_max
value: 37.95445662666999
- type: nauc_precision_at_5_diff1
value: 29.715009411494712
- type: nauc_precision_at_5_max
value: 36.89274409767293
- type: nauc_recall_at_1000_diff1
value: 32.33111662036445
- type: nauc_recall_at_1000_max
value: 45.35170430166642
- type: nauc_recall_at_100_diff1
value: 32.144354498328035
- type: nauc_recall_at_100_max
value: 36.84062501935607
- type: nauc_recall_at_10_diff1
value: 36.14633959446269
- type: nauc_recall_at_10_max
value: 35.448585836721506
- type: nauc_recall_at_1_diff1
value: 55.75650937633808
- type: nauc_recall_at_1_max
value: 36.45803206536568
- type: nauc_recall_at_20_diff1
value: 32.97579259309187
- type: nauc_recall_at_20_max
value: 35.23418118770078
- type: nauc_recall_at_3_diff1
value: 44.664415627999816
- type: nauc_recall_at_3_max
value: 34.31461153552717
- type: nauc_recall_at_5_diff1
value: 40.19780197689489
- type: nauc_recall_at_5_max
value: 34.5962677637341
- type: ndcg_at_1
value: 27.397
- type: ndcg_at_10
value: 35.443999999999996
- type: ndcg_at_100
value: 41.429
- type: ndcg_at_1000
value: 44.059
- type: ndcg_at_20
value: 37.714999999999996
- type: ndcg_at_3
value: 30.679000000000002
- type: ndcg_at_5
value: 32.992
- type: precision_at_1
value: 27.397
- type: precision_at_10
value: 6.5409999999999995
- type: precision_at_100
value: 1.102
- type: precision_at_1000
value: 0.151
- type: precision_at_20
value: 3.961
- type: precision_at_3
value: 14.536
- type: precision_at_5
value: 10.685
- type: recall_at_1
value: 22.266
- type: recall_at_10
value: 46.071
- type: recall_at_100
value: 72.064
- type: recall_at_1000
value: 90.038
- type: recall_at_20
value: 54.342999999999996
- type: recall_at_3
value: 32.926
- type: recall_at_5
value: 38.75
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 17.261000000000003
- type: map_at_10
value: 23.753
- type: map_at_100
value: 24.627
- type: map_at_1000
value: 24.735
- type: map_at_20
value: 24.274
- type: map_at_3
value: 21.413
- type: map_at_5
value: 22.745
- type: mrr_at_1
value: 19.32515337423313
- type: mrr_at_10
value: 25.910081312688664
- type: mrr_at_100
value: 26.732545270907544
- type: mrr_at_1000
value: 26.81958074650717
- type: mrr_at_20
value: 26.38483873098964
- type: mrr_at_3
value: 23.77300613496933
- type: mrr_at_5
value: 25.02300613496933
- type: nauc_map_at_1000_diff1
value: 51.2509463814842
- type: nauc_map_at_1000_max
value: 34.59378039517527
- type: nauc_map_at_100_diff1
value: 51.20195960756142
- type: nauc_map_at_100_max
value: 34.52292810417864
- type: nauc_map_at_10_diff1
value: 50.971047162244375
- type: nauc_map_at_10_max
value: 34.42837100976023
- type: nauc_map_at_1_diff1
value: 61.66057666415862
- type: nauc_map_at_1_max
value: 34.54325674205874
- type: nauc_map_at_20_diff1
value: 51.16950921599598
- type: nauc_map_at_20_max
value: 34.50836855076594
- type: nauc_map_at_3_diff1
value: 52.980211175481394
- type: nauc_map_at_3_max
value: 34.10535134653065
- type: nauc_map_at_5_diff1
value: 51.825290665802
- type: nauc_map_at_5_max
value: 34.48591848937056
- type: nauc_mrr_at_1000_diff1
value: 51.50014502932111
- type: nauc_mrr_at_1000_max
value: 36.80362520167512
- type: nauc_mrr_at_100_diff1
value: 51.447470381911685
- type: nauc_mrr_at_100_max
value: 36.776788558968704
- type: nauc_mrr_at_10_diff1
value: 51.264885407403696
- type: nauc_mrr_at_10_max
value: 36.93350671984603
- type: nauc_mrr_at_1_diff1
value: 60.877750778528494
- type: nauc_mrr_at_1_max
value: 36.49057984523738
- type: nauc_mrr_at_20_diff1
value: 51.3534499982496
- type: nauc_mrr_at_20_max
value: 36.84780620387409
- type: nauc_mrr_at_3_diff1
value: 53.30071892113097
- type: nauc_mrr_at_3_max
value: 36.820559680318546
- type: nauc_mrr_at_5_diff1
value: 52.220386246212556
- type: nauc_mrr_at_5_max
value: 37.04291739287823
- type: nauc_ndcg_at_1000_diff1
value: 48.42608193180114
- type: nauc_ndcg_at_1000_max
value: 35.93812099772579
- type: nauc_ndcg_at_100_diff1
value: 47.5791516869875
- type: nauc_ndcg_at_100_max
value: 34.85361305271241
- type: nauc_ndcg_at_10_diff1
value: 46.85004446008741
- type: nauc_ndcg_at_10_max
value: 34.62550268395681
- type: nauc_ndcg_at_1_diff1
value: 60.877750778528494
- type: nauc_ndcg_at_1_max
value: 36.49057984523738
- type: nauc_ndcg_at_20_diff1
value: 47.301675307241545
- type: nauc_ndcg_at_20_max
value: 34.762713095272225
- type: nauc_ndcg_at_3_diff1
value: 50.570168102906564
- type: nauc_ndcg_at_3_max
value: 35.019669654163586
- type: nauc_ndcg_at_5_diff1
value: 48.66877986875303
- type: nauc_ndcg_at_5_max
value: 35.01212166467292
- type: nauc_precision_at_1000_diff1
value: 14.228081363546169
- type: nauc_precision_at_1000_max
value: 32.18702497143084
- type: nauc_precision_at_100_diff1
value: 27.494269464828974
- type: nauc_precision_at_100_max
value: 37.41573760452751
- type: nauc_precision_at_10_diff1
value: 33.933451544379366
- type: nauc_precision_at_10_max
value: 38.49427569486423
- type: nauc_precision_at_1_diff1
value: 60.877750778528494
- type: nauc_precision_at_1_max
value: 36.49057984523738
- type: nauc_precision_at_20_diff1
value: 34.397803404800605
- type: nauc_precision_at_20_max
value: 40.15514058102005
- type: nauc_precision_at_3_diff1
value: 42.88433793638738
- type: nauc_precision_at_3_max
value: 38.4764975067788
- type: nauc_precision_at_5_diff1
value: 38.93369587658407
- type: nauc_precision_at_5_max
value: 39.456916993900585
- type: nauc_recall_at_1000_diff1
value: 37.19758635716514
- type: nauc_recall_at_1000_max
value: 36.93465372531077
- type: nauc_recall_at_100_diff1
value: 35.404949235194174
- type: nauc_recall_at_100_max
value: 30.630300224996066
- type: nauc_recall_at_10_diff1
value: 34.702045929932055
- type: nauc_recall_at_10_max
value: 31.534616746827915
- type: nauc_recall_at_1_diff1
value: 61.66057666415862
- type: nauc_recall_at_1_max
value: 34.54325674205874
- type: nauc_recall_at_20_diff1
value: 35.24947576154629
- type: nauc_recall_at_20_max
value: 31.041888309997695
- type: nauc_recall_at_3_diff1
value: 43.141135363012054
- type: nauc_recall_at_3_max
value: 31.535167376189584
- type: nauc_recall_at_5_diff1
value: 38.72810643136954
- type: nauc_recall_at_5_max
value: 32.01182215240314
- type: ndcg_at_1
value: 19.325
- type: ndcg_at_10
value: 27.722
- type: ndcg_at_100
value: 32.0
- type: ndcg_at_1000
value: 34.77
- type: ndcg_at_20
value: 29.465000000000003
- type: ndcg_at_3
value: 23.341
- type: ndcg_at_5
value: 25.529000000000003
- type: precision_at_1
value: 19.325
- type: precision_at_10
value: 4.601
- type: precision_at_100
value: 0.721
- type: precision_at_1000
value: 0.104
- type: precision_at_20
value: 2.692
- type: precision_at_3
value: 10.327
- type: precision_at_5
value: 7.515
- type: recall_at_1
value: 17.261000000000003
- type: recall_at_10
value: 37.802
- type: recall_at_100
value: 57.166
- type: recall_at_1000
value: 77.469
- type: recall_at_20
value: 44.318999999999996
- type: recall_at_3
value: 26.116
- type: recall_at_5
value: 31.366
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 12.961
- type: map_at_10
value: 18.740000000000002
- type: map_at_100
value: 19.703
- type: map_at_1000
value: 19.825
- type: map_at_20
value: 19.216
- type: map_at_3
value: 16.694
- type: map_at_5
value: 17.743000000000002
- type: mrr_at_1
value: 16.10461114934618
- type: mrr_at_10
value: 22.051188564437027
- type: mrr_at_100
value: 22.947710833057016
- type: mrr_at_1000
value: 23.031251789042475
- type: mrr_at_20
value: 22.535362926344344
- type: mrr_at_3
value: 20.061940812112866
- type: mrr_at_5
value: 21.092567102546468
- type: nauc_map_at_1000_diff1
value: 28.720484948469466
- type: nauc_map_at_1000_max
value: 22.865440767140637
- type: nauc_map_at_100_diff1
value: 28.725034760353314
- type: nauc_map_at_100_max
value: 22.84046004129796
- type: nauc_map_at_10_diff1
value: 28.952012695372698
- type: nauc_map_at_10_max
value: 22.793975798196286
- type: nauc_map_at_1_diff1
value: 35.53613349593089
- type: nauc_map_at_1_max
value: 24.140548014012747
- type: nauc_map_at_20_diff1
value: 28.853451957069336
- type: nauc_map_at_20_max
value: 22.799743549101326
- type: nauc_map_at_3_diff1
value: 29.951337425480883
- type: nauc_map_at_3_max
value: 22.610756063409553
- type: nauc_map_at_5_diff1
value: 29.37330668286449
- type: nauc_map_at_5_max
value: 22.57878266649173
- type: nauc_mrr_at_1000_diff1
value: 27.90192701434291
- type: nauc_mrr_at_1000_max
value: 23.7579661122046
- type: nauc_mrr_at_100_diff1
value: 27.900632259474882
- type: nauc_mrr_at_100_max
value: 23.75784428285424
- type: nauc_mrr_at_10_diff1
value: 28.00880872779524
- type: nauc_mrr_at_10_max
value: 23.798169424406627
- type: nauc_mrr_at_1_diff1
value: 34.309863568911425
- type: nauc_mrr_at_1_max
value: 26.916059981932417
- type: nauc_mrr_at_20_diff1
value: 28.043424996676624
- type: nauc_mrr_at_20_max
value: 23.783097407351868
- type: nauc_mrr_at_3_diff1
value: 28.872236354185237
- type: nauc_mrr_at_3_max
value: 24.10001094600915
- type: nauc_mrr_at_5_diff1
value: 28.431586921893327
- type: nauc_mrr_at_5_max
value: 23.793770139983565
- type: nauc_ndcg_at_1000_diff1
value: 25.26133758890153
- type: nauc_ndcg_at_1000_max
value: 22.369863581700518
- type: nauc_ndcg_at_100_diff1
value: 25.295918102117653
- type: nauc_ndcg_at_100_max
value: 22.19607938223796
- type: nauc_ndcg_at_10_diff1
value: 26.73394941848248
- type: nauc_ndcg_at_10_max
value: 22.53565041597461
- type: nauc_ndcg_at_1_diff1
value: 34.309863568911425
- type: nauc_ndcg_at_1_max
value: 26.916059981932417
- type: nauc_ndcg_at_20_diff1
value: 26.483879384526325
- type: nauc_ndcg_at_20_max
value: 22.37283043808397
- type: nauc_ndcg_at_3_diff1
value: 28.233989865507585
- type: nauc_ndcg_at_3_max
value: 23.18337582626765
- type: nauc_ndcg_at_5_diff1
value: 27.586183431281597
- type: nauc_ndcg_at_5_max
value: 22.525122228978613
- type: nauc_precision_at_1000_diff1
value: 4.291961797660381
- type: nauc_precision_at_1000_max
value: 20.066766200392706
- type: nauc_precision_at_100_diff1
value: 9.25374685617893
- type: nauc_precision_at_100_max
value: 23.561539434177973
- type: nauc_precision_at_10_diff1
value: 18.543124835189897
- type: nauc_precision_at_10_max
value: 25.99560427639843
- type: nauc_precision_at_1_diff1
value: 34.309863568911425
- type: nauc_precision_at_1_max
value: 26.916059981932417
- type: nauc_precision_at_20_diff1
value: 17.32859805675752
- type: nauc_precision_at_20_max
value: 25.111647024470713
- type: nauc_precision_at_3_diff1
value: 23.11307218784423
- type: nauc_precision_at_3_max
value: 25.43882757760188
- type: nauc_precision_at_5_diff1
value: 21.066799573535427
- type: nauc_precision_at_5_max
value: 25.53816237609956
- type: nauc_recall_at_1000_diff1
value: 9.108450047050564
- type: nauc_recall_at_1000_max
value: 15.552865366057592
- type: nauc_recall_at_100_diff1
value: 14.425072798063132
- type: nauc_recall_at_100_max
value: 17.05584096508452
- type: nauc_recall_at_10_diff1
value: 20.957155461035747
- type: nauc_recall_at_10_max
value: 18.77313505623332
- type: nauc_recall_at_1_diff1
value: 35.53613349593089
- type: nauc_recall_at_1_max
value: 24.140548014012747
- type: nauc_recall_at_20_diff1
value: 19.96872494547587
- type: nauc_recall_at_20_max
value: 18.462760317549197
- type: nauc_recall_at_3_diff1
value: 24.694266156911524
- type: nauc_recall_at_3_max
value: 19.640837676020173
- type: nauc_recall_at_5_diff1
value: 23.065469774243972
- type: nauc_recall_at_5_max
value: 18.657460685134776
- type: ndcg_at_1
value: 16.105
- type: ndcg_at_10
value: 22.708000000000002
- type: ndcg_at_100
value: 27.653
- type: ndcg_at_1000
value: 30.812
- type: ndcg_at_20
value: 24.346
- type: ndcg_at_3
value: 18.95
- type: ndcg_at_5
value: 20.522000000000002
- type: precision_at_1
value: 16.105
- type: precision_at_10
value: 4.267
- type: precision_at_100
value: 0.799
- type: precision_at_1000
value: 0.124
- type: precision_at_20
value: 2.6100000000000003
- type: precision_at_3
value: 9.049999999999999
- type: precision_at_5
value: 6.593
- type: recall_at_1
value: 12.961
- type: recall_at_10
value: 31.438
- type: recall_at_100
value: 54.129000000000005
- type: recall_at_1000
value: 77.076
- type: recall_at_20
value: 37.518
- type: recall_at_3
value: 20.997
- type: recall_at_5
value: 25.074999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 20.709
- type: map_at_10
value: 28.738999999999997
- type: map_at_100
value: 29.815
- type: map_at_1000
value: 29.932
- type: map_at_20
value: 29.282999999999998
- type: map_at_3
value: 26.441
- type: map_at_5
value: 27.777
- type: mrr_at_1
value: 24.53358208955224
- type: mrr_at_10
value: 32.394463693437544
- type: mrr_at_100
value: 33.350174597946385
- type: mrr_at_1000
value: 33.41993464841955
- type: mrr_at_20
value: 32.94076610467875
- type: mrr_at_3
value: 30.130597014925375
- type: mrr_at_5
value: 31.473880597014904
- type: nauc_map_at_1000_diff1
value: 40.692748340864746
- type: nauc_map_at_1000_max
value: 35.57839095914156
- type: nauc_map_at_100_diff1
value: 40.650799378493744
- type: nauc_map_at_100_max
value: 35.53795709449845
- type: nauc_map_at_10_diff1
value: 40.89383138365538
- type: nauc_map_at_10_max
value: 35.44293342259398
- type: nauc_map_at_1_diff1
value: 49.12072003473875
- type: nauc_map_at_1_max
value: 35.88899688359625
- type: nauc_map_at_20_diff1
value: 40.67489507417953
- type: nauc_map_at_20_max
value: 35.37903608045856
- type: nauc_map_at_3_diff1
value: 41.5317838231129
- type: nauc_map_at_3_max
value: 35.46770908063441
- type: nauc_map_at_5_diff1
value: 40.846545617446004
- type: nauc_map_at_5_max
value: 35.14965178055238
- type: nauc_mrr_at_1000_diff1
value: 40.73361687958999
- type: nauc_mrr_at_1000_max
value: 37.121713108339534
- type: nauc_mrr_at_100_diff1
value: 40.71129341657058
- type: nauc_mrr_at_100_max
value: 37.11517896403668
- type: nauc_mrr_at_10_diff1
value: 40.72473147121323
- type: nauc_mrr_at_10_max
value: 37.04589115955753
- type: nauc_mrr_at_1_diff1
value: 48.388878266455734
- type: nauc_mrr_at_1_max
value: 37.526360339847045
- type: nauc_mrr_at_20_diff1
value: 40.61982213089854
- type: nauc_mrr_at_20_max
value: 37.00491513836514
- type: nauc_mrr_at_3_diff1
value: 41.37485831338118
- type: nauc_mrr_at_3_max
value: 37.47176509970741
- type: nauc_mrr_at_5_diff1
value: 40.93161777811511
- type: nauc_mrr_at_5_max
value: 37.078286920815906
- type: nauc_ndcg_at_1000_diff1
value: 38.5467813816651
- type: nauc_ndcg_at_1000_max
value: 36.596764984052825
- type: nauc_ndcg_at_100_diff1
value: 37.67469746267849
- type: nauc_ndcg_at_100_max
value: 35.8208874944717
- type: nauc_ndcg_at_10_diff1
value: 38.66595637217053
- type: nauc_ndcg_at_10_max
value: 35.6228257599822
- type: nauc_ndcg_at_1_diff1
value: 48.388878266455734
- type: nauc_ndcg_at_1_max
value: 37.526360339847045
- type: nauc_ndcg_at_20_diff1
value: 37.890275853954094
- type: nauc_ndcg_at_20_max
value: 35.25047254404629
- type: nauc_ndcg_at_3_diff1
value: 39.87230430483416
- type: nauc_ndcg_at_3_max
value: 36.008184210199325
- type: nauc_ndcg_at_5_diff1
value: 38.841236541335206
- type: nauc_ndcg_at_5_max
value: 35.192374109201246
- type: nauc_precision_at_1000_diff1
value: 1.657722375056512
- type: nauc_precision_at_1000_max
value: 11.706401779440883
- type: nauc_precision_at_100_diff1
value: 10.20061825548431
- type: nauc_precision_at_100_max
value: 22.845634742237408
- type: nauc_precision_at_10_diff1
value: 26.632700346478916
- type: nauc_precision_at_10_max
value: 32.62334674689399
- type: nauc_precision_at_1_diff1
value: 48.388878266455734
- type: nauc_precision_at_1_max
value: 37.526360339847045
- type: nauc_precision_at_20_diff1
value: 20.876173735564592
- type: nauc_precision_at_20_max
value: 28.850377091435526
- type: nauc_precision_at_3_diff1
value: 32.025223944269
- type: nauc_precision_at_3_max
value: 35.71025859086816
- type: nauc_precision_at_5_diff1
value: 28.967780161302443
- type: nauc_precision_at_5_max
value: 33.49195837301289
- type: nauc_recall_at_1000_diff1
value: 23.608841697077036
- type: nauc_recall_at_1000_max
value: 41.20735928314203
- type: nauc_recall_at_100_diff1
value: 22.76475282031864
- type: nauc_recall_at_100_max
value: 30.663804567546897
- type: nauc_recall_at_10_diff1
value: 31.20793541893715
- type: nauc_recall_at_10_max
value: 32.83480866538358
- type: nauc_recall_at_1_diff1
value: 49.12072003473875
- type: nauc_recall_at_1_max
value: 35.88899688359625
- type: nauc_recall_at_20_diff1
value: 27.465280423335305
- type: nauc_recall_at_20_max
value: 30.40284795095875
- type: nauc_recall_at_3_diff1
value: 34.792488128346164
- type: nauc_recall_at_3_max
value: 34.223694348326724
- type: nauc_recall_at_5_diff1
value: 32.528565271564474
- type: nauc_recall_at_5_max
value: 32.428759553708744
- type: ndcg_at_1
value: 24.534
- type: ndcg_at_10
value: 33.363
- type: ndcg_at_100
value: 38.737
- type: ndcg_at_1000
value: 41.508
- type: ndcg_at_20
value: 35.288000000000004
- type: ndcg_at_3
value: 29.083
- type: ndcg_at_5
value: 31.212
- type: precision_at_1
value: 24.534
- type: precision_at_10
value: 5.588
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.129
- type: precision_at_20
value: 3.3300000000000005
- type: precision_at_3
value: 13.245999999999999
- type: precision_at_5
value: 9.366
- type: recall_at_1
value: 20.709
- type: recall_at_10
value: 43.924
- type: recall_at_100
value: 67.823
- type: recall_at_1000
value: 87.665
- type: recall_at_20
value: 50.893
- type: recall_at_3
value: 32.175
- type: recall_at_5
value: 37.649
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 22.305
- type: map_at_10
value: 30.379
- type: map_at_100
value: 31.782
- type: map_at_1000
value: 32.012
- type: map_at_20
value: 31.064000000000004
- type: map_at_3
value: 27.881
- type: map_at_5
value: 29.160000000000004
- type: mrr_at_1
value: 27.07509881422925
- type: mrr_at_10
value: 34.71610515088775
- type: mrr_at_100
value: 35.64647402926793
- type: mrr_at_1000
value: 35.72461288830468
- type: mrr_at_20
value: 35.21386515614449
- type: mrr_at_3
value: 32.608695652173914
- type: mrr_at_5
value: 33.73517786561265
- type: nauc_map_at_1000_diff1
value: 40.06942921592567
- type: nauc_map_at_1000_max
value: 34.948952917618826
- type: nauc_map_at_100_diff1
value: 40.26652655508838
- type: nauc_map_at_100_max
value: 35.05037692834513
- type: nauc_map_at_10_diff1
value: 40.40482595725152
- type: nauc_map_at_10_max
value: 34.76994801074602
- type: nauc_map_at_1_diff1
value: 48.449155396082276
- type: nauc_map_at_1_max
value: 31.923255733967675
- type: nauc_map_at_20_diff1
value: 40.43121378897672
- type: nauc_map_at_20_max
value: 34.955059887164744
- type: nauc_map_at_3_diff1
value: 41.520030101234
- type: nauc_map_at_3_max
value: 33.87326916343342
- type: nauc_map_at_5_diff1
value: 40.68085798830698
- type: nauc_map_at_5_max
value: 34.52274061079644
- type: nauc_mrr_at_1000_diff1
value: 38.58624602600238
- type: nauc_mrr_at_1000_max
value: 36.71589604244066
- type: nauc_mrr_at_100_diff1
value: 38.57954339254479
- type: nauc_mrr_at_100_max
value: 36.71451461262756
- type: nauc_mrr_at_10_diff1
value: 38.39778240600376
- type: nauc_mrr_at_10_max
value: 36.867440078145805
- type: nauc_mrr_at_1_diff1
value: 45.54773488737558
- type: nauc_mrr_at_1_max
value: 35.46157252708776
- type: nauc_mrr_at_20_diff1
value: 38.56226741939672
- type: nauc_mrr_at_20_max
value: 36.79076112969171
- type: nauc_mrr_at_3_diff1
value: 39.241048736996326
- type: nauc_mrr_at_3_max
value: 36.81497880532945
- type: nauc_mrr_at_5_diff1
value: 38.75938933304581
- type: nauc_mrr_at_5_max
value: 36.91112394256869
- type: nauc_ndcg_at_1000_diff1
value: 37.01015933832102
- type: nauc_ndcg_at_1000_max
value: 36.14674427038953
- type: nauc_ndcg_at_100_diff1
value: 37.46009355653446
- type: nauc_ndcg_at_100_max
value: 36.168362134330415
- type: nauc_ndcg_at_10_diff1
value: 36.87998378155374
- type: nauc_ndcg_at_10_max
value: 36.03488979078424
- type: nauc_ndcg_at_1_diff1
value: 45.54773488737558
- type: nauc_ndcg_at_1_max
value: 35.46157252708776
- type: nauc_ndcg_at_20_diff1
value: 37.32245335628528
- type: nauc_ndcg_at_20_max
value: 35.98153437861986
- type: nauc_ndcg_at_3_diff1
value: 38.4065595992595
- type: nauc_ndcg_at_3_max
value: 36.16984761665991
- type: nauc_ndcg_at_5_diff1
value: 37.528041451543274
- type: nauc_ndcg_at_5_max
value: 36.29795461312836
- type: nauc_precision_at_1000_diff1
value: -27.028565760553704
- type: nauc_precision_at_1000_max
value: -6.211061610108618
- type: nauc_precision_at_100_diff1
value: -11.543495827856747
- type: nauc_precision_at_100_max
value: 10.08227744965561
- type: nauc_precision_at_10_diff1
value: 11.91615180702728
- type: nauc_precision_at_10_max
value: 31.648399736572237
- type: nauc_precision_at_1_diff1
value: 45.54773488737558
- type: nauc_precision_at_1_max
value: 35.46157252708776
- type: nauc_precision_at_20_diff1
value: 7.106796337295673
- type: nauc_precision_at_20_max
value: 28.270776285978005
- type: nauc_precision_at_3_diff1
value: 27.025372640430305
- type: nauc_precision_at_3_max
value: 37.05993782016582
- type: nauc_precision_at_5_diff1
value: 20.36905717821343
- type: nauc_precision_at_5_max
value: 36.78762312900936
- type: nauc_recall_at_1000_diff1
value: 15.327824598428135
- type: nauc_recall_at_1000_max
value: 37.388077518454125
- type: nauc_recall_at_100_diff1
value: 26.663273479931682
- type: nauc_recall_at_100_max
value: 35.19719455819416
- type: nauc_recall_at_10_diff1
value: 29.457868053419173
- type: nauc_recall_at_10_max
value: 34.69858107618685
- type: nauc_recall_at_1_diff1
value: 48.449155396082276
- type: nauc_recall_at_1_max
value: 31.923255733967675
- type: nauc_recall_at_20_diff1
value: 28.740287691134785
- type: nauc_recall_at_20_max
value: 33.54392173053316
- type: nauc_recall_at_3_diff1
value: 34.36341724443082
- type: nauc_recall_at_3_max
value: 34.23281133452072
- type: nauc_recall_at_5_diff1
value: 31.778622196668138
- type: nauc_recall_at_5_max
value: 35.09923813897011
- type: ndcg_at_1
value: 27.075
- type: ndcg_at_10
value: 35.35
- type: ndcg_at_100
value: 40.822
- type: ndcg_at_1000
value: 43.961
- type: ndcg_at_20
value: 37.13
- type: ndcg_at_3
value: 31.419000000000004
- type: ndcg_at_5
value: 33.032000000000004
- type: precision_at_1
value: 27.075
- type: precision_at_10
value: 6.64
- type: precision_at_100
value: 1.35
- type: precision_at_1000
value: 0.232
- type: precision_at_20
value: 4.14
- type: precision_at_3
value: 14.427000000000001
- type: precision_at_5
value: 10.435
- type: recall_at_1
value: 22.305
- type: recall_at_10
value: 44.456
- type: recall_at_100
value: 69.57799999999999
- type: recall_at_1000
value: 89.262
- type: recall_at_20
value: 51.434999999999995
- type: recall_at_3
value: 33.141999999999996
- type: recall_at_5
value: 37.51
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 15.723999999999998
- type: map_at_10
value: 22.479
- type: map_at_100
value: 23.494
- type: map_at_1000
value: 23.613
- type: map_at_20
value: 23.043
- type: map_at_3
value: 20.49
- type: map_at_5
value: 21.711
- type: mrr_at_1
value: 17.375231053604438
- type: mrr_at_10
value: 24.391485491300635
- type: mrr_at_100
value: 25.3451706703197
- type: mrr_at_1000
value: 25.4338976938353
- type: mrr_at_20
value: 24.932480156623605
- type: mrr_at_3
value: 22.48921749845963
- type: mrr_at_5
value: 23.62600123228588
- type: nauc_map_at_1000_diff1
value: 26.568452594006768
- type: nauc_map_at_1000_max
value: 22.6643108995624
- type: nauc_map_at_100_diff1
value: 26.56713050875225
- type: nauc_map_at_100_max
value: 22.72845024690553
- type: nauc_map_at_10_diff1
value: 26.48034872839756
- type: nauc_map_at_10_max
value: 22.77864909505566
- type: nauc_map_at_1_diff1
value: 35.16513757522047
- type: nauc_map_at_1_max
value: 22.34690217093654
- type: nauc_map_at_20_diff1
value: 26.373663262670444
- type: nauc_map_at_20_max
value: 22.587491027571254
- type: nauc_map_at_3_diff1
value: 27.621000302198922
- type: nauc_map_at_3_max
value: 22.84661442384809
- type: nauc_map_at_5_diff1
value: 26.765290689478732
- type: nauc_map_at_5_max
value: 22.988851260881056
- type: nauc_mrr_at_1000_diff1
value: 27.28527950781967
- type: nauc_mrr_at_1000_max
value: 22.818092962601042
- type: nauc_mrr_at_100_diff1
value: 27.29780478860489
- type: nauc_mrr_at_100_max
value: 22.85092145520846
- type: nauc_mrr_at_10_diff1
value: 27.245118068210814
- type: nauc_mrr_at_10_max
value: 22.93612080353226
- type: nauc_mrr_at_1_diff1
value: 36.22401042267479
- type: nauc_mrr_at_1_max
value: 24.1620633176196
- type: nauc_mrr_at_20_diff1
value: 27.10137249046854
- type: nauc_mrr_at_20_max
value: 22.74832608433313
- type: nauc_mrr_at_3_diff1
value: 28.803394273224846
- type: nauc_mrr_at_3_max
value: 23.58218274270813
- type: nauc_mrr_at_5_diff1
value: 27.548514879068392
- type: nauc_mrr_at_5_max
value: 23.202061782986362
- type: nauc_ndcg_at_1000_diff1
value: 24.255610268405004
- type: nauc_ndcg_at_1000_max
value: 21.021653182317866
- type: nauc_ndcg_at_100_diff1
value: 24.38035576235643
- type: nauc_ndcg_at_100_max
value: 22.01602046149638
- type: nauc_ndcg_at_10_diff1
value: 23.72345010383346
- type: nauc_ndcg_at_10_max
value: 22.379426846697886
- type: nauc_ndcg_at_1_diff1
value: 36.22401042267479
- type: nauc_ndcg_at_1_max
value: 24.1620633176196
- type: nauc_ndcg_at_20_diff1
value: 23.238204223853767
- type: nauc_ndcg_at_20_max
value: 21.524058764754642
- type: nauc_ndcg_at_3_diff1
value: 26.154431437162284
- type: nauc_ndcg_at_3_max
value: 23.12477560308262
- type: nauc_ndcg_at_5_diff1
value: 24.381279154864856
- type: nauc_ndcg_at_5_max
value: 22.928738776001943
- type: nauc_precision_at_1000_diff1
value: 10.866194934427694
- type: nauc_precision_at_1000_max
value: -8.119816513990198
- type: nauc_precision_at_100_diff1
value: 16.347299053203397
- type: nauc_precision_at_100_max
value: 13.26292415361133
- type: nauc_precision_at_10_diff1
value: 16.63699688800471
- type: nauc_precision_at_10_max
value: 22.375088256427286
- type: nauc_precision_at_1_diff1
value: 36.22401042267479
- type: nauc_precision_at_1_max
value: 24.1620633176196
- type: nauc_precision_at_20_diff1
value: 15.555806748912909
- type: nauc_precision_at_20_max
value: 18.55637142126297
- type: nauc_precision_at_3_diff1
value: 21.119629681631707
- type: nauc_precision_at_3_max
value: 25.238443284915007
- type: nauc_precision_at_5_diff1
value: 18.173398326347908
- type: nauc_precision_at_5_max
value: 24.277628318544387
- type: nauc_recall_at_1000_diff1
value: 11.904300629344641
- type: nauc_recall_at_1000_max
value: 4.543701587503855
- type: nauc_recall_at_100_diff1
value: 17.873778791471032
- type: nauc_recall_at_100_max
value: 18.07160995779775
- type: nauc_recall_at_10_diff1
value: 15.715088403469021
- type: nauc_recall_at_10_max
value: 20.285351500657857
- type: nauc_recall_at_1_diff1
value: 35.16513757522047
- type: nauc_recall_at_1_max
value: 22.34690217093654
- type: nauc_recall_at_20_diff1
value: 13.584020684215409
- type: nauc_recall_at_20_max
value: 16.915404230260844
- type: nauc_recall_at_3_diff1
value: 20.57543835256644
- type: nauc_recall_at_3_max
value: 22.257888049364798
- type: nauc_recall_at_5_diff1
value: 17.196563781054497
- type: nauc_recall_at_5_max
value: 21.786295860256278
- type: ndcg_at_1
value: 17.375
- type: ndcg_at_10
value: 26.458
- type: ndcg_at_100
value: 31.630999999999997
- type: ndcg_at_1000
value: 34.648
- type: ndcg_at_20
value: 28.429
- type: ndcg_at_3
value: 22.572
- type: ndcg_at_5
value: 24.627
- type: precision_at_1
value: 17.375
- type: precision_at_10
value: 4.3069999999999995
- type: precision_at_100
value: 0.747
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 2.616
- type: precision_at_3
value: 9.982000000000001
- type: precision_at_5
value: 7.172000000000001
- type: recall_at_1
value: 15.723999999999998
- type: recall_at_10
value: 36.848
- type: recall_at_100
value: 60.843
- type: recall_at_1000
value: 83.35900000000001
- type: recall_at_20
value: 44.239
- type: recall_at_3
value: 26.512999999999998
- type: recall_at_5
value: 31.447999999999997
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 7.556
- type: map_at_10
value: 14.451
- type: map_at_100
value: 16.098000000000003
- type: map_at_1000
value: 16.292
- type: map_at_20
value: 15.354000000000001
- type: map_at_3
value: 11.788
- type: map_at_5
value: 13.036
- type: mrr_at_1
value: 17.850162866449512
- type: mrr_at_10
value: 29.02070730572359
- type: mrr_at_100
value: 30.10374653258222
- type: mrr_at_1000
value: 30.159660391788854
- type: mrr_at_20
value: 29.705480653232243
- type: mrr_at_3
value: 25.287730727470116
- type: mrr_at_5
value: 27.437567861020568
- type: nauc_map_at_1000_diff1
value: 19.209527081030096
- type: nauc_map_at_1000_max
value: 31.647208883839507
- type: nauc_map_at_100_diff1
value: 19.20806522507485
- type: nauc_map_at_100_max
value: 31.548780447276
- type: nauc_map_at_10_diff1
value: 19.169908589166987
- type: nauc_map_at_10_max
value: 30.501288768500395
- type: nauc_map_at_1_diff1
value: 26.988959334325852
- type: nauc_map_at_1_max
value: 27.356073363716522
- type: nauc_map_at_20_diff1
value: 19.09827492317952
- type: nauc_map_at_20_max
value: 31.134688299749186
- type: nauc_map_at_3_diff1
value: 19.934035735585724
- type: nauc_map_at_3_max
value: 29.22218051641785
- type: nauc_map_at_5_diff1
value: 19.398656144868713
- type: nauc_map_at_5_max
value: 29.045993729549778
- type: nauc_mrr_at_1000_diff1
value: 16.978829558159727
- type: nauc_mrr_at_1000_max
value: 27.016129985398962
- type: nauc_mrr_at_100_diff1
value: 16.95693929120996
- type: nauc_mrr_at_100_max
value: 27.02464201206241
- type: nauc_mrr_at_10_diff1
value: 16.922383134541786
- type: nauc_mrr_at_10_max
value: 26.917342116854172
- type: nauc_mrr_at_1_diff1
value: 21.967275710063323
- type: nauc_mrr_at_1_max
value: 23.97730021914779
- type: nauc_mrr_at_20_diff1
value: 16.933125050384778
- type: nauc_mrr_at_20_max
value: 27.07768335891788
- type: nauc_mrr_at_3_diff1
value: 16.946763294333316
- type: nauc_mrr_at_3_max
value: 25.214811458539
- type: nauc_mrr_at_5_diff1
value: 17.04305756647301
- type: nauc_mrr_at_5_max
value: 26.130628979961834
- type: nauc_ndcg_at_1000_diff1
value: 16.986658675773686
- type: nauc_ndcg_at_1000_max
value: 34.4643347153785
- type: nauc_ndcg_at_100_diff1
value: 17.057499024976163
- type: nauc_ndcg_at_100_max
value: 33.73159453243811
- type: nauc_ndcg_at_10_diff1
value: 16.929966520239194
- type: nauc_ndcg_at_10_max
value: 31.301536380836026
- type: nauc_ndcg_at_1_diff1
value: 21.967275710063323
- type: nauc_ndcg_at_1_max
value: 23.97730021914779
- type: nauc_ndcg_at_20_diff1
value: 16.900348110026968
- type: nauc_ndcg_at_20_max
value: 32.476079344191525
- type: nauc_ndcg_at_3_diff1
value: 17.270453057670856
- type: nauc_ndcg_at_3_max
value: 27.75387606914448
- type: nauc_ndcg_at_5_diff1
value: 17.300131450254998
- type: nauc_ndcg_at_5_max
value: 28.707766380169097
- type: nauc_precision_at_1000_diff1
value: 2.3756918838598002
- type: nauc_precision_at_1000_max
value: 20.23410724169113
- type: nauc_precision_at_100_diff1
value: 6.358801887547644
- type: nauc_precision_at_100_max
value: 26.742998434337
- type: nauc_precision_at_10_diff1
value: 8.985726577486592
- type: nauc_precision_at_10_max
value: 29.98846164047006
- type: nauc_precision_at_1_diff1
value: 21.967275710063323
- type: nauc_precision_at_1_max
value: 23.97730021914779
- type: nauc_precision_at_20_diff1
value: 8.689481678265938
- type: nauc_precision_at_20_max
value: 30.24412868451184
- type: nauc_precision_at_3_diff1
value: 11.498289241456895
- type: nauc_precision_at_3_max
value: 26.84419245258572
- type: nauc_precision_at_5_diff1
value: 10.894319062565254
- type: nauc_precision_at_5_max
value: 27.273788735432884
- type: nauc_recall_at_1000_diff1
value: 8.943592557292224
- type: nauc_recall_at_1000_max
value: 37.585654238896446
- type: nauc_recall_at_100_diff1
value: 10.708206895515247
- type: nauc_recall_at_100_max
value: 32.10962530348595
- type: nauc_recall_at_10_diff1
value: 12.169794236323957
- type: nauc_recall_at_10_max
value: 30.12170288353037
- type: nauc_recall_at_1_diff1
value: 26.988959334325852
- type: nauc_recall_at_1_max
value: 27.356073363716522
- type: nauc_recall_at_20_diff1
value: 11.394888526086374
- type: nauc_recall_at_20_max
value: 30.72718903844353
- type: nauc_recall_at_3_diff1
value: 15.011650843515994
- type: nauc_recall_at_3_max
value: 28.233837827958475
- type: nauc_recall_at_5_diff1
value: 13.739007199689038
- type: nauc_recall_at_5_max
value: 27.097220418736455
- type: ndcg_at_1
value: 17.849999999999998
- type: ndcg_at_10
value: 21.712
- type: ndcg_at_100
value: 28.552
- type: ndcg_at_1000
value: 32.261
- type: ndcg_at_20
value: 24.421
- type: ndcg_at_3
value: 16.791
- type: ndcg_at_5
value: 18.462999999999997
- type: precision_at_1
value: 17.849999999999998
- type: precision_at_10
value: 7.212000000000001
- type: precision_at_100
value: 1.438
- type: precision_at_1000
value: 0.212
- type: precision_at_20
value: 4.73
- type: precision_at_3
value: 12.942
- type: precision_at_5
value: 10.280000000000001
- type: recall_at_1
value: 7.556
- type: recall_at_10
value: 27.891
- type: recall_at_100
value: 51.585
- type: recall_at_1000
value: 72.638
- type: recall_at_20
value: 35.644999999999996
- type: recall_at_3
value: 16.026
- type: recall_at_5
value: 20.507
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 7.234
- type: map_at_10
value: 14.607000000000001
- type: map_at_100
value: 20.104
- type: map_at_1000
value: 21.478
- type: map_at_20
value: 16.619999999999997
- type: map_at_3
value: 11.027000000000001
- type: map_at_5
value: 12.469
- type: mrr_at_1
value: 54.25
- type: mrr_at_10
value: 64.63998015873014
- type: mrr_at_100
value: 65.1130930093471
- type: mrr_at_1000
value: 65.13135082056961
- type: mrr_at_20
value: 64.94966038326137
- type: mrr_at_3
value: 62.458333333333336
- type: mrr_at_5
value: 63.845833333333324
- type: nauc_map_at_1000_diff1
value: 22.4158889201391
- type: nauc_map_at_1000_max
value: 7.026467662060626
- type: nauc_map_at_100_diff1
value: 23.04636496295622
- type: nauc_map_at_100_max
value: 4.725540774086458
- type: nauc_map_at_10_diff1
value: 23.432494495467783
- type: nauc_map_at_10_max
value: -5.821110663085555
- type: nauc_map_at_1_diff1
value: 34.840276007257444
- type: nauc_map_at_1_max
value: -11.37201527363141
- type: nauc_map_at_20_diff1
value: 24.395490704549474
- type: nauc_map_at_20_max
value: -2.1089029956487084
- type: nauc_map_at_3_diff1
value: 26.996333964606727
- type: nauc_map_at_3_max
value: -10.371168153982198
- type: nauc_map_at_5_diff1
value: 24.959954478462205
- type: nauc_map_at_5_max
value: -8.600893701670593
- type: nauc_mrr_at_1000_diff1
value: 35.24039282463778
- type: nauc_mrr_at_1000_max
value: 37.114026096308244
- type: nauc_mrr_at_100_diff1
value: 35.246986246738324
- type: nauc_mrr_at_100_max
value: 37.127597625848175
- type: nauc_mrr_at_10_diff1
value: 35.19817679146017
- type: nauc_mrr_at_10_max
value: 37.10088394447574
- type: nauc_mrr_at_1_diff1
value: 37.871437973819546
- type: nauc_mrr_at_1_max
value: 33.639317316766494
- type: nauc_mrr_at_20_diff1
value: 35.1331593237111
- type: nauc_mrr_at_20_max
value: 37.0319775042493
- type: nauc_mrr_at_3_diff1
value: 35.18290669114643
- type: nauc_mrr_at_3_max
value: 37.17151353458554
- type: nauc_mrr_at_5_diff1
value: 35.27152644879001
- type: nauc_mrr_at_5_max
value: 37.59776931748075
- type: nauc_ndcg_at_1000_diff1
value: 23.265231375797573
- type: nauc_ndcg_at_1000_max
value: 19.253303883964247
- type: nauc_ndcg_at_100_diff1
value: 24.65543924960885
- type: nauc_ndcg_at_100_max
value: 12.423207189979774
- type: nauc_ndcg_at_10_diff1
value: 22.383661242851076
- type: nauc_ndcg_at_10_max
value: 12.11544119539834
- type: nauc_ndcg_at_1_diff1
value: 35.37762392054306
- type: nauc_ndcg_at_1_max
value: 24.33308418951577
- type: nauc_ndcg_at_20_diff1
value: 24.56519958043796
- type: nauc_ndcg_at_20_max
value: 9.25238387333473
- type: nauc_ndcg_at_3_diff1
value: 24.39638864122631
- type: nauc_ndcg_at_3_max
value: 18.095896878796434
- type: nauc_ndcg_at_5_diff1
value: 21.554177625230157
- type: nauc_ndcg_at_5_max
value: 14.90300796432758
- type: nauc_precision_at_1000_diff1
value: -14.028751970399872
- type: nauc_precision_at_1000_max
value: 22.683829892782335
- type: nauc_precision_at_100_diff1
value: -1.4922684516357194
- type: nauc_precision_at_100_max
value: 32.211371870388795
- type: nauc_precision_at_10_diff1
value: 1.3791441135589875
- type: nauc_precision_at_10_max
value: 28.329452472562267
- type: nauc_precision_at_1_diff1
value: 37.871437973819546
- type: nauc_precision_at_1_max
value: 33.639317316766494
- type: nauc_precision_at_20_diff1
value: 3.1829444563318128
- type: nauc_precision_at_20_max
value: 30.79822842458981
- type: nauc_precision_at_3_diff1
value: 9.890760276356035
- type: nauc_precision_at_3_max
value: 27.255950486716085
- type: nauc_precision_at_5_diff1
value: 2.835882319987235
- type: nauc_precision_at_5_max
value: 27.588094099192865
- type: nauc_recall_at_1000_diff1
value: 11.301016973437319
- type: nauc_recall_at_1000_max
value: 13.632028573670441
- type: nauc_recall_at_100_diff1
value: 16.244420258674484
- type: nauc_recall_at_100_max
value: 5.252228595283477
- type: nauc_recall_at_10_diff1
value: 17.14009149723741
- type: nauc_recall_at_10_max
value: -8.886638909096206
- type: nauc_recall_at_1_diff1
value: 34.840276007257444
- type: nauc_recall_at_1_max
value: -11.37201527363141
- type: nauc_recall_at_20_diff1
value: 18.393774547280316
- type: nauc_recall_at_20_max
value: -5.756994115048744
- type: nauc_recall_at_3_diff1
value: 23.65687656688717
- type: nauc_recall_at_3_max
value: -11.646229125385862
- type: nauc_recall_at_5_diff1
value: 21.02934437742109
- type: nauc_recall_at_5_max
value: -9.305597108185982
- type: ndcg_at_1
value: 42.625
- type: ndcg_at_10
value: 32.005
- type: ndcg_at_100
value: 36.563
- type: ndcg_at_1000
value: 44.207
- type: ndcg_at_20
value: 31.608999999999998
- type: ndcg_at_3
value: 35.949999999999996
- type: ndcg_at_5
value: 33.375
- type: precision_at_1
value: 54.25
- type: precision_at_10
value: 25.650000000000002
- type: precision_at_100
value: 8.260000000000002
- type: precision_at_1000
value: 1.806
- type: precision_at_20
value: 18.9
- type: precision_at_3
value: 39.833
- type: precision_at_5
value: 32.7
- type: recall_at_1
value: 7.234
- type: recall_at_10
value: 20.075000000000003
- type: recall_at_100
value: 43.980999999999995
- type: recall_at_1000
value: 68.527
- type: recall_at_20
value: 26.251
- type: recall_at_3
value: 12.534999999999998
- type: recall_at_5
value: 15.121
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 40.81
- type: f1
value: 36.53895095274932
- type: f1_weighted
value: 43.09824575802351
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 45.399
- type: map_at_10
value: 59.345000000000006
- type: map_at_100
value: 59.821000000000005
- type: map_at_1000
value: 59.841
- type: map_at_20
value: 59.662000000000006
- type: map_at_3
value: 56.577
- type: map_at_5
value: 58.384
- type: mrr_at_1
value: 48.7998799879988
- type: mrr_at_10
value: 63.182490868134714
- type: mrr_at_100
value: 63.571831061553496
- type: mrr_at_1000
value: 63.58053777600791
- type: mrr_at_20
value: 63.45420825510186
- type: mrr_at_3
value: 60.45604560456103
- type: mrr_at_5
value: 62.25322532253252
- type: nauc_map_at_1000_diff1
value: 35.07017933142576
- type: nauc_map_at_1000_max
value: 8.523823797002448
- type: nauc_map_at_100_diff1
value: 35.06363318835806
- type: nauc_map_at_100_max
value: 8.522323239837585
- type: nauc_map_at_10_diff1
value: 34.99069002859329
- type: nauc_map_at_10_max
value: 8.635643511853687
- type: nauc_map_at_1_diff1
value: 38.063117939510256
- type: nauc_map_at_1_max
value: 5.897821931847972
- type: nauc_map_at_20_diff1
value: 35.02816464339912
- type: nauc_map_at_20_max
value: 8.57606618814322
- type: nauc_map_at_3_diff1
value: 34.74870593960704
- type: nauc_map_at_3_max
value: 7.7563142367550855
- type: nauc_map_at_5_diff1
value: 34.86268337627808
- type: nauc_map_at_5_max
value: 8.440880068028383
- type: nauc_mrr_at_1000_diff1
value: 38.05838366137394
- type: nauc_mrr_at_1000_max
value: 8.841793483971488
- type: nauc_mrr_at_100_diff1
value: 38.055327497620105
- type: nauc_mrr_at_100_max
value: 8.852785015905537
- type: nauc_mrr_at_10_diff1
value: 37.972785779782065
- type: nauc_mrr_at_10_max
value: 9.037378532213502
- type: nauc_mrr_at_1_diff1
value: 40.4432565446304
- type: nauc_mrr_at_1_max
value: 5.807334670577964
- type: nauc_mrr_at_20_diff1
value: 38.02767311040578
- type: nauc_mrr_at_20_max
value: 8.935949669165813
- type: nauc_mrr_at_3_diff1
value: 37.60471936912395
- type: nauc_mrr_at_3_max
value: 8.236789961860858
- type: nauc_mrr_at_5_diff1
value: 37.86352377415473
- type: nauc_mrr_at_5_max
value: 8.895540094390892
- type: nauc_ndcg_at_1000_diff1
value: 35.07160524499026
- type: nauc_ndcg_at_1000_max
value: 9.813866402912101
- type: nauc_ndcg_at_100_diff1
value: 34.92933991980568
- type: nauc_ndcg_at_100_max
value: 9.89567365562028
- type: nauc_ndcg_at_10_diff1
value: 34.529981017804104
- type: nauc_ndcg_at_10_max
value: 10.607560550422225
- type: nauc_ndcg_at_1_diff1
value: 40.4432565446304
- type: nauc_ndcg_at_1_max
value: 5.807334670577964
- type: nauc_ndcg_at_20_diff1
value: 34.668263021521994
- type: nauc_ndcg_at_20_max
value: 10.397799223138245
- type: nauc_ndcg_at_3_diff1
value: 34.25729382926051
- type: nauc_ndcg_at_3_max
value: 8.745767948993501
- type: nauc_ndcg_at_5_diff1
value: 34.33973241023773
- type: nauc_ndcg_at_5_max
value: 10.048081516024556
- type: nauc_precision_at_1000_diff1
value: -4.077783587263832
- type: nauc_precision_at_1000_max
value: 12.765822496184464
- type: nauc_precision_at_100_diff1
value: 1.4680450598592432
- type: nauc_precision_at_100_max
value: 17.44831984105488
- type: nauc_precision_at_10_diff1
value: 19.92695770531176
- type: nauc_precision_at_10_max
value: 23.914679743057352
- type: nauc_precision_at_1_diff1
value: 40.4432565446304
- type: nauc_precision_at_1_max
value: 5.807334670577964
- type: nauc_precision_at_20_diff1
value: 12.999177323343336
- type: nauc_precision_at_20_max
value: 23.540911859396033
- type: nauc_precision_at_3_diff1
value: 29.62941105307629
- type: nauc_precision_at_3_max
value: 12.866042509022865
- type: nauc_precision_at_5_diff1
value: 26.255704472502938
- type: nauc_precision_at_5_max
value: 18.77439128365061
- type: nauc_recall_at_1000_diff1
value: 8.920814764522019
- type: nauc_recall_at_1000_max
value: 17.655295496605643
- type: nauc_recall_at_100_diff1
value: 14.762238468369407
- type: nauc_recall_at_100_max
value: 17.048567752646125
- type: nauc_recall_at_10_diff1
value: 23.32325502930857
- type: nauc_recall_at_10_max
value: 19.556176492083992
- type: nauc_recall_at_1_diff1
value: 38.063117939510256
- type: nauc_recall_at_1_max
value: 5.897821931847972
- type: nauc_recall_at_20_diff1
value: 20.506042184854063
- type: nauc_recall_at_20_max
value: 20.561022468033503
- type: nauc_recall_at_3_diff1
value: 27.65947022544946
- type: nauc_recall_at_3_max
value: 10.81743699331276
- type: nauc_recall_at_5_diff1
value: 25.94551760999131
- type: nauc_recall_at_5_max
value: 15.156745563504675
- type: ndcg_at_1
value: 48.8
- type: ndcg_at_10
value: 66.459
- type: ndcg_at_100
value: 68.521
- type: ndcg_at_1000
value: 68.938
- type: ndcg_at_20
value: 67.52
- type: ndcg_at_3
value: 61.11299999999999
- type: ndcg_at_5
value: 64.21900000000001
- type: precision_at_1
value: 48.8
- type: precision_at_10
value: 9.256
- type: precision_at_100
value: 1.04
- type: precision_at_1000
value: 0.109
- type: precision_at_20
value: 4.862
- type: precision_at_3
value: 25.387999999999998
- type: precision_at_5
value: 16.933999999999997
- type: recall_at_1
value: 45.399
- type: recall_at_10
value: 84.572
- type: recall_at_100
value: 93.585
- type: recall_at_1000
value: 96.43
- type: recall_at_20
value: 88.576
- type: recall_at_3
value: 70.283
- type: recall_at_5
value: 77.804
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 10.773000000000001
- type: map_at_10
value: 18.273
- type: map_at_100
value: 19.846
- type: map_at_1000
value: 20.066
- type: map_at_20
value: 19.092000000000002
- type: map_at_3
value: 15.653
- type: map_at_5
value: 16.996
- type: mrr_at_1
value: 22.839506172839506
- type: mrr_at_10
value: 30.709264158338218
- type: mrr_at_100
value: 31.765285545264728
- type: mrr_at_1000
value: 31.84254498770477
- type: mrr_at_20
value: 31.28359047494611
- type: mrr_at_3
value: 28.34362139917697
- type: mrr_at_5
value: 29.578189300411527
- type: nauc_map_at_1000_diff1
value: 42.33696758957174
- type: nauc_map_at_1000_max
value: 22.28446732536063
- type: nauc_map_at_100_diff1
value: 42.280232367289614
- type: nauc_map_at_100_max
value: 22.193658543387336
- type: nauc_map_at_10_diff1
value: 42.86152992348606
- type: nauc_map_at_10_max
value: 21.649513921678768
- type: nauc_map_at_1_diff1
value: 50.25274550047308
- type: nauc_map_at_1_max
value: 18.793153289309025
- type: nauc_map_at_20_diff1
value: 42.68337193792793
- type: nauc_map_at_20_max
value: 21.783732998080165
- type: nauc_map_at_3_diff1
value: 44.526091901592025
- type: nauc_map_at_3_max
value: 20.44240168343812
- type: nauc_map_at_5_diff1
value: 43.40025778096801
- type: nauc_map_at_5_max
value: 21.337520847399794
- type: nauc_mrr_at_1000_diff1
value: 42.76413081015503
- type: nauc_mrr_at_1000_max
value: 25.051153181122253
- type: nauc_mrr_at_100_diff1
value: 42.726972311439724
- type: nauc_mrr_at_100_max
value: 25.041597478239442
- type: nauc_mrr_at_10_diff1
value: 43.05815490208189
- type: nauc_mrr_at_10_max
value: 25.13689635924164
- type: nauc_mrr_at_1_diff1
value: 49.40608982855475
- type: nauc_mrr_at_1_max
value: 26.84279922755957
- type: nauc_mrr_at_20_diff1
value: 42.68770796904053
- type: nauc_mrr_at_20_max
value: 25.00374130766682
- type: nauc_mrr_at_3_diff1
value: 43.56229080869875
- type: nauc_mrr_at_3_max
value: 25.00272462955036
- type: nauc_mrr_at_5_diff1
value: 42.78163485253489
- type: nauc_mrr_at_5_max
value: 24.996583555035066
- type: nauc_ndcg_at_1000_diff1
value: 39.60623109749308
- type: nauc_ndcg_at_1000_max
value: 24.945954161473963
- type: nauc_ndcg_at_100_diff1
value: 38.391977738851054
- type: nauc_ndcg_at_100_max
value: 23.1495309393186
- type: nauc_ndcg_at_10_diff1
value: 40.82447224697167
- type: nauc_ndcg_at_10_max
value: 22.103721284897222
- type: nauc_ndcg_at_1_diff1
value: 49.40608982855475
- type: nauc_ndcg_at_1_max
value: 26.84279922755957
- type: nauc_ndcg_at_20_diff1
value: 39.87655648003804
- type: nauc_ndcg_at_20_max
value: 21.863160067094732
- type: nauc_ndcg_at_3_diff1
value: 42.702330655505094
- type: nauc_ndcg_at_3_max
value: 24.30088309227799
- type: nauc_ndcg_at_5_diff1
value: 41.15335198539591
- type: nauc_ndcg_at_5_max
value: 23.383496342798235
- type: nauc_precision_at_1000_diff1
value: 5.078790711874846
- type: nauc_precision_at_1000_max
value: 28.270734693277067
- type: nauc_precision_at_100_diff1
value: 10.751006733811092
- type: nauc_precision_at_100_max
value: 28.016358575658305
- type: nauc_precision_at_10_diff1
value: 28.69051966074066
- type: nauc_precision_at_10_max
value: 29.264771382133375
- type: nauc_precision_at_1_diff1
value: 49.40608982855475
- type: nauc_precision_at_1_max
value: 26.84279922755957
- type: nauc_precision_at_20_diff1
value: 23.657472193309125
- type: nauc_precision_at_20_max
value: 27.08411359763242
- type: nauc_precision_at_3_diff1
value: 36.599109026411924
- type: nauc_precision_at_3_max
value: 28.383077203742246
- type: nauc_precision_at_5_diff1
value: 31.358430042619563
- type: nauc_precision_at_5_max
value: 28.555003400952845
- type: nauc_recall_at_1000_diff1
value: 20.25194559618304
- type: nauc_recall_at_1000_max
value: 23.710031862813118
- type: nauc_recall_at_100_diff1
value: 18.359725605438047
- type: nauc_recall_at_100_max
value: 13.823806806919805
- type: nauc_recall_at_10_diff1
value: 30.54188950640248
- type: nauc_recall_at_10_max
value: 15.290504422192791
- type: nauc_recall_at_1_diff1
value: 50.25274550047308
- type: nauc_recall_at_1_max
value: 18.793153289309025
- type: nauc_recall_at_20_diff1
value: 27.314651647568404
- type: nauc_recall_at_20_max
value: 14.088522206775039
- type: nauc_recall_at_3_diff1
value: 36.125136373927354
- type: nauc_recall_at_3_max
value: 16.778297325102113
- type: nauc_recall_at_5_diff1
value: 32.03749698394437
- type: nauc_recall_at_5_max
value: 17.620359878684805
- type: ndcg_at_1
value: 22.84
- type: ndcg_at_10
value: 24.467
- type: ndcg_at_100
value: 31.270999999999997
- type: ndcg_at_1000
value: 35.564
- type: ndcg_at_20
value: 26.871000000000002
- type: ndcg_at_3
value: 21.128
- type: ndcg_at_5
value: 22.203999999999997
- type: precision_at_1
value: 22.84
- type: precision_at_10
value: 7.114
- type: precision_at_100
value: 1.381
- type: precision_at_1000
value: 0.213
- type: precision_at_20
value: 4.552
- type: precision_at_3
value: 14.352
- type: precision_at_5
value: 10.864
- type: recall_at_1
value: 10.773000000000001
- type: recall_at_10
value: 30.564000000000004
- type: recall_at_100
value: 56.745999999999995
- type: recall_at_1000
value: 82.826
- type: recall_at_20
value: 37.844
- type: recall_at_3
value: 19.406000000000002
- type: recall_at_5
value: 23.724
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 32.046
- type: map_at_10
value: 48.443000000000005
- type: map_at_100
value: 49.389
- type: map_at_1000
value: 49.466
- type: map_at_20
value: 48.986000000000004
- type: map_at_3
value: 44.893
- type: map_at_5
value: 47.075
- type: mrr_at_1
value: 64.09182984469953
- type: mrr_at_10
value: 72.85266282970527
- type: mrr_at_100
value: 73.21185355612093
- type: mrr_at_1000
value: 73.2252657846111
- type: mrr_at_20
value: 73.10862127718183
- type: mrr_at_3
value: 71.3031735313976
- type: mrr_at_5
value: 72.34233625928402
- type: nauc_map_at_1000_diff1
value: 29.57880669891487
- type: nauc_map_at_1000_max
value: 21.845463980026476
- type: nauc_map_at_100_diff1
value: 29.55367483685356
- type: nauc_map_at_100_max
value: 21.828007798768958
- type: nauc_map_at_10_diff1
value: 29.67368554537432
- type: nauc_map_at_10_max
value: 21.849279138947868
- type: nauc_map_at_1_diff1
value: 61.13740199701338
- type: nauc_map_at_1_max
value: 32.6342175820136
- type: nauc_map_at_20_diff1
value: 29.598291599316568
- type: nauc_map_at_20_max
value: 21.862735577320557
- type: nauc_map_at_3_diff1
value: 31.22835556923922
- type: nauc_map_at_3_max
value: 22.344809372883315
- type: nauc_map_at_5_diff1
value: 30.432000722074665
- type: nauc_map_at_5_max
value: 22.27699649933424
- type: nauc_mrr_at_1000_diff1
value: 59.26794200803715
- type: nauc_mrr_at_1000_max
value: 34.33050463026508
- type: nauc_mrr_at_100_diff1
value: 59.26740246956419
- type: nauc_mrr_at_100_max
value: 34.33577087313508
- type: nauc_mrr_at_10_diff1
value: 59.13786202070478
- type: nauc_mrr_at_10_max
value: 34.377953823081384
- type: nauc_mrr_at_1_diff1
value: 61.13740199701338
- type: nauc_mrr_at_1_max
value: 32.6342175820136
- type: nauc_mrr_at_20_diff1
value: 59.22898475872048
- type: nauc_mrr_at_20_max
value: 34.34680319223408
- type: nauc_mrr_at_3_diff1
value: 59.03499635007199
- type: nauc_mrr_at_3_max
value: 34.398014446289544
- type: nauc_mrr_at_5_diff1
value: 59.20761322618965
- type: nauc_mrr_at_5_max
value: 34.42827235318949
- type: nauc_ndcg_at_1000_diff1
value: 32.64061494118113
- type: nauc_ndcg_at_1000_max
value: 23.616685748536074
- type: nauc_ndcg_at_100_diff1
value: 32.11038119247951
- type: nauc_ndcg_at_100_max
value: 23.33285928609271
- type: nauc_ndcg_at_10_diff1
value: 32.70477446409243
- type: nauc_ndcg_at_10_max
value: 23.662027117393535
- type: nauc_ndcg_at_1_diff1
value: 61.13740199701338
- type: nauc_ndcg_at_1_max
value: 32.6342175820136
- type: nauc_ndcg_at_20_diff1
value: 32.32220211811219
- type: nauc_ndcg_at_20_max
value: 23.564270159145643
- type: nauc_ndcg_at_3_diff1
value: 35.63724665178986
- type: nauc_ndcg_at_3_max
value: 24.820074757992305
- type: nauc_ndcg_at_5_diff1
value: 34.27199365493392
- type: nauc_ndcg_at_5_max
value: 24.508158825075682
- type: nauc_precision_at_1000_diff1
value: -2.430622498990411
- type: nauc_precision_at_1000_max
value: 7.822027373881609
- type: nauc_precision_at_100_diff1
value: 4.202356673527351
- type: nauc_precision_at_100_max
value: 10.321772681063146
- type: nauc_precision_at_10_diff1
value: 14.011676403321902
- type: nauc_precision_at_10_max
value: 15.666639850967512
- type: nauc_precision_at_1_diff1
value: 61.13740199701338
- type: nauc_precision_at_1_max
value: 32.6342175820136
- type: nauc_precision_at_20_diff1
value: 10.437835060510753
- type: nauc_precision_at_20_max
value: 14.10661581882921
- type: nauc_precision_at_3_diff1
value: 23.783985172773143
- type: nauc_precision_at_3_max
value: 20.590352544033866
- type: nauc_precision_at_5_diff1
value: 19.592566862830548
- type: nauc_precision_at_5_max
value: 18.88117124055341
- type: nauc_recall_at_1000_diff1
value: -2.430622498990057
- type: nauc_recall_at_1000_max
value: 7.822027373881757
- type: nauc_recall_at_100_diff1
value: 4.202356673527403
- type: nauc_recall_at_100_max
value: 10.32177268106303
- type: nauc_recall_at_10_diff1
value: 14.011676403321957
- type: nauc_recall_at_10_max
value: 15.666639850967554
- type: nauc_recall_at_1_diff1
value: 61.13740199701338
- type: nauc_recall_at_1_max
value: 32.6342175820136
- type: nauc_recall_at_20_diff1
value: 10.437835060510707
- type: nauc_recall_at_20_max
value: 14.106615818829187
- type: nauc_recall_at_3_diff1
value: 23.783985172773168
- type: nauc_recall_at_3_max
value: 20.590352544033934
- type: nauc_recall_at_5_diff1
value: 19.59256686283052
- type: nauc_recall_at_5_max
value: 18.88117124055339
- type: ndcg_at_1
value: 64.092
- type: ndcg_at_10
value: 57.964000000000006
- type: ndcg_at_100
value: 61.501
- type: ndcg_at_1000
value: 63.022
- type: ndcg_at_20
value: 59.463
- type: ndcg_at_3
value: 52.608
- type: ndcg_at_5
value: 55.577
- type: precision_at_1
value: 64.092
- type: precision_at_10
value: 12.462
- type: precision_at_100
value: 1.5230000000000001
- type: precision_at_1000
value: 0.172
- type: precision_at_20
value: 6.714
- type: precision_at_3
value: 33.657
- type: precision_at_5
value: 22.533
- type: recall_at_1
value: 32.046
- type: recall_at_10
value: 62.309000000000005
- type: recall_at_100
value: 76.13799999999999
- type: recall_at_1000
value: 86.185
- type: recall_at_20
value: 67.144
- type: recall_at_3
value: 50.486
- type: recall_at_5
value: 56.333999999999996
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 73.702
- type: ap
value: 67.55549836397681
- type: ap_weighted
value: 67.55549836397681
- type: f1
value: 73.4581895293936
- type: f1_weighted
value: 73.45818952939358
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: test
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 30.051
- type: map_at_1
value: 14.244
- type: map_at_10
value: 24.143
- type: map_at_100
value: 25.402
- type: map_at_1000
value: 25.479
- type: map_at_20
value: 24.875
- type: map_at_3
value: 20.694
- type: map_at_5
value: 22.604
- type: mrr_at_1
value: 14.584527220630372
- type: mrr_at_10
value: 24.557460090053123
- type: mrr_at_100
value: 25.785901435660147
- type: mrr_at_1000
value: 25.85709282510335
- type: mrr_at_20
value: 25.274992596418866
- type: mrr_at_3
value: 21.131805157593057
- type: mrr_at_5
value: 23.0429799426934
- type: nauc_map_at_1000_diff1
value: 27.61711087970365
- type: nauc_map_at_1000_max
value: 1.6657479941178628
- type: nauc_map_at_1000_std
value: -9.49651956936018
- type: nauc_map_at_100_diff1
value: 27.61498736358577
- type: nauc_map_at_100_max
value: 1.6634690696430845
- type: nauc_map_at_100_std
value: -9.46789097558277
- type: nauc_map_at_10_diff1
value: 27.616888705380603
- type: nauc_map_at_10_max
value: 1.4276684096575918
- type: nauc_map_at_10_std
value: -10.446820384304754
- type: nauc_map_at_1_diff1
value: 29.76931787521696
- type: nauc_map_at_1_max
value: 0.948603060998731
- type: nauc_map_at_1_std
value: -10.775704940266767
- type: nauc_map_at_20_diff1
value: 27.57600730820819
- type: nauc_map_at_20_max
value: 1.5143185235329177
- type: nauc_map_at_20_std
value: -9.849312193865744
- type: nauc_map_at_3_diff1
value: 27.890351531157577
- type: nauc_map_at_3_max
value: 1.4000607426502167
- type: nauc_map_at_3_std
value: -11.118014158060422
- type: nauc_map_at_5_diff1
value: 27.786816928992376
- type: nauc_map_at_5_max
value: 1.2637200637686197
- type: nauc_map_at_5_std
value: -10.922970086569386
- type: nauc_mrr_at_1000_diff1
value: 27.42128154832487
- type: nauc_mrr_at_1000_max
value: 1.769383613212847
- type: nauc_mrr_at_1000_std
value: -9.304600797518969
- type: nauc_mrr_at_100_diff1
value: 27.418466905238216
- type: nauc_mrr_at_100_max
value: 1.7702836453764914
- type: nauc_mrr_at_100_std
value: -9.27018903363956
- type: nauc_mrr_at_10_diff1
value: 27.43223048852499
- type: nauc_mrr_at_10_max
value: 1.5863443925517158
- type: nauc_mrr_at_10_std
value: -10.19228455560491
- type: nauc_mrr_at_1_diff1
value: 29.63894982019449
- type: nauc_mrr_at_1_max
value: 1.1350720726087482
- type: nauc_mrr_at_1_std
value: -10.706375855749798
- type: nauc_mrr_at_20_diff1
value: 27.3813401873824
- type: nauc_mrr_at_20_max
value: 1.6349061697179936
- type: nauc_mrr_at_20_std
value: -9.62511280355079
- type: nauc_mrr_at_3_diff1
value: 27.63825584292618
- type: nauc_mrr_at_3_max
value: 1.5014142622215632
- type: nauc_mrr_at_3_std
value: -10.937120645836448
- type: nauc_mrr_at_5_diff1
value: 27.65874684943374
- type: nauc_mrr_at_5_max
value: 1.3921567756597124
- type: nauc_mrr_at_5_std
value: -10.715887774339881
- type: nauc_ndcg_at_1000_diff1
value: 26.940019720932135
- type: nauc_ndcg_at_1000_max
value: 3.071589090811754
- type: nauc_ndcg_at_1000_std
value: -5.820914521338
- type: nauc_ndcg_at_100_diff1
value: 26.80295695348146
- type: nauc_ndcg_at_100_max
value: 3.064374012393309
- type: nauc_ndcg_at_100_std
value: -4.689320725729883
- type: nauc_ndcg_at_10_diff1
value: 26.73912033432779
- type: nauc_ndcg_at_10_max
value: 1.7371596861856864
- type: nauc_ndcg_at_10_std
value: -9.587955568967976
- type: nauc_ndcg_at_1_diff1
value: 29.63894982019449
- type: nauc_ndcg_at_1_max
value: 1.1350720726087482
- type: nauc_ndcg_at_1_std
value: -10.706375855749798
- type: nauc_ndcg_at_20_diff1
value: 26.554059540064955
- type: nauc_ndcg_at_20_max
value: 2.037008011734218
- type: nauc_ndcg_at_20_std
value: -7.522356479764311
- type: nauc_ndcg_at_3_diff1
value: 27.38197429348882
- type: nauc_ndcg_at_3_max
value: 1.5447259968645135
- type: nauc_ndcg_at_3_std
value: -11.056572041307833
- type: nauc_ndcg_at_5_diff1
value: 27.23078023341192
- type: nauc_ndcg_at_5_max
value: 1.3332668241078742
- type: nauc_ndcg_at_5_std
value: -10.70755059234365
- type: nauc_precision_at_1000_diff1
value: 4.824440345952768
- type: nauc_precision_at_1000_max
value: 22.501190150975695
- type: nauc_precision_at_1000_std
value: 27.01244032141851
- type: nauc_precision_at_100_diff1
value: 18.806308686259438
- type: nauc_precision_at_100_max
value: 14.0556087259749
- type: nauc_precision_at_100_std
value: 23.65979665814084
- type: nauc_precision_at_10_diff1
value: 23.631970615652996
- type: nauc_precision_at_10_max
value: 3.2279467100874113
- type: nauc_precision_at_10_std
value: -6.612111844206746
- type: nauc_precision_at_1_diff1
value: 29.63894982019449
- type: nauc_precision_at_1_max
value: 1.1350720726087482
- type: nauc_precision_at_1_std
value: -10.706375855749798
- type: nauc_precision_at_20_diff1
value: 22.13613457378927
- type: nauc_precision_at_20_max
value: 4.984490409308019
- type: nauc_precision_at_20_std
value: 1.3959896282348365
- type: nauc_precision_at_3_diff1
value: 25.924423449037278
- type: nauc_precision_at_3_max
value: 2.119600062847904
- type: nauc_precision_at_3_std
value: -10.816296974118274
- type: nauc_precision_at_5_diff1
value: 25.47042606356821
- type: nauc_precision_at_5_max
value: 1.832019713836658
- type: nauc_precision_at_5_std
value: -9.928054676627815
- type: nauc_recall_at_1000_diff1
value: 22.574618149749853
- type: nauc_recall_at_1000_max
value: 30.82526285969409
- type: nauc_recall_at_1000_std
value: 59.21512310658756
- type: nauc_recall_at_100_diff1
value: 23.54920706844819
- type: nauc_recall_at_100_max
value: 10.975217227651312
- type: nauc_recall_at_100_std
value: 24.85603771243269
- type: nauc_recall_at_10_diff1
value: 24.413494892666748
- type: nauc_recall_at_10_max
value: 2.349732649717201
- type: nauc_recall_at_10_std
value: -7.37174021438692
- type: nauc_recall_at_1_diff1
value: 29.76931787521696
- type: nauc_recall_at_1_max
value: 0.948603060998731
- type: nauc_recall_at_1_std
value: -10.775704940266767
- type: nauc_recall_at_20_diff1
value: 23.4560099128478
- type: nauc_recall_at_20_max
value: 3.399890015984125
- type: nauc_recall_at_20_std
value: 0.1065905686863526
- type: nauc_recall_at_3_diff1
value: 26.33393571726941
- type: nauc_recall_at_3_max
value: 1.7770061463264046
- type: nauc_recall_at_3_std
value: -11.030373812919407
- type: nauc_recall_at_5_diff1
value: 25.84870110945663
- type: nauc_recall_at_5_max
value: 1.368501163591071
- type: nauc_recall_at_5_std
value: -10.251669620544972
- type: ndcg_at_1
value: 14.585
- type: ndcg_at_10
value: 30.051
- type: ndcg_at_100
value: 36.429
- type: ndcg_at_1000
value: 38.501
- type: ndcg_at_20
value: 32.678
- type: ndcg_at_3
value: 22.963
- type: ndcg_at_5
value: 26.385
- type: precision_at_1
value: 14.585
- type: precision_at_10
value: 5.04
- type: precision_at_100
value: 0.826
- type: precision_at_1000
value: 0.101
- type: precision_at_20
value: 3.062
- type: precision_at_3
value: 10.0
- type: precision_at_5
value: 7.722
- type: recall_at_1
value: 14.244
- type: recall_at_10
value: 48.48
- type: recall_at_100
value: 78.652
- type: recall_at_1000
value: 94.774
- type: recall_at_20
value: 58.724
- type: recall_at_3
value: 29.106
- type: recall_at_5
value: 37.329
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.73096215230278
- type: f1
value: 89.31269053453195
- type: f1_weighted
value: 89.75118268368209
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 66.49110807113543
- type: f1
value: 51.250886916460544
- type: f1_weighted
value: 69.910921231367
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 68.86348352387357
- type: f1
value: 66.19332858716572
- type: f1_weighted
value: 68.90834063842036
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 74.48890383322124
- type: f1
value: 74.01198670144007
- type: f1_weighted
value: 74.5767171066833
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.26742368186013
- type: v_measures
value:
- 0.3010655091536935
- 0.29691302264328545
- 0.31333602296285296
- 0.3118686703571087
- 0.3066404174656012
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.971953824342002
- type: v_measures
value:
- 0.28128031641493684
- 0.2709575455939747
- 0.28058910226798894
- 0.286988068530199
- 0.27155292611128873
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: map
value: 30.919005892986945
- type: mrr
value: 31.964215230201017
- type: nAUC_map_diff1
value: 12.380227971335106
- type: nAUC_map_max
value: -20.306665699119915
- type: nAUC_mrr_diff1
value: 11.860907307359078
- type: nAUC_mrr_max
value: -14.820057982537445
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 4.9239999999999995
- type: map_at_10
value: 10.216
- type: map_at_100
value: 13.073
- type: map_at_1000
value: 14.335999999999999
- type: map_at_20
value: 11.562
- type: map_at_3
value: 7.361
- type: map_at_5
value: 8.790000000000001
- type: mrr_at_1
value: 40.55727554179567
- type: mrr_at_10
value: 48.67290775959506
- type: mrr_at_100
value: 49.39988509152788
- type: mrr_at_1000
value: 49.44995547989892
- type: mrr_at_20
value: 49.1476640818267
- type: mrr_at_3
value: 46.336429308565535
- type: mrr_at_5
value: 47.85345717234262
- type: nauc_map_at_1000_diff1
value: 26.873576120717154
- type: nauc_map_at_1000_max
value: 22.28375511136719
- type: nauc_map_at_100_diff1
value: 28.105989810331305
- type: nauc_map_at_100_max
value: 20.80280182475018
- type: nauc_map_at_10_diff1
value: 32.22012802586023
- type: nauc_map_at_10_max
value: 14.563410751855393
- type: nauc_map_at_1_diff1
value: 48.78589273340728
- type: nauc_map_at_1_max
value: 0.7100902846948914
- type: nauc_map_at_20_diff1
value: 29.749385475706614
- type: nauc_map_at_20_max
value: 17.725130767277143
- type: nauc_map_at_3_diff1
value: 42.91163831592647
- type: nauc_map_at_3_max
value: 7.949303449529328
- type: nauc_map_at_5_diff1
value: 36.37288307582431
- type: nauc_map_at_5_max
value: 10.294774281587333
- type: nauc_mrr_at_1000_diff1
value: 28.224194118245986
- type: nauc_mrr_at_1000_max
value: 35.03713736523123
- type: nauc_mrr_at_100_diff1
value: 28.239722499941884
- type: nauc_mrr_at_100_max
value: 35.08008834682332
- type: nauc_mrr_at_10_diff1
value: 28.312031722561397
- type: nauc_mrr_at_10_max
value: 35.07745441637377
- type: nauc_mrr_at_1_diff1
value: 29.71286290489225
- type: nauc_mrr_at_1_max
value: 27.07492092557332
- type: nauc_mrr_at_20_diff1
value: 28.408619888309524
- type: nauc_mrr_at_20_max
value: 35.07056834593783
- type: nauc_mrr_at_3_diff1
value: 28.57209508947814
- type: nauc_mrr_at_3_max
value: 32.824180760173896
- type: nauc_mrr_at_5_diff1
value: 28.236082992043393
- type: nauc_mrr_at_5_max
value: 34.17372569423924
- type: nauc_ndcg_at_1000_diff1
value: 24.083700969367612
- type: nauc_ndcg_at_1000_max
value: 38.883846498536116
- type: nauc_ndcg_at_100_diff1
value: 23.312730110282526
- type: nauc_ndcg_at_100_max
value: 32.64936241784008
- type: nauc_ndcg_at_10_diff1
value: 17.975398707754817
- type: nauc_ndcg_at_10_max
value: 32.32412505213287
- type: nauc_ndcg_at_1_diff1
value: 30.756195441367673
- type: nauc_ndcg_at_1_max
value: 26.483443465985328
- type: nauc_ndcg_at_20_diff1
value: 18.936710159355073
- type: nauc_ndcg_at_20_max
value: 31.338021731338316
- type: nauc_ndcg_at_3_diff1
value: 22.895979777747623
- type: nauc_ndcg_at_3_max
value: 32.17933652323659
- type: nauc_ndcg_at_5_diff1
value: 19.852961142954506
- type: nauc_ndcg_at_5_max
value: 32.56301733572076
- type: nauc_precision_at_1000_diff1
value: -12.569744637564826
- type: nauc_precision_at_1000_max
value: 14.067171968274472
- type: nauc_precision_at_100_diff1
value: -8.452640794750774
- type: nauc_precision_at_100_max
value: 26.52425208852308
- type: nauc_precision_at_10_diff1
value: -0.8599396198058924
- type: nauc_precision_at_10_max
value: 36.79898093749965
- type: nauc_precision_at_1_diff1
value: 30.53565353379064
- type: nauc_precision_at_1_max
value: 27.150932557011842
- type: nauc_precision_at_20_diff1
value: -4.190746979665414
- type: nauc_precision_at_20_max
value: 35.90857451601526
- type: nauc_precision_at_3_diff1
value: 12.548153913459656
- type: nauc_precision_at_3_max
value: 35.753894439704055
- type: nauc_precision_at_5_diff1
value: 5.476630825300621
- type: nauc_precision_at_5_max
value: 36.94019333022866
- type: nauc_recall_at_1000_diff1
value: 15.743509429414217
- type: nauc_recall_at_1000_max
value: 19.44531544138
- type: nauc_recall_at_100_diff1
value: 18.385119061958157
- type: nauc_recall_at_100_max
value: 19.1318751995873
- type: nauc_recall_at_10_diff1
value: 25.482096811308676
- type: nauc_recall_at_10_max
value: 14.006190865424864
- type: nauc_recall_at_1_diff1
value: 48.78589273340728
- type: nauc_recall_at_1_max
value: 0.7100902846948914
- type: nauc_recall_at_20_diff1
value: 22.76078199362388
- type: nauc_recall_at_20_max
value: 17.126864200524057
- type: nauc_recall_at_3_diff1
value: 39.93189765909178
- type: nauc_recall_at_3_max
value: 9.276495447517293
- type: nauc_recall_at_5_diff1
value: 28.17119993582467
- type: nauc_recall_at_5_max
value: 9.757053939301784
- type: ndcg_at_1
value: 38.7
- type: ndcg_at_10
value: 28.942
- type: ndcg_at_100
value: 27.346999999999998
- type: ndcg_at_1000
value: 36.216
- type: ndcg_at_20
value: 27.506999999999998
- type: ndcg_at_3
value: 33.335
- type: ndcg_at_5
value: 31.541999999999998
- type: precision_at_1
value: 40.248
- type: precision_at_10
value: 21.455
- type: precision_at_100
value: 7.015000000000001
- type: precision_at_1000
value: 1.9709999999999999
- type: precision_at_20
value: 16.471
- type: precision_at_3
value: 30.857
- type: precision_at_5
value: 26.811
- type: recall_at_1
value: 4.9239999999999995
- type: recall_at_10
value: 13.724
- type: recall_at_100
value: 28.450999999999997
- type: recall_at_1000
value: 60.136
- type: recall_at_20
value: 18.013
- type: recall_at_3
value: 7.954999999999999
- type: recall_at_5
value: 10.700999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 21.246000000000002
- type: map_at_10
value: 34.107
- type: map_at_100
value: 35.43
- type: map_at_1000
value: 35.483
- type: map_at_20
value: 34.945
- type: map_at_3
value: 30.070000000000004
- type: map_at_5
value: 32.25
- type: mrr_at_1
value: 24.10196987253766
- type: mrr_at_10
value: 36.531318398352
- type: mrr_at_100
value: 37.59236775235497
- type: mrr_at_1000
value: 37.630099883433154
- type: mrr_at_20
value: 37.20931733276279
- type: mrr_at_3
value: 32.91328698339132
- type: mrr_at_5
value: 34.92516415604491
- type: nauc_map_at_1000_diff1
value: 20.872737281787636
- type: nauc_map_at_1000_max
value: 16.624364326260896
- type: nauc_map_at_100_diff1
value: 20.878142367328813
- type: nauc_map_at_100_max
value: 16.643468154696926
- type: nauc_map_at_10_diff1
value: 20.807793402274534
- type: nauc_map_at_10_max
value: 16.39391387269205
- type: nauc_map_at_1_diff1
value: 22.35812341861645
- type: nauc_map_at_1_max
value: 11.615838197259766
- type: nauc_map_at_20_diff1
value: 20.893013757323047
- type: nauc_map_at_20_max
value: 16.675046191798433
- type: nauc_map_at_3_diff1
value: 20.05521274346964
- type: nauc_map_at_3_max
value: 13.969959601269148
- type: nauc_map_at_5_diff1
value: 20.625293595408642
- type: nauc_map_at_5_max
value: 15.630481595302918
- type: nauc_mrr_at_1000_diff1
value: 20.659075334032188
- type: nauc_mrr_at_1000_max
value: 17.1077266798649
- type: nauc_mrr_at_100_diff1
value: 20.659592764012615
- type: nauc_mrr_at_100_max
value: 17.12673405006388
- type: nauc_mrr_at_10_diff1
value: 20.6492334539065
- type: nauc_mrr_at_10_max
value: 17.139758338338574
- type: nauc_mrr_at_1_diff1
value: 21.959789955443817
- type: nauc_mrr_at_1_max
value: 13.311351245662395
- type: nauc_mrr_at_20_diff1
value: 20.697135833887096
- type: nauc_mrr_at_20_max
value: 17.174901738327268
- type: nauc_mrr_at_3_diff1
value: 20.012890126078148
- type: nauc_mrr_at_3_max
value: 15.325749640509228
- type: nauc_mrr_at_5_diff1
value: 20.438050294840547
- type: nauc_mrr_at_5_max
value: 16.56107433490657
- type: nauc_ndcg_at_1000_diff1
value: 20.981193766001212
- type: nauc_ndcg_at_1000_max
value: 19.366882624001466
- type: nauc_ndcg_at_100_diff1
value: 21.07151595070923
- type: nauc_ndcg_at_100_max
value: 19.969093104531108
- type: nauc_ndcg_at_10_diff1
value: 20.824455077933653
- type: nauc_ndcg_at_10_max
value: 19.215675460656907
- type: nauc_ndcg_at_1_diff1
value: 22.064098120682292
- type: nauc_ndcg_at_1_max
value: 13.411137146530983
- type: nauc_ndcg_at_20_diff1
value: 21.12343657664599
- type: nauc_ndcg_at_20_max
value: 20.04689967321189
- type: nauc_ndcg_at_3_diff1
value: 19.470309201418857
- type: nauc_ndcg_at_3_max
value: 14.848503224176909
- type: nauc_ndcg_at_5_diff1
value: 20.32521541385147
- type: nauc_ndcg_at_5_max
value: 17.48824868961743
- type: nauc_precision_at_1000_diff1
value: 0.4660953834541917
- type: nauc_precision_at_1000_max
value: 14.735755093338893
- type: nauc_precision_at_100_diff1
value: 7.579249137389521
- type: nauc_precision_at_100_max
value: 23.48086608082409
- type: nauc_precision_at_10_diff1
value: 15.621524664818134
- type: nauc_precision_at_10_max
value: 26.16669034759615
- type: nauc_precision_at_1_diff1
value: 22.064098120682292
- type: nauc_precision_at_1_max
value: 13.411137146530983
- type: nauc_precision_at_20_diff1
value: 13.58615876770919
- type: nauc_precision_at_20_max
value: 26.806761446925364
- type: nauc_precision_at_3_diff1
value: 16.500214986231953
- type: nauc_precision_at_3_max
value: 18.649494923088263
- type: nauc_precision_at_5_diff1
value: 17.307374618712128
- type: nauc_precision_at_5_max
value: 23.444839731139965
- type: nauc_recall_at_1000_diff1
value: 28.75547954061722
- type: nauc_recall_at_1000_max
value: 62.409320816680015
- type: nauc_recall_at_100_diff1
value: 23.43814017912217
- type: nauc_recall_at_100_max
value: 42.499893768353374
- type: nauc_recall_at_10_diff1
value: 20.535131644031498
- type: nauc_recall_at_10_max
value: 26.527673119431896
- type: nauc_recall_at_1_diff1
value: 22.35812341861645
- type: nauc_recall_at_1_max
value: 11.615838197259766
- type: nauc_recall_at_20_diff1
value: 21.994120461812543
- type: nauc_recall_at_20_max
value: 31.819936351026307
- type: nauc_recall_at_3_diff1
value: 17.432747909860975
- type: nauc_recall_at_3_max
value: 15.382311079169869
- type: nauc_recall_at_5_diff1
value: 19.13496828564786
- type: nauc_recall_at_5_max
value: 21.081897544526708
- type: ndcg_at_1
value: 24.073
- type: ndcg_at_10
value: 41.323
- type: ndcg_at_100
value: 47.188
- type: ndcg_at_1000
value: 48.424
- type: ndcg_at_20
value: 44.084
- type: ndcg_at_3
value: 33.427
- type: ndcg_at_5
value: 37.171
- type: precision_at_1
value: 24.073
- type: precision_at_10
value: 7.242
- type: precision_at_100
value: 1.051
- type: precision_at_1000
value: 0.117
- type: precision_at_20
value: 4.2700000000000005
- type: precision_at_3
value: 15.498000000000001
- type: precision_at_5
value: 11.431
- type: recall_at_1
value: 21.246000000000002
- type: recall_at_10
value: 61.102000000000004
- type: recall_at_100
value: 87.08500000000001
- type: recall_at_1000
value: 96.222
- type: recall_at_20
value: 71.372
- type: recall_at_3
value: 40.361000000000004
- type: recall_at_5
value: 49.044
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 68.285
- type: map_at_10
value: 82.106
- type: map_at_100
value: 82.76599999999999
- type: map_at_1000
value: 82.788
- type: map_at_20
value: 82.529
- type: map_at_3
value: 79.108
- type: map_at_5
value: 80.964
- type: mrr_at_1
value: 78.67
- type: mrr_at_10
value: 85.4671111111108
- type: mrr_at_100
value: 85.59335571351787
- type: mrr_at_1000
value: 85.59536983332889
- type: mrr_at_20
value: 85.55846883663256
- type: mrr_at_3
value: 84.39999999999962
- type: mrr_at_5
value: 85.11249999999953
- type: nauc_map_at_1000_diff1
value: 75.5071991426707
- type: nauc_map_at_1000_max
value: 33.24884628979125
- type: nauc_map_at_100_diff1
value: 75.51606789897293
- type: nauc_map_at_100_max
value: 33.237678419609715
- type: nauc_map_at_10_diff1
value: 75.63941045488615
- type: nauc_map_at_10_max
value: 32.914787531889694
- type: nauc_map_at_1_diff1
value: 78.53182147965822
- type: nauc_map_at_1_max
value: 24.631838635071222
- type: nauc_map_at_20_diff1
value: 75.55246990673865
- type: nauc_map_at_20_max
value: 33.12406999050574
- type: nauc_map_at_3_diff1
value: 75.76122624224449
- type: nauc_map_at_3_max
value: 30.56135184566114
- type: nauc_map_at_5_diff1
value: 75.62760573601093
- type: nauc_map_at_5_max
value: 32.014157139666985
- type: nauc_mrr_at_1000_diff1
value: 76.37148849763105
- type: nauc_mrr_at_1000_max
value: 35.935665230883934
- type: nauc_mrr_at_100_diff1
value: 76.37094038633705
- type: nauc_mrr_at_100_max
value: 35.94012231045831
- type: nauc_mrr_at_10_diff1
value: 76.35647457628434
- type: nauc_mrr_at_10_max
value: 36.01811322984862
- type: nauc_mrr_at_1_diff1
value: 77.24309585056221
- type: nauc_mrr_at_1_max
value: 34.48519876828825
- type: nauc_mrr_at_20_diff1
value: 76.36670040011074
- type: nauc_mrr_at_20_max
value: 35.99210482612602
- type: nauc_mrr_at_3_diff1
value: 76.09424554868272
- type: nauc_mrr_at_3_max
value: 35.609777385861044
- type: nauc_mrr_at_5_diff1
value: 76.25068640961776
- type: nauc_mrr_at_5_max
value: 35.86165128556917
- type: nauc_ndcg_at_1000_diff1
value: 75.46284119099505
- type: nauc_ndcg_at_1000_max
value: 34.897248065013535
- type: nauc_ndcg_at_100_diff1
value: 75.55417772660796
- type: nauc_ndcg_at_100_max
value: 34.9921360207961
- type: nauc_ndcg_at_10_diff1
value: 75.48987547153091
- type: nauc_ndcg_at_10_max
value: 34.52070770288654
- type: nauc_ndcg_at_1_diff1
value: 77.2205910169754
- type: nauc_ndcg_at_1_max
value: 34.54544979283322
- type: nauc_ndcg_at_20_diff1
value: 75.52495648309022
- type: nauc_ndcg_at_20_max
value: 34.75327053329915
- type: nauc_ndcg_at_3_diff1
value: 74.76800490923522
- type: nauc_ndcg_at_3_max
value: 32.77064919163132
- type: nauc_ndcg_at_5_diff1
value: 75.05016397357261
- type: nauc_ndcg_at_5_max
value: 33.50761269482319
- type: nauc_precision_at_1000_diff1
value: -41.81465497084401
- type: nauc_precision_at_1000_max
value: -4.443935842899313
- type: nauc_precision_at_100_diff1
value: -40.80948937001563
- type: nauc_precision_at_100_max
value: -3.403706458833991
- type: nauc_precision_at_10_diff1
value: -33.369656218745945
- type: nauc_precision_at_10_max
value: 2.2202781020992255
- type: nauc_precision_at_1_diff1
value: 77.2205910169754
- type: nauc_precision_at_1_max
value: 34.54544979283322
- type: nauc_precision_at_20_diff1
value: -37.568976386400706
- type: nauc_precision_at_20_max
value: -0.6469605151975117
- type: nauc_precision_at_3_diff1
value: -10.217358622390567
- type: nauc_precision_at_3_max
value: 11.83919267748663
- type: nauc_precision_at_5_diff1
value: -24.481543671948373
- type: nauc_precision_at_5_max
value: 6.576825503675188
- type: nauc_recall_at_1000_diff1
value: 67.36524460905785
- type: nauc_recall_at_1000_max
value: 53.720724394976585
- type: nauc_recall_at_100_diff1
value: 75.23538841054406
- type: nauc_recall_at_100_max
value: 47.2723927504464
- type: nauc_recall_at_10_diff1
value: 69.95500109263831
- type: nauc_recall_at_10_max
value: 34.583322421413996
- type: nauc_recall_at_1_diff1
value: 78.53182147965822
- type: nauc_recall_at_1_max
value: 24.631838635071222
- type: nauc_recall_at_20_diff1
value: 70.32541323559573
- type: nauc_recall_at_20_max
value: 36.98517552839284
- type: nauc_recall_at_3_diff1
value: 71.477694594835
- type: nauc_recall_at_3_max
value: 27.960647983463073
- type: nauc_recall_at_5_diff1
value: 70.17565198935641
- type: nauc_recall_at_5_max
value: 30.104013734994844
- type: ndcg_at_1
value: 78.68
- type: ndcg_at_10
value: 86.244
- type: ndcg_at_100
value: 87.651
- type: ndcg_at_1000
value: 87.816
- type: ndcg_at_20
value: 86.961
- type: ndcg_at_3
value: 83.152
- type: ndcg_at_5
value: 84.819
- type: precision_at_1
value: 78.68
- type: precision_at_10
value: 13.123000000000001
- type: precision_at_100
value: 1.514
- type: precision_at_1000
value: 0.156
- type: precision_at_20
value: 6.979
- type: precision_at_3
value: 36.353
- type: precision_at_5
value: 23.977999999999998
- type: recall_at_1
value: 68.285
- type: recall_at_10
value: 94.16799999999999
- type: recall_at_100
value: 99.116
- type: recall_at_1000
value: 99.944
- type: recall_at_20
value: 96.494
- type: recall_at_3
value: 85.31
- type: recall_at_5
value: 89.993
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 47.39326061426851
- type: v_measures
value:
- 0.47153549737072414
- 0.5113188409132627
- 0.4256578555733507
- 0.45547697557001166
- 0.44673621430540467
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 58.382849910561305
- type: v_measures
value:
- 0.638508286047501
- 0.6201813511333097
- 0.6412218572317954
- 0.34538859648148
- 0.6372584092921234
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 3.5680000000000005
- type: map_at_10
value: 9.165
- type: map_at_100
value: 10.928
- type: map_at_1000
value: 11.187
- type: map_at_20
value: 10.030999999999999
- type: map_at_3
value: 6.598
- type: map_at_5
value: 7.746
- type: mrr_at_1
value: 17.5
- type: mrr_at_10
value: 28.15242063492062
- type: mrr_at_100
value: 29.148090545385042
- type: mrr_at_1000
value: 29.22586383082865
- type: mrr_at_20
value: 28.716339503289944
- type: mrr_at_3
value: 24.9666666666667
- type: mrr_at_5
value: 26.496666666666684
- type: nauc_map_at_1000_diff1
value: 18.97303632967232
- type: nauc_map_at_1000_max
value: 26.99578750624317
- type: nauc_map_at_100_diff1
value: 19.03406677193612
- type: nauc_map_at_100_max
value: 26.869362658515016
- type: nauc_map_at_10_diff1
value: 18.057667997990386
- type: nauc_map_at_10_max
value: 25.309052871533634
- type: nauc_map_at_1_diff1
value: 19.012090704165505
- type: nauc_map_at_1_max
value: 17.258809318287167
- type: nauc_map_at_20_diff1
value: 18.941090010273122
- type: nauc_map_at_20_max
value: 26.333042449319226
- type: nauc_map_at_3_diff1
value: 16.710501604799592
- type: nauc_map_at_3_max
value: 21.31218718265248
- type: nauc_map_at_5_diff1
value: 16.56134390513593
- type: nauc_map_at_5_max
value: 22.826974292312546
- type: nauc_mrr_at_1000_diff1
value: 16.363889874600694
- type: nauc_mrr_at_1000_max
value: 20.518910454040395
- type: nauc_mrr_at_100_diff1
value: 16.351792727972825
- type: nauc_mrr_at_100_max
value: 20.51605975440402
- type: nauc_mrr_at_10_diff1
value: 16.353234548491002
- type: nauc_mrr_at_10_max
value: 20.3474303123765
- type: nauc_mrr_at_1_diff1
value: 18.72320588103456
- type: nauc_mrr_at_1_max
value: 17.31659868214623
- type: nauc_mrr_at_20_diff1
value: 16.349503308662584
- type: nauc_mrr_at_20_max
value: 20.571279610990683
- type: nauc_mrr_at_3_diff1
value: 16.61433823095321
- type: nauc_mrr_at_3_max
value: 19.8671374514683
- type: nauc_mrr_at_5_diff1
value: 16.657607225925013
- type: nauc_mrr_at_5_max
value: 20.485690382244712
- type: nauc_ndcg_at_1000_diff1
value: 17.216527125545124
- type: nauc_ndcg_at_1000_max
value: 29.67323723253682
- type: nauc_ndcg_at_100_diff1
value: 17.920363114583992
- type: nauc_ndcg_at_100_max
value: 28.74219286431791
- type: nauc_ndcg_at_10_diff1
value: 17.4262322341026
- type: nauc_ndcg_at_10_max
value: 25.314398482777406
- type: nauc_ndcg_at_1_diff1
value: 18.72320588103456
- type: nauc_ndcg_at_1_max
value: 17.31659868214623
- type: nauc_ndcg_at_20_diff1
value: 18.49350721003082
- type: nauc_ndcg_at_20_max
value: 26.95660628845422
- type: nauc_ndcg_at_3_diff1
value: 16.388721576110076
- type: nauc_ndcg_at_3_max
value: 21.574925593659326
- type: nauc_ndcg_at_5_diff1
value: 16.62472439103214
- type: nauc_ndcg_at_5_max
value: 23.186257022779994
- type: nauc_precision_at_1000_diff1
value: 7.882444572522718
- type: nauc_precision_at_1000_max
value: 29.389796806861163
- type: nauc_precision_at_100_diff1
value: 13.9186095734099
- type: nauc_precision_at_100_max
value: 30.35346461874792
- type: nauc_precision_at_10_diff1
value: 15.858687077827474
- type: nauc_precision_at_10_max
value: 26.884411423943906
- type: nauc_precision_at_1_diff1
value: 18.72320588103456
- type: nauc_precision_at_1_max
value: 17.31659868214623
- type: nauc_precision_at_20_diff1
value: 17.397174842486937
- type: nauc_precision_at_20_max
value: 28.48509998553517
- type: nauc_precision_at_3_diff1
value: 15.910758722664974
- type: nauc_precision_at_3_max
value: 23.37753724707492
- type: nauc_precision_at_5_diff1
value: 15.480650294833314
- type: nauc_precision_at_5_max
value: 24.92100239632834
- type: nauc_recall_at_1000_diff1
value: 8.568163684580515
- type: nauc_recall_at_1000_max
value: 29.761661131284278
- type: nauc_recall_at_100_diff1
value: 14.139732606832828
- type: nauc_recall_at_100_max
value: 30.30928539057988
- type: nauc_recall_at_10_diff1
value: 16.0957814746088
- type: nauc_recall_at_10_max
value: 26.730370480937783
- type: nauc_recall_at_1_diff1
value: 19.012090704165505
- type: nauc_recall_at_1_max
value: 17.258809318287167
- type: nauc_recall_at_20_diff1
value: 17.58458055089181
- type: nauc_recall_at_20_max
value: 28.329240158930897
- type: nauc_recall_at_3_diff1
value: 16.11861072893215
- type: nauc_recall_at_3_max
value: 23.34743857534646
- type: nauc_recall_at_5_diff1
value: 15.659970648558035
- type: nauc_recall_at_5_max
value: 24.916484416681683
- type: ndcg_at_1
value: 17.5
- type: ndcg_at_10
value: 16.203
- type: ndcg_at_100
value: 23.311
- type: ndcg_at_1000
value: 28.476000000000003
- type: ndcg_at_20
value: 18.614
- type: ndcg_at_3
value: 15.246
- type: ndcg_at_5
value: 13.142000000000001
- type: precision_at_1
value: 17.5
- type: precision_at_10
value: 8.61
- type: precision_at_100
value: 1.8929999999999998
- type: precision_at_1000
value: 0.314
- type: precision_at_20
value: 5.695
- type: precision_at_3
value: 14.7
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.5680000000000005
- type: recall_at_10
value: 17.497
- type: recall_at_100
value: 38.377
- type: recall_at_1000
value: 63.858000000000004
- type: recall_at_20
value: 23.122
- type: recall_at_3
value: 8.948
- type: recall_at_5
value: 11.858
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 83.0786740980213
- type: cos_sim_spearman
value: 74.64910820402831
- type: euclidean_pearson
value: 79.40680658618808
- type: euclidean_spearman
value: 74.04786370197291
- type: manhattan_pearson
value: 79.30290796130608
- type: manhattan_spearman
value: 73.86543081865257
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.14143764866938
- type: cos_sim_spearman
value: 79.39117869636218
- type: euclidean_pearson
value: 82.27893672472992
- type: euclidean_spearman
value: 78.12857266398304
- type: manhattan_pearson
value: 82.40958626880706
- type: manhattan_spearman
value: 78.2460736745845
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.98565298864834
- type: cos_sim_spearman
value: 85.3226077419183
- type: euclidean_pearson
value: 83.36095201234602
- type: euclidean_spearman
value: 83.44580751011605
- type: manhattan_pearson
value: 83.26944531709971
- type: manhattan_spearman
value: 83.3511641574103
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 86.15642283009589
- type: cos_sim_spearman
value: 83.89978896960656
- type: euclidean_pearson
value: 85.01657605766617
- type: euclidean_spearman
value: 82.70615194483753
- type: manhattan_pearson
value: 84.82154011079453
- type: manhattan_spearman
value: 82.61620436539884
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.7730685270548
- type: cos_sim_spearman
value: 88.46744045180212
- type: euclidean_pearson
value: 87.11846600678471
- type: euclidean_spearman
value: 87.32502541228249
- type: manhattan_pearson
value: 87.06217303693649
- type: manhattan_spearman
value: 87.24696449513658
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.0653949018384
- type: cos_sim_spearman
value: 84.43898725124001
- type: euclidean_pearson
value: 83.46057253146975
- type: euclidean_spearman
value: 83.70938571051141
- type: manhattan_pearson
value: 83.48079890307652
- type: manhattan_spearman
value: 83.75548841452152
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cos_sim_pearson
value: 85.45225298379407
- type: cos_sim_spearman
value: 85.76725038940407
- type: euclidean_pearson
value: 85.9615450336946
- type: euclidean_spearman
value: 85.48341197609108
- type: manhattan_pearson
value: 85.74837479284034
- type: manhattan_spearman
value: 85.19050180417275
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cos_sim_pearson
value: 66.72129983991873
- type: cos_sim_spearman
value: 67.23743199464064
- type: euclidean_pearson
value: 68.41402075343164
- type: euclidean_spearman
value: 67.96307375904688
- type: manhattan_pearson
value: 68.40814603490281
- type: manhattan_spearman
value: 67.78239579617318
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.17592091160849
- type: cos_sim_spearman
value: 86.0757276289371
- type: euclidean_pearson
value: 85.24314028679827
- type: euclidean_spearman
value: 84.79227270552205
- type: manhattan_pearson
value: 85.15711414880685
- type: manhattan_spearman
value: 84.68939283251983
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.61113471166244
- type: mrr
value: 93.432848923045
- type: nAUC_map_diff1
value: 5.468214413465522
- type: nAUC_map_max
value: 53.344699872043364
- type: nAUC_mrr_diff1
value: 50.8786565680291
- type: nAUC_mrr_max
value: 79.73153373046732
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 46.694
- type: map_at_10
value: 58.492999999999995
- type: map_at_100
value: 59.079
- type: map_at_1000
value: 59.114999999999995
- type: map_at_20
value: 58.784000000000006
- type: map_at_3
value: 56.091
- type: map_at_5
value: 57.023999999999994
- type: mrr_at_1
value: 49.333333333333336
- type: mrr_at_10
value: 59.850132275132296
- type: mrr_at_100
value: 60.31782622597538
- type: mrr_at_1000
value: 60.34922440201215
- type: mrr_at_20
value: 60.08416454832866
- type: mrr_at_3
value: 58.05555555555557
- type: mrr_at_5
value: 58.67222222222224
- type: nauc_map_at_1000_diff1
value: 67.76127454103812
- type: nauc_map_at_1000_max
value: 42.06391105197536
- type: nauc_map_at_100_diff1
value: 67.73734626481158
- type: nauc_map_at_100_max
value: 42.07013722993752
- type: nauc_map_at_10_diff1
value: 67.75019487037416
- type: nauc_map_at_10_max
value: 42.08004179578344
- type: nauc_map_at_1_diff1
value: 73.16882642657764
- type: nauc_map_at_1_max
value: 38.22765895246309
- type: nauc_map_at_20_diff1
value: 67.71028360631355
- type: nauc_map_at_20_max
value: 42.182021960109665
- type: nauc_map_at_3_diff1
value: 67.62369130951392
- type: nauc_map_at_3_max
value: 39.910755718969696
- type: nauc_map_at_5_diff1
value: 67.66911636315015
- type: nauc_map_at_5_max
value: 40.38236382755538
- type: nauc_mrr_at_1000_diff1
value: 66.06803763645875
- type: nauc_mrr_at_1000_max
value: 42.86556398916693
- type: nauc_mrr_at_100_diff1
value: 66.04424547602991
- type: nauc_mrr_at_100_max
value: 42.867192517898935
- type: nauc_mrr_at_10_diff1
value: 65.9181187541585
- type: nauc_mrr_at_10_max
value: 43.00997791733552
- type: nauc_mrr_at_1_diff1
value: 71.14402949361032
- type: nauc_mrr_at_1_max
value: 40.41989400733797
- type: nauc_mrr_at_20_diff1
value: 65.96893983596155
- type: nauc_mrr_at_20_max
value: 42.96939035490266
- type: nauc_mrr_at_3_diff1
value: 65.61751418820666
- type: nauc_mrr_at_3_max
value: 41.73632436886939
- type: nauc_mrr_at_5_diff1
value: 65.93649980807021
- type: nauc_mrr_at_5_max
value: 41.99687107195354
- type: nauc_ndcg_at_1000_diff1
value: 66.14801590849353
- type: nauc_ndcg_at_1000_max
value: 43.70286520140021
- type: nauc_ndcg_at_100_diff1
value: 65.57206500474688
- type: nauc_ndcg_at_100_max
value: 43.804634724756234
- type: nauc_ndcg_at_10_diff1
value: 65.58658179189969
- type: nauc_ndcg_at_10_max
value: 44.605601186017815
- type: nauc_ndcg_at_1_diff1
value: 71.14402949361032
- type: nauc_ndcg_at_1_max
value: 40.41989400733797
- type: nauc_ndcg_at_20_diff1
value: 65.52436059710848
- type: nauc_ndcg_at_20_max
value: 44.80884075855281
- type: nauc_ndcg_at_3_diff1
value: 65.33560750072314
- type: nauc_ndcg_at_3_max
value: 41.02191665715624
- type: nauc_ndcg_at_5_diff1
value: 65.49156588896797
- type: nauc_ndcg_at_5_max
value: 41.193628278772906
- type: nauc_precision_at_1000_diff1
value: -21.271717431265248
- type: nauc_precision_at_1000_max
value: 14.880187641241479
- type: nauc_precision_at_100_diff1
value: -6.170679294185874
- type: nauc_precision_at_100_max
value: 23.392807344666835
- type: nauc_precision_at_10_diff1
value: 24.15372806591396
- type: nauc_precision_at_10_max
value: 42.122189619323315
- type: nauc_precision_at_1_diff1
value: 71.14402949361032
- type: nauc_precision_at_1_max
value: 40.41989400733797
- type: nauc_precision_at_20_diff1
value: 15.788476578628993
- type: nauc_precision_at_20_max
value: 39.31283062678818
- type: nauc_precision_at_3_diff1
value: 45.48749226553521
- type: nauc_precision_at_3_max
value: 38.4930807232584
- type: nauc_precision_at_5_diff1
value: 38.55379599441077
- type: nauc_precision_at_5_max
value: 36.431299487657185
- type: nauc_recall_at_1000_diff1
value: 45.004668534080174
- type: nauc_recall_at_1000_max
value: 80.78120136943592
- type: nauc_recall_at_100_diff1
value: 47.77911164465763
- type: nauc_recall_at_100_max
value: 51.29449629314065
- type: nauc_recall_at_10_diff1
value: 57.71614029345987
- type: nauc_recall_at_10_max
value: 53.908934707903775
- type: nauc_recall_at_1_diff1
value: 73.16882642657764
- type: nauc_recall_at_1_max
value: 38.22765895246309
- type: nauc_recall_at_20_diff1
value: 56.143181435044355
- type: nauc_recall_at_20_max
value: 56.12210887724124
- type: nauc_recall_at_3_diff1
value: 58.947466694908826
- type: nauc_recall_at_3_max
value: 40.205765050955286
- type: nauc_recall_at_5_diff1
value: 58.72258574569608
- type: nauc_recall_at_5_max
value: 40.857639009739245
- type: ndcg_at_1
value: 49.333
- type: ndcg_at_10
value: 63.966
- type: ndcg_at_100
value: 66.808
- type: ndcg_at_1000
value: 67.62700000000001
- type: ndcg_at_20
value: 64.92
- type: ndcg_at_3
value: 59.496
- type: ndcg_at_5
value: 60.743
- type: precision_at_1
value: 49.333
- type: precision_at_10
value: 8.866999999999999
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 4.683
- type: precision_at_3
value: 24.0
- type: precision_at_5
value: 15.333
- type: recall_at_1
value: 46.694
- type: recall_at_10
value: 79.5
- type: recall_at_100
value: 92.767
- type: recall_at_1000
value: 99.0
- type: recall_at_20
value: 82.956
- type: recall_at_3
value: 66.783
- type: recall_at_5
value: 69.906
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.78118811881188
- type: cos_sim_ap
value: 94.38499172210743
- type: cos_sim_f1
value: 88.99803536345776
- type: cos_sim_precision
value: 87.45173745173746
- type: cos_sim_recall
value: 90.60000000000001
- type: dot_accuracy
value: 99.68118811881187
- type: dot_ap
value: 88.59372518155831
- type: dot_f1
value: 83.45323741007195
- type: dot_precision
value: 85.83509513742071
- type: dot_recall
value: 81.2
- type: euclidean_accuracy
value: 99.78019801980199
- type: euclidean_ap
value: 94.41961507812081
- type: euclidean_f1
value: 88.91098955743412
- type: euclidean_precision
value: 88.4272997032641
- type: euclidean_recall
value: 89.4
- type: manhattan_accuracy
value: 99.78118811881188
- type: manhattan_ap
value: 94.53929097513269
- type: manhattan_f1
value: 88.93280632411069
- type: manhattan_precision
value: 87.890625
- type: manhattan_recall
value: 90.0
- type: max_accuracy
value: 99.78118811881188
- type: max_ap
value: 94.53929097513269
- type: max_f1
value: 88.99803536345776
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 52.46659916482748
- type: v_measures
value:
- 0.5533520743369753
- 0.5226026021922323
- 0.4443153697300708
- 0.5442847332820114
- 0.5574991389583961
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.6334285506705
- type: v_measures
value:
- 0.3276318900057692
- 0.3240387341697168
- 0.32272003147893974
- 0.32313817118726607
- 0.3156113464382597
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 47.16067733309403
- type: mrr
value: 47.8574611662847
- type: nAUC_map_diff1
value: 32.52594575795374
- type: nAUC_map_max
value: 14.426033057319177
- type: nAUC_mrr_diff1
value: 32.717518660141344
- type: nAUC_mrr_max
value: 15.511520995680103
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.510117786456014
- type: cos_sim_spearman
value: 29.2255704281364
- type: dot_pearson
value: 29.920367312494868
- type: dot_spearman
value: 29.70675041719688
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.153
- type: map_at_10
value: 1.084
- type: map_at_100
value: 5.065
- type: map_at_1000
value: 14.255999999999998
- type: map_at_20
value: 1.7739999999999998
- type: map_at_3
value: 0.40299999999999997
- type: map_at_5
value: 0.63
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 67.86904761904762
- type: mrr_at_100
value: 68.1173503682578
- type: mrr_at_1000
value: 68.1173503682578
- type: mrr_at_20
value: 67.97431077694236
- type: mrr_at_3
value: 64.0
- type: mrr_at_5
value: 66.8
- type: nauc_map_at_1000_diff1
value: -22.260198132192173
- type: nauc_map_at_1000_max
value: 54.68403556878306
- type: nauc_map_at_100_diff1
value: -16.159059471544403
- type: nauc_map_at_100_max
value: 46.680332538973104
- type: nauc_map_at_10_diff1
value: -0.16025207380124323
- type: nauc_map_at_10_max
value: 18.60858928303837
- type: nauc_map_at_1_diff1
value: 19.31406645591962
- type: nauc_map_at_1_max
value: 12.446064494149196
- type: nauc_map_at_20_diff1
value: -3.7207534399873086
- type: nauc_map_at_20_max
value: 23.984664337717064
- type: nauc_map_at_3_diff1
value: 11.318172692961777
- type: nauc_map_at_3_max
value: 17.80683628355867
- type: nauc_map_at_5_diff1
value: 7.92181873049933
- type: nauc_map_at_5_max
value: 17.64389113325039
- type: nauc_mrr_at_1000_diff1
value: 17.49153792066571
- type: nauc_mrr_at_1000_max
value: 33.59871091829616
- type: nauc_mrr_at_100_diff1
value: 17.49153792066571
- type: nauc_mrr_at_100_max
value: 33.59871091829616
- type: nauc_mrr_at_10_diff1
value: 17.502786184772496
- type: nauc_mrr_at_10_max
value: 33.97577280665956
- type: nauc_mrr_at_1_diff1
value: 20.469006140423968
- type: nauc_mrr_at_1_max
value: 24.62282237225972
- type: nauc_mrr_at_20_diff1
value: 17.27246967398437
- type: nauc_mrr_at_20_max
value: 33.69787393313599
- type: nauc_mrr_at_3_diff1
value: 17.658115148717215
- type: nauc_mrr_at_3_max
value: 34.66827145024068
- type: nauc_mrr_at_5_diff1
value: 17.916005644695375
- type: nauc_mrr_at_5_max
value: 35.10406736432433
- type: nauc_ndcg_at_1000_diff1
value: -25.695422281160564
- type: nauc_ndcg_at_1000_max
value: 41.85333091055545
- type: nauc_ndcg_at_100_diff1
value: -20.77388791351094
- type: nauc_ndcg_at_100_max
value: 44.356134608903034
- type: nauc_ndcg_at_10_diff1
value: -10.307778980699197
- type: nauc_ndcg_at_10_max
value: 33.23388628961326
- type: nauc_ndcg_at_1_diff1
value: 20.412738715863956
- type: nauc_ndcg_at_1_max
value: 23.390778206963613
- type: nauc_ndcg_at_20_diff1
value: -11.307721360709836
- type: nauc_ndcg_at_20_max
value: 36.352174201276206
- type: nauc_ndcg_at_3_diff1
value: 7.285454029149752
- type: nauc_ndcg_at_3_max
value: 29.03877907321362
- type: nauc_ndcg_at_5_diff1
value: 0.8947521854164275
- type: nauc_ndcg_at_5_max
value: 31.54102751296627
- type: nauc_precision_at_1000_diff1
value: -25.78557535978164
- type: nauc_precision_at_1000_max
value: 37.467970941981896
- type: nauc_precision_at_100_diff1
value: -25.701682320317964
- type: nauc_precision_at_100_max
value: 45.81756747527059
- type: nauc_precision_at_10_diff1
value: -21.234526843340713
- type: nauc_precision_at_10_max
value: 32.91504410405538
- type: nauc_precision_at_1_diff1
value: 20.469006140423968
- type: nauc_precision_at_1_max
value: 24.62282237225972
- type: nauc_precision_at_20_diff1
value: -20.025454190589233
- type: nauc_precision_at_20_max
value: 37.55936600361076
- type: nauc_precision_at_3_diff1
value: 2.8390823388370996
- type: nauc_precision_at_3_max
value: 31.69418560296442
- type: nauc_precision_at_5_diff1
value: -7.36442063396579
- type: nauc_precision_at_5_max
value: 32.88936384031251
- type: nauc_recall_at_1000_diff1
value: -25.040103819963193
- type: nauc_recall_at_1000_max
value: 39.67194190901835
- type: nauc_recall_at_100_diff1
value: -15.819635509190055
- type: nauc_recall_at_100_max
value: 38.20290322073082
- type: nauc_recall_at_10_diff1
value: -2.179337202237811
- type: nauc_recall_at_10_max
value: 15.444689423576962
- type: nauc_recall_at_1_diff1
value: 19.31406645591962
- type: nauc_recall_at_1_max
value: 12.446064494149196
- type: nauc_recall_at_20_diff1
value: -4.369705346989079
- type: nauc_recall_at_20_max
value: 19.689399778184235
- type: nauc_recall_at_3_diff1
value: 11.368703632097438
- type: nauc_recall_at_3_max
value: 18.834378852568555
- type: nauc_recall_at_5_diff1
value: 9.363083205894776
- type: nauc_recall_at_5_max
value: 16.283811472009358
- type: ndcg_at_1
value: 50.0
- type: ndcg_at_10
value: 46.788999999999994
- type: ndcg_at_100
value: 33.676
- type: ndcg_at_1000
value: 36.502
- type: ndcg_at_20
value: 42.895
- type: ndcg_at_3
value: 49.531
- type: ndcg_at_5
value: 49.413000000000004
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 51.2
- type: precision_at_100
value: 34.62
- type: precision_at_1000
value: 16.869999999999997
- type: precision_at_20
value: 45.800000000000004
- type: precision_at_3
value: 54.0
- type: precision_at_5
value: 54.400000000000006
- type: recall_at_1
value: 0.153
- type: recall_at_10
value: 1.373
- type: recall_at_100
value: 8.425
- type: recall_at_1000
value: 36.521
- type: recall_at_20
value: 2.4
- type: recall_at_3
value: 0.441
- type: recall_at_5
value: 0.739
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 1.4449999999999998
- type: map_at_10
value: 5.508
- type: map_at_100
value: 9.561
- type: map_at_1000
value: 11.075
- type: map_at_20
value: 7.195
- type: map_at_3
value: 2.8819999999999997
- type: map_at_5
value: 3.859
- type: mrr_at_1
value: 18.367346938775512
- type: mrr_at_10
value: 30.816326530612244
- type: mrr_at_100
value: 32.36702497368042
- type: mrr_at_1000
value: 32.39717992373145
- type: mrr_at_20
value: 31.906678711584885
- type: mrr_at_3
value: 26.190476190476193
- type: mrr_at_5
value: 28.945578231292522
- type: nauc_map_at_1000_diff1
value: 15.86075185050381
- type: nauc_map_at_1000_max
value: -39.68076638135203
- type: nauc_map_at_100_diff1
value: 14.373599398703885
- type: nauc_map_at_100_max
value: -40.06871903205363
- type: nauc_map_at_10_diff1
value: 17.129188771799722
- type: nauc_map_at_10_max
value: -40.62148414336222
- type: nauc_map_at_1_diff1
value: 19.300742296090192
- type: nauc_map_at_1_max
value: -39.01422276688408
- type: nauc_map_at_20_diff1
value: 17.096257109130104
- type: nauc_map_at_20_max
value: -41.45895858788768
- type: nauc_map_at_3_diff1
value: 13.061713953725201
- type: nauc_map_at_3_max
value: -38.41319841534379
- type: nauc_map_at_5_diff1
value: 14.349086737587205
- type: nauc_map_at_5_max
value: -38.969968391834044
- type: nauc_mrr_at_1000_diff1
value: 15.890202844477846
- type: nauc_mrr_at_1000_max
value: -35.71618277376245
- type: nauc_mrr_at_100_diff1
value: 15.922757469316565
- type: nauc_mrr_at_100_max
value: -35.79109859355446
- type: nauc_mrr_at_10_diff1
value: 15.047536449761841
- type: nauc_mrr_at_10_max
value: -36.56394292392469
- type: nauc_mrr_at_1_diff1
value: 23.5674706768817
- type: nauc_mrr_at_1_max
value: -34.577680813370684
- type: nauc_mrr_at_20_diff1
value: 15.48856353024658
- type: nauc_mrr_at_20_max
value: -35.79541680443546
- type: nauc_mrr_at_3_diff1
value: 15.806087622568954
- type: nauc_mrr_at_3_max
value: -32.477788788477206
- type: nauc_mrr_at_5_diff1
value: 15.100010170892547
- type: nauc_mrr_at_5_max
value: -34.902570265426476
- type: nauc_ndcg_at_1000_diff1
value: 17.06221439254491
- type: nauc_ndcg_at_1000_max
value: -38.057099656137524
- type: nauc_ndcg_at_100_diff1
value: 10.712806009366044
- type: nauc_ndcg_at_100_max
value: -41.634510046296825
- type: nauc_ndcg_at_10_diff1
value: 19.714184908152074
- type: nauc_ndcg_at_10_max
value: -38.35275712711699
- type: nauc_ndcg_at_1_diff1
value: 27.689699524955962
- type: nauc_ndcg_at_1_max
value: -32.166823132012276
- type: nauc_ndcg_at_20_diff1
value: 16.460154587871894
- type: nauc_ndcg_at_20_max
value: -44.9036600147991
- type: nauc_ndcg_at_3_diff1
value: 20.089462936175444
- type: nauc_ndcg_at_3_max
value: -28.050150980736177
- type: nauc_ndcg_at_5_diff1
value: 16.85293507256734
- type: nauc_ndcg_at_5_max
value: -30.806342862683927
- type: nauc_precision_at_1000_diff1
value: 14.408977497220873
- type: nauc_precision_at_1000_max
value: 37.74317255169207
- type: nauc_precision_at_100_diff1
value: -1.535852218534388
- type: nauc_precision_at_100_max
value: -19.385555066523708
- type: nauc_precision_at_10_diff1
value: 14.935398953941345
- type: nauc_precision_at_10_max
value: -40.7784122393935
- type: nauc_precision_at_1_diff1
value: 23.5674706768817
- type: nauc_precision_at_1_max
value: -34.577680813370684
- type: nauc_precision_at_20_diff1
value: 10.2401285323039
- type: nauc_precision_at_20_max
value: -44.04141433293453
- type: nauc_precision_at_3_diff1
value: 15.784680322541114
- type: nauc_precision_at_3_max
value: -30.464693842536324
- type: nauc_precision_at_5_diff1
value: 6.837543215418572
- type: nauc_precision_at_5_max
value: -32.9314191958357
- type: nauc_recall_at_1000_diff1
value: 8.533481249495253
- type: nauc_recall_at_1000_max
value: -30.221386840946657
- type: nauc_recall_at_100_diff1
value: -1.394100451328846
- type: nauc_recall_at_100_max
value: -41.79269914007117
- type: nauc_recall_at_10_diff1
value: 13.77128337229429
- type: nauc_recall_at_10_max
value: -44.513151444340814
- type: nauc_recall_at_1_diff1
value: 19.300742296090192
- type: nauc_recall_at_1_max
value: -39.01422276688408
- type: nauc_recall_at_20_diff1
value: 8.568504019773036
- type: nauc_recall_at_20_max
value: -47.8434381158021
- type: nauc_recall_at_3_diff1
value: 9.308189193923543
- type: nauc_recall_at_3_max
value: -39.95524531900913
- type: nauc_recall_at_5_diff1
value: 10.205415401777017
- type: nauc_recall_at_5_max
value: -38.78454250086998
- type: ndcg_at_1
value: 16.326999999999998
- type: ndcg_at_10
value: 14.472999999999999
- type: ndcg_at_100
value: 24.621000000000002
- type: ndcg_at_1000
value: 37.964999999999996
- type: ndcg_at_20
value: 16.55
- type: ndcg_at_3
value: 15.432000000000002
- type: ndcg_at_5
value: 14.654
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 14.285999999999998
- type: precision_at_100
value: 5.612
- type: precision_at_1000
value: 1.39
- type: precision_at_20
value: 11.735
- type: precision_at_3
value: 17.007
- type: precision_at_5
value: 16.326999999999998
- type: recall_at_1
value: 1.4449999999999998
- type: recall_at_10
value: 10.796999999999999
- type: recall_at_100
value: 36.172
- type: recall_at_1000
value: 75.737
- type: recall_at_20
value: 17.494
- type: recall_at_3
value: 3.74
- type: recall_at_5
value: 6.131
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 62.5244140625
- type: ap
value: 11.036738738208067
- type: ap_weighted
value: 11.036738738208067
- type: f1
value: 48.178922337841016
- type: f1_weighted
value: 70.79346668027127
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.60384833050367
- type: f1
value: 59.89473957112635
- type: f1_weighted
value: 59.21770850754739
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 40.52017931682707
- type: v_measures
value:
- 0.41056001750265486
- 0.4115812168479953
- 0.4020131539653664
- 0.41845373495523314
- 0.3990043371824943
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.19860523335518
- type: cos_sim_ap
value: 67.98183780223552
- type: cos_sim_f1
value: 63.287574797606474
- type: cos_sim_precision
value: 56.98288611874076
- type: cos_sim_recall
value: 71.16094986807387
- type: dot_accuracy
value: 81.0872027180068
- type: dot_ap
value: 57.080616165589994
- type: dot_f1
value: 57.184056030487184
- type: dot_precision
value: 46.899814157796925
- type: dot_recall
value: 73.24538258575198
- type: euclidean_accuracy
value: 84.10919711509806
- type: euclidean_ap
value: 68.02422564958268
- type: euclidean_f1
value: 63.76539589442815
- type: euclidean_precision
value: 57.40232312565998
- type: euclidean_recall
value: 71.71503957783642
- type: manhattan_accuracy
value: 84.06747332657805
- type: manhattan_ap
value: 67.74186393843273
- type: manhattan_f1
value: 63.57935359382538
- type: manhattan_precision
value: 58.55175477565526
- type: manhattan_recall
value: 69.55145118733509
- type: max_accuracy
value: 84.19860523335518
- type: max_ap
value: 68.02422564958268
- type: max_f1
value: 63.76539589442815
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.40311250824699
- type: cos_sim_ap
value: 86.47944792205789
- type: cos_sim_f1
value: 78.80539499036608
- type: cos_sim_precision
value: 75.95714285714286
- type: cos_sim_recall
value: 81.87557745611333
- type: dot_accuracy
value: 87.8468583847557
- type: dot_ap
value: 83.05643449341216
- type: dot_f1
value: 76.55210439257489
- type: dot_precision
value: 73.24330027431948
- type: dot_recall
value: 80.17400677548507
- type: euclidean_accuracy
value: 89.29250591842279
- type: euclidean_ap
value: 86.35499372223612
- type: euclidean_f1
value: 78.9011715450439
- type: euclidean_precision
value: 75.43009620110948
- type: euclidean_recall
value: 82.7071142593163
- type: manhattan_accuracy
value: 89.26339892110063
- type: manhattan_ap
value: 86.2956040159182
- type: manhattan_f1
value: 78.78428904601488
- type: manhattan_precision
value: 75.87165775401068
- type: manhattan_recall
value: 81.92947336002464
- type: max_accuracy
value: 89.40311250824699
- type: max_ap
value: 86.47944792205789
- type: max_f1
value: 78.9011715450439
---
# [bilingual-embedding-small](https://huggingface.co/Lajavaness/bilingual-embedding-small)
Bilingual-embedding is an embedding model for two languages: French and English. It is a sentence-embedding model trained specifically for this bilingual setting, leveraging the robust capabilities of [Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384), a pre-trained language model built upon the [multilingual-e5](https://huggingface.co/intfloat/multilingual-e5-small) architecture. The model uses MiniLM to encode English-French sentences into a 384-dimensional vector space (matching the pooling configuration shown below), supporting a wide range of applications from semantic search to text clustering. The embeddings capture the nuanced meanings of English-French sentences, reflecting both the lexical and contextual layers of the language.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BilingualModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Training and Fine-tuning process
### Stage 1: NLI Training
- Dataset: SNLI + XNLI (English and French)
- Method: Training with Multiple Negatives Ranking Loss (an illustrative sketch follows below). This stage focused on improving the model's ability to discern and rank nuanced differences in sentence semantics.
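The following is only an illustrative sketch of what such a Multiple Negatives Ranking Loss stage could look like with the `sentence-transformers` training API; the base checkpoint, example pairs, and hyperparameters are assumptions, not the exact recipe used for this model.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Hypothetical premise/hypothesis pairs standing in for the SNLI+XNLI (en/fr) data.
train_examples = [
    InputExample(texts=["A man is playing a guitar.", "Un homme joue de la guitare."]),
    InputExample(texts=["A child runs in the park.", "Un enfant court dans le parc."]),
]

model = SentenceTransformer("intfloat/multilingual-e5-small")  # assumed starting checkpoint
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Multiple Negatives Ranking Loss treats the other in-batch pairs as negatives.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
```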
### Stage 3: Continued Fine-tuning for Semantic Textual Similarity on STS Benchmark
- Dataset: [STSB-fr and en]
- Method: Fine-tuning specifically for the semantic textual similarity benchmark using Siamese BERT-Networks configured with the 'sentence-transformers' library.
### Stage 4: Advanced Augmentation Fine-tuning
- Dataset: STSB with generate [silver sample from gold sample](https://www.sbert.net/examples/training/data_augmentation/README.html)
- Method: Employed an advanced strategy using [Augmented SBERT](https://arxiv.org/abs/2010.08240) with Pair Sampling Strategies, integrating both Cross-Encoder and Bi-Encoder models. This stage further refined the embeddings by enriching the training data dynamically, enhancing the model's robustness and accuracy.
## Usage:
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

sentences = ["Paris est une capitale de la France", "Paris is a capital of France"]

model = SentenceTransformer('Lajavaness/bilingual-embedding-small', trust_remote_code=True)
embeddings = model.encode(sentences)
print(embeddings)
```
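A short follow-up showing how the two embeddings can be compared; the similarity call below is illustrative, and the exact score depends on the released weights.
```python
from sentence_transformers import util

# Cosine similarity between the French and English sentences encoded above.
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```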
## Evaluation
TODO
## Citation
@article{conneau2019unsupervised,
title={Unsupervised cross-lingual representation learning at scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
@article{reimers2019sentence,
title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
author={Nils Reimers, Iryna Gurevych},
journal={https://arxiv.org/abs/1908.10084},
year={2019}
}
@article{thakur2020augmented,
title={Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks},
author={Thakur, Nandan and Reimers, Nils and Daxenberger, Johannes and Gurevych, Iryna},
journal={arXiv e-prints},
pages={arXiv--2010},
year={2020}
} | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
QiaoyuZheng/RP3D-DiagModel | QiaoyuZheng | null | [
"license:apache-2.0",
"region:us"
] | 1,703,923,273,000 | 2024-01-16T09:09:19 | 0 | 3 | ---
license: apache-2.0
---
# RP3D-DiagModel
## About Checkpoint
The detailed parameters we used for training are as follows:
```
start_class: 0
end_class: 5569
backbone: 'resnet'
level: 'articles' # represents the disorder level
depth: 32
ltype: 'MultiLabel' # represents the Binary Cross Entropy Loss
augment: True # represents the medical data augmentation
split: 'late' # represents the late fusion strategy
```
### Load Model
```python
# Load backbone
model = RadNet(num_cls=num_classes, backbone=backbone, depth=depth, ltype=ltype, augment=augment, fuse=fuse, ke=ke, encoded=encoded, adapter=adapter)
pretrained_weights = torch.load("path/to/pytorch_model_32_late.bin")
missing, unexpect = model.load_state_dict(pretrained_weights,strict=False)
print("missing_cpt:", missing)
print("unexpect_cpt:", unexpect)
# If KE is set True, load text encoder
medcpt = MedCPT_clinical(bert_model_name = 'ncbi/MedCPT-Query-Encoder')
checkpoint = torch.load('path/to/epoch_state.pt',map_location='cpu')['state_dict']
load_checkpoint = {key.replace('module.', ''): value for key, value in checkpoint.items()}
missing, unexpect = medcpt.load_state_dict(load_checkpoint, strict=False)
print("missing_cpt:", missing)
print("unexpect_cpt:", unexpect)
```
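Note that the snippet above references variables (`num_classes`, `fuse`, `ke`, `encoded`, `adapter`, ...) that are not defined in this card. A hypothetical mapping from the training parameters listed earlier might look like the sketch below; the `ke`, `encoded`, and `adapter` values are placeholders, since they are not specified in the config.
```python
# Hypothetical mapping from the documented training parameters to the
# constructor arguments used above; adjust to match the actual RadNet API.
num_classes = 5569          # end_class - start_class
backbone = 'resnet'
depth = 32
ltype = 'MultiLabel'        # Binary Cross Entropy loss
augment = True              # medical data augmentation
fuse = 'late'               # late fusion strategy ('split: late' in the config)
ke = False                  # placeholder: set True to load the MedCPT text encoder
encoded = False             # placeholder
adapter = False             # placeholder
```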
## Why do we provide this checkpoint?
All of the early fusion checkpoints can be further fine-tuned from this checkpoint. If you need other checkpoints with different parameter settings, there are two possible ways:
### Fine-tune from this checkpoint
```
checkpoint: "None"
safetensor: path to this checkpoint (pytorch_model.bin)
```
### Contact Us
Email the author: [email protected]
## About Dataset
Please refer to [RP3D-DiagDS](https://huggingface.co/datasets/QiaoyuZheng/RP3D-DiagDS)
For more information, please refer to our instructions on [github](https://github.com/qiaoyu-zheng/RP3D-Diag) to download and use. | [
"MEDICAL DATA"
] | Non_BioNLP |
tsavage68/MedQA_L3_350steps_1e7rate_03beta_CSFTDPO | tsavage68 | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,716,518,287,000 | 2024-05-24T02:42:23 | 4 | 0 | ---
base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT
license: llama3
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_350steps_1e7rate_03beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_350steps_1e7rate_03beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https://huggingface.co/tsavage68/MedQA_L3_1000steps_1e6rate_SFT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6516
- Rewards/chosen: 0.2738
- Rewards/rejected: 0.1790
- Rewards/accuracies: 0.7099
- Rewards/margins: 0.0948
- Logps/rejected: -33.2582
- Logps/chosen: -30.4158
- Logits/rejected: -0.7313
- Logits/chosen: -0.7305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 350
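For reference, the listed hyperparameters roughly correspond to a TRL DPO run like the sketch below; the dataset file, column layout, and exact argument names are assumptions (they vary across TRL versions) and are not taken from this training run.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "tsavage68/MedQA_L3_1000steps_1e6rate_SFT"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference data with "prompt", "chosen" and "rejected" columns.
train_dataset = load_dataset("json", data_files="medqa_preferences.json", split="train")

config = DPOConfig(
    output_dir="MedQA_L3_350steps_1e7rate_03beta_CSFTDPO",
    beta=0.3,                          # "03beta" in the model name
    learning_rate=1e-7,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=350,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions use tokenizer= instead
)
trainer.train()
```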
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6925 | 0.0489 | 50 | 0.6930 | -0.0016 | -0.0023 | 0.5011 | 0.0007 | -33.8624 | -31.3338 | -0.7320 | -0.7314 |
| 0.6841 | 0.0977 | 100 | 0.6807 | 0.2459 | 0.2195 | 0.6549 | 0.0264 | -33.1233 | -30.5088 | -0.7330 | -0.7323 |
| 0.6524 | 0.1466 | 150 | 0.6658 | 0.3522 | 0.2898 | 0.6703 | 0.0624 | -32.8887 | -30.1544 | -0.7315 | -0.7308 |
| 0.631 | 0.1954 | 200 | 0.6545 | 0.1829 | 0.0948 | 0.6923 | 0.0881 | -33.5389 | -30.7188 | -0.7310 | -0.7303 |
| 0.6675 | 0.2443 | 250 | 0.6520 | 0.2481 | 0.1544 | 0.7121 | 0.0938 | -33.3403 | -30.5014 | -0.7309 | -0.7301 |
| 0.6479 | 0.2931 | 300 | 0.6509 | 0.2738 | 0.1773 | 0.7099 | 0.0966 | -33.2640 | -30.4157 | -0.7310 | -0.7303 |
| 0.6583 | 0.3420 | 350 | 0.6516 | 0.2738 | 0.1790 | 0.7099 | 0.0948 | -33.2582 | -30.4158 | -0.7313 | -0.7305 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
| [
"MEDQA"
] | BioNLP |
PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-v1.1 | PatronusAI | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pytorch",
"Lynx",
"Patronus AI",
"evaluation",
"hallucination-detection",
"conversational",
"en",
"arxiv:2407.08488",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,721,851,223,000 | 2024-07-31T17:01:34 | 9,624 | 10 | ---
language:
- en
library_name: transformers
license: cc-by-nc-4.0
tags:
- text-generation
- pytorch
- Lynx
- Patronus AI
- evaluation
- hallucination-detection
---
# Model Card for Model ID
Lynx is an open-source hallucination evaluation model. Patronus-Lynx-8B-Instruct-v1.1 was trained on a mix of datasets including CovidQA, PubmedQA, DROP, RAGTruth.
The datasets contain a mix of hand-annotated and synthetic data. The maximum sequence length is 128000 tokens.
## Model Details
- **Model Type:** Patronus-Lynx-8B-Instruct-v1.1 is a fine-tuned version of the meta-llama/Meta-Llama-3.1-8B-Instruct model.
- **Language:** Primarily English
- **Developed by:** Patronus AI
- **Paper:** [https://arxiv.org/abs/2407.08488](https://arxiv.org/abs/2407.08488)
- **License:** [https://creativecommons.org/licenses/by-nc/4.0/](https://creativecommons.org/licenses/by-nc/4.0/)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/patronus-ai/Lynx-hallucination-detection](https://github.com/patronus-ai/Lynx-hallucination-detection)
## How to Get Started with the Model
Lynx is trained to detect hallucinations in RAG settings. Provided a document, question and answer, the model can evaluate whether the answer is faithful to the document.
To use the model, we recommend using the following prompt:
```
PROMPT = """
Given the following QUESTION, DOCUMENT and ANSWER you must analyze the provided answer and determine whether it is faithful to the contents of the DOCUMENT. The ANSWER must not offer new information beyond the context provided in the DOCUMENT. The ANSWER also must not contradict information provided in the DOCUMENT. Output your final verdict by strictly following this format: "PASS" if the answer is faithful to the DOCUMENT and "FAIL" if the answer is not faithful to the DOCUMENT. Show your reasoning.
--
QUESTION (THIS DOES NOT COUNT AS BACKGROUND INFORMATION):
{question}
--
DOCUMENT:
{context}
--
ANSWER:
{answer}
--
Your output should be in JSON FORMAT with the keys "REASONING" and "SCORE":
{{"REASONING": <your reasoning as bullet points>, "SCORE": <your final score>}}
"""
```
The model will output the score as "PASS" if the answer is faithful to the document and "FAIL" if it is not.
## Inference
To run inference, you can use HF pipeline:
```python
from transformers import pipeline

model_name = 'PatronusAI/Llama-3-Patronus-Lynx-8B-Instruct-v1.1'
pipe = pipeline(
"text-generation",
model=model_name,
max_new_tokens=600,
device="cuda",
return_full_text=False
)
messages = [
{"role": "user", "content": prompt},
]
result = pipe(messages)
print(result[0]['generated_text'])
```
Since the model is trained in chat format, ensure that you pass the prompt as a user message.
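To tie the prompt template and the pipeline together, the prompt can be built as shown below; the question, document, and answer strings are invented purely for illustration.
```python
# Illustrative inputs; replace with your own RAG question, retrieved document, and answer.
question = "What is the capital of France?"
context = "France is a country in Western Europe. Its capital city is Paris."
answer = "The capital of France is Paris."

prompt = PROMPT.format(question=question, context=context, answer=answer)

messages = [{"role": "user", "content": prompt}]
result = pipe(messages)
print(result[0]['generated_text'])  # should contain a JSON object with REASONING and SCORE
```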
For more information on training details, refer to our [ArXiv paper](https://arxiv.org/abs/2407.08488).
## Evaluation
The model was evaluated on [PatronusAI/HaluBench](https://huggingface.co/datasets/PatronusAI/HaluBench).
| Model | HaluEval | RAGTruth | FinanceBench | DROP | CovidQA | PubmedQA | Overall |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| GPT-4o | <ins>87.9%</ins> | 84.3% | <ins>85.3%</ins> | 84.3% | 95.0% | 82.1% | <ins>86.5%</ins> |
| GPT-4-Turbo | 86.0% | <ins>85.0%</ins> | 82.2% | <ins>84.8%</ins> | 90.6% | 83.5% | 85.0% |
| GPT-3.5-Turbo | 62.2% | 50.7% | 60.9% | 57.2% | 56.7% | 62.8% | 58.7% |
| Claude-3.5-Sonnet | 84.5% | 79.1% | 69.3% | 69.7% | 70.8% | 84.8% | 83.7% |
| RAGAS Faithfulness | 70.6% | 75.8% | 59.5% | 59.6% | 75.0% | 67.7% | 66.9% |
| Mistral-Instruct-7B | 78.3% | 77.7% | 56.3% | 56.3% | 71.7% | 77.9% | 69.4% |
| Llama-3-Instruct-8B | 83.1% | 80.0% | 55.0% | 58.2% | 75.2% | 70.7% | 70.4% |
| Llama-3-Instruct-70B | 87.0% | **83.8%** | 72.7% | 69.4% | 85.0% | 82.6% | 80.1% |
| Lynx (8B) | 85.7% | 80.0% | 72.5% | **77.8%** | 96.3% | 85.2% | 82.9% |
| Lynx v1.1 (8B) | **87.3%** | 79.9% | **75.6%** | 77.5% | <ins>**96.9%**</ins> |<ins> **88.9%**</ins> | **84.3%** |
## Citation
If you are using the model, cite using
```
@article{ravi2024lynx,
title={Lynx: An Open Source Hallucination Evaluation Model},
author={Ravi, Selvan Sunitha and Mielczarek, Bartosz and Kannappan, Anand and Kiela, Douwe and Qian, Rebecca},
journal={arXiv preprint arXiv:2407.08488},
year={2024}
}
```
## Model Card Contact
[@sunitha-ravi](https://huggingface.co/sunitha-ravi)
[@RebeccaQian1](https://huggingface.co/RebeccaQian1)
[@presidev](https://huggingface.co/presidev) | [
"PUBMEDQA"
] | Non_BioNLP |
Triangle104/EtherealRainbow-v0.3-8B-Q6_K-GGUF | Triangle104 | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:invisietch/EtherealRainbow-v0.3-8B",
"base_model:quantized:invisietch/EtherealRainbow-v0.3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,731,948,209,000 | 2024-11-18T16:56:13 | 2 | 0 | ---
base_model: invisietch/EtherealRainbow-v0.3-8B
language:
- en
library_name: transformers
license: llama3
tags:
- mergekit
- merge
- not-for-all-audiences
- llama-cpp
- gguf-my-repo
---
# Triangle104/EtherealRainbow-v0.3-8B-Q6_K-GGUF
This model was converted to GGUF format from [`invisietch/EtherealRainbow-v0.3-8B`](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B) for more details on the model.
---
Model details:
-
Ethereal Rainbow is an 8B parameter merge of various Llama3-based finetunes created using mergekit. The purpose of Ethereal Rainbow is to create an uncensored Llama3 variant which is capable of writing creative prose, and engaging in SFW as well as NSFW roleplay and storytelling, with a strong focus on long-form responses & adherence to prompts.
v0.3 improves creativity over v0.2 without losing coherence. It has been tested over more than 1,000 messages including roleplay, code prompts, and 'write a scene'-type prompts.
Feedback
-
I appreciate all feedback on any of my models; you can use:
My Discord server - requires Discord.
The Community tab - requires HF login.
The SillyTavern Discord thread - must be on SillyTavern Discord.
Discord DMs to invisietch.
Your feedback is how I improve these models for future versions.
Disclaimer
-
This model is built on an abliterated base and as such is largely uncensored. It can generate explicit, disturbing or offensive responses. Use responsibly. I am not responsible for your use of this model.
Prompting Format
I'd recommend Llama-3 Instruct prompting format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
Some of the models included in the merge were trained on ChatML & Alpaca so you can try those. I have not tested them.
Example Storywriting
These prompts are used on SillyTavern with a fairly basic narrator card. I have trimmed the start and finish where the narrator decided to add chapter headings, commentary and the like. All samples are made with the F32 GGUF loaded with koboldcpp, with response length capped at 2048 tokens.
Write me a 3,000 word opening chapter of a 'gritty hard sci-fi' novel, drawing inspiration from the writing styles of Isaac Asimov & Andy Weir. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 26 year old astronaut called Tone on a mission to Europa, who has just realised that the craft for the return journey is broken beyond repair, and he only has supplies for a few months. Given that survival is impossible, he seeks to spend the few months he has researching titan, so his life & mission are not wasted.
Write me a 3,000 word opening chapter of a 'high fantasy' novel, drawing inspiration from the writing styles of J R R Tolkien & George R R Martin. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a 19 year old female elf bard who is looking for adventure.
Write me a 3,000 word opening chapter of a 'weird fiction' novel, drawing inspiration from the writing styles of China Mieville and Neil Gaiman. Use third person personal. Include dialogue and internal monologues. The POV character for the opening chapter should be a male in his 20s called Horton who has just come to the city looking for work.
I chose the hard sci-fi example to test positivity bias. It did require some prompting, but it was willing to kill the protagonist.
I chose the high fantasy example to see whether it would bleed human features through to elves; this didn't occur.
I chose the weird fiction example to see if the LLM understood a niche genre. I'd say it performed okay, better on style than on substance.
Merge Strategy
First, we create three bases:
Rain - This is a roleplay base which makes up the majority of the model.
Sun - This is the brains of the model, with strong instruct models & writing models.
Ghost - This model primarily aims to improve the NSFW/NSFL aspects of the model, as well as general vocabulary.
After this, we have a two-slerp stage to create the final model.
Models Used
The following models were used to create EtherealRainbow-v0.3-8B:
mlabonne/NeuralDaredevil-8B-abliterated
Sao10K/L3-8B-Stheno-v3.2
Nitral-AI/Hathor-L3-8B-v.02
grimjim/Llama-3-Luminurse-v0.2-OAS-8B
hf-100/Llama-3-Spellbound-Instruct-8B-0.3
Gryphe/Pantheon-RP-1.0-8b-Llama-3
Blackroot/Llama-3-LongStory
Locutusque/Llama-3-Hercules-5.0-8B
Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
mpasila/Llama-3-LimaRP-Instruct-8B
Undi95/Llama-3-LewdPlay-8B-evo
Mergekit Configs
-
Rain
-
```yaml
models:
  - model: mlabonne/NeuralDaredevil-8B-abliterated
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.41
      weight: 0.4
  - model: Nitral-AI/Hathor-L3-8B-v.02
    parameters:
      density: 0.53
      weight: 0.5
  - model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B
    parameters:
      density: 0.45
      weight: 0.1
merge_method: dare_ties
base_model: mlabonne/NeuralDaredevil-8B-abliterated
parameters:
  int8_mask: true
dtype: bfloat16
```
Sun
-
```yaml
models:
  - model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
    parameters:
      density: 0.48
      weight: 0.5
  - model: Blackroot/Llama-3-LongStory
    parameters:
      density: 0.36
      weight: 0.2
  - model: Locutusque/Llama-3-Hercules-5.0-8B
    parameters:
      density: 0.51
      weight: 0.3
merge_method: dare_ties
base_model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
parameters:
  int8_mask: true
dtype: bfloat16
```
Ghost
-
```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: 0.39
      weight: 0.3
  - model: mpasila/Llama-3-LimaRP-Instruct-8B
    parameters:
      density: 0.54
      weight: 0.4
  - model: Undi95/Llama-3-LewdPlay-8B-evo
    parameters:
      density: 0.49
      weight: 0.3
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
Stage1 Slerp
-
```yaml
models:
  - model: ./fp16/Rain-v0.3-8B
  - model: ./fp16/Ghost-v0.3-8B
merge_method: slerp
base_model: ./fp16/Rain-v0.3-8B
parameters:
  t:
    - value: [0, 0, 0.1, 0.3, 0.5, 0.7, 0.5, 0.3, 0.1, 0, 0]
  embed_slerp: true
dtype: bfloat16
tokenizer-source: model:./fp16/Rain-v0.3-8B
```
Final-Stage Slerp
-
```yaml
models:
  - model: ./fp16/ERStage1-v0.3-8B
  - model: ./fp16/Sun-v0.3-8B
merge_method: slerp
base_model: ./fp16/ERStage1-v0.3-8B
parameters:
  t:
    - value: [0, 0, 0.1, 0.2, 0.4, 0.6, 0.4, 0.2, 0.1, 0, 0]
  embed_slerp: true
dtype: bfloat16
tokenizer-source: model:./fp16/ERStage1-v0.3-8B
```
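To reproduce a merge like these, the configs above can be run with mergekit's command-line tool; the file name and output directory below are placeholders, and the available flags may differ between mergekit versions.
```bash
pip install mergekit

# Run one of the YAML configs above (saved locally as, e.g., rain-v0.3.yml).
mergekit-yaml rain-v0.3.yml ./fp16/Rain-v0.3-8B --copy-tokenizer
```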
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/EtherealRainbow-v0.3-8B-Q6_K-GGUF --hf-file etherealrainbow-v0.3-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/EtherealRainbow-v0.3-8B-Q6_K-GGUF --hf-file etherealrainbow-v0.3-8b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/EtherealRainbow-v0.3-8B-Q6_K-GGUF --hf-file etherealrainbow-v0.3-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/EtherealRainbow-v0.3-8B-Q6_K-GGUF --hf-file etherealrainbow-v0.3-8b-q6_k.gguf -c 2048
```
| [
"CRAFT"
] | Non_BioNLP |
ahmet1338/finetuned_embedder | ahmet1338 | sentence-similarity | [
"sentence-transformers",
"pytorch",
"t5",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"prompt-retrieval",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"transformers",
"English",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,713,775,482,000 | 2024-04-22T08:45:59 | 14 | 0 | ---
language: en
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- prompt-retrieval
- text-reranking
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- t5
- English
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
inference: false
model-index:
- name: INSTRUCTOR
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 88.13432835820896
- type: ap
value: 59.298209334395665
- type: f1
value: 83.31769058643586
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.526375
- type: ap
value: 88.16327709705504
- type: f1
value: 91.51095801287843
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.856
- type: f1
value: 45.41490917650942
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.223
- type: map_at_10
value: 47.947
- type: map_at_100
value: 48.742000000000004
- type: map_at_1000
value: 48.745
- type: map_at_3
value: 43.137
- type: map_at_5
value: 45.992
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 48.4
- type: mrr_at_100
value: 49.202
- type: mrr_at_1000
value: 49.205
- type: mrr_at_3
value: 43.551
- type: mrr_at_5
value: 46.467999999999996
- type: ndcg_at_1
value: 31.223
- type: ndcg_at_10
value: 57.045
- type: ndcg_at_100
value: 60.175
- type: ndcg_at_1000
value: 60.233000000000004
- type: ndcg_at_3
value: 47.171
- type: ndcg_at_5
value: 52.322
- type: precision_at_1
value: 31.223
- type: precision_at_10
value: 8.599
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.63
- type: precision_at_5
value: 14.282
- type: recall_at_1
value: 31.223
- type: recall_at_10
value: 85.989
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 58.89
- type: recall_at_5
value: 71.408
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.1621946393635
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.56417132407894
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.29539304390207
- type: mrr
value: 76.44484017060196
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 84.38746499431112
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.51298701298701
- type: f1
value: 77.49041754069235
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.61848554098577
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.32623280148178
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.803000000000004
- type: map_at_10
value: 48.848
- type: map_at_100
value: 50.5
- type: map_at_1000
value: 50.602999999999994
- type: map_at_3
value: 45.111000000000004
- type: map_at_5
value: 47.202
- type: mrr_at_1
value: 44.635000000000005
- type: mrr_at_10
value: 55.593
- type: mrr_at_100
value: 56.169999999999995
- type: mrr_at_1000
value: 56.19499999999999
- type: mrr_at_3
value: 53.361999999999995
- type: mrr_at_5
value: 54.806999999999995
- type: ndcg_at_1
value: 44.635000000000005
- type: ndcg_at_10
value: 55.899
- type: ndcg_at_100
value: 60.958
- type: ndcg_at_1000
value: 62.302
- type: ndcg_at_3
value: 51.051
- type: ndcg_at_5
value: 53.351000000000006
- type: precision_at_1
value: 44.635000000000005
- type: precision_at_10
value: 10.786999999999999
- type: precision_at_100
value: 1.6580000000000001
- type: precision_at_1000
value: 0.213
- type: precision_at_3
value: 24.893
- type: precision_at_5
value: 17.740000000000002
- type: recall_at_1
value: 35.803000000000004
- type: recall_at_10
value: 68.657
- type: recall_at_100
value: 89.77199999999999
- type: recall_at_1000
value: 97.67
- type: recall_at_3
value: 54.066
- type: recall_at_5
value: 60.788
- type: map_at_1
value: 33.706
- type: map_at_10
value: 44.896
- type: map_at_100
value: 46.299
- type: map_at_1000
value: 46.44
- type: map_at_3
value: 41.721000000000004
- type: map_at_5
value: 43.486000000000004
- type: mrr_at_1
value: 41.592
- type: mrr_at_10
value: 50.529
- type: mrr_at_100
value: 51.22
- type: mrr_at_1000
value: 51.258
- type: mrr_at_3
value: 48.205999999999996
- type: mrr_at_5
value: 49.528
- type: ndcg_at_1
value: 41.592
- type: ndcg_at_10
value: 50.77199999999999
- type: ndcg_at_100
value: 55.383
- type: ndcg_at_1000
value: 57.288
- type: ndcg_at_3
value: 46.324
- type: ndcg_at_5
value: 48.346000000000004
- type: precision_at_1
value: 41.592
- type: precision_at_10
value: 9.516
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 22.399
- type: precision_at_5
value: 15.770999999999999
- type: recall_at_1
value: 33.706
- type: recall_at_10
value: 61.353
- type: recall_at_100
value: 80.182
- type: recall_at_1000
value: 91.896
- type: recall_at_3
value: 48.204
- type: recall_at_5
value: 53.89699999999999
- type: map_at_1
value: 44.424
- type: map_at_10
value: 57.169000000000004
- type: map_at_100
value: 58.202
- type: map_at_1000
value: 58.242000000000004
- type: map_at_3
value: 53.825
- type: map_at_5
value: 55.714
- type: mrr_at_1
value: 50.470000000000006
- type: mrr_at_10
value: 60.489000000000004
- type: mrr_at_100
value: 61.096
- type: mrr_at_1000
value: 61.112
- type: mrr_at_3
value: 58.192
- type: mrr_at_5
value: 59.611999999999995
- type: ndcg_at_1
value: 50.470000000000006
- type: ndcg_at_10
value: 63.071999999999996
- type: ndcg_at_100
value: 66.964
- type: ndcg_at_1000
value: 67.659
- type: ndcg_at_3
value: 57.74399999999999
- type: ndcg_at_5
value: 60.367000000000004
- type: precision_at_1
value: 50.470000000000006
- type: precision_at_10
value: 10.019
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 25.558999999999997
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 44.424
- type: recall_at_10
value: 77.02
- type: recall_at_100
value: 93.738
- type: recall_at_1000
value: 98.451
- type: recall_at_3
value: 62.888
- type: recall_at_5
value: 69.138
- type: map_at_1
value: 26.294
- type: map_at_10
value: 34.503
- type: map_at_100
value: 35.641
- type: map_at_1000
value: 35.724000000000004
- type: map_at_3
value: 31.753999999999998
- type: map_at_5
value: 33.190999999999995
- type: mrr_at_1
value: 28.362
- type: mrr_at_10
value: 36.53
- type: mrr_at_100
value: 37.541000000000004
- type: mrr_at_1000
value: 37.602000000000004
- type: mrr_at_3
value: 33.917
- type: mrr_at_5
value: 35.358000000000004
- type: ndcg_at_1
value: 28.362
- type: ndcg_at_10
value: 39.513999999999996
- type: ndcg_at_100
value: 44.815
- type: ndcg_at_1000
value: 46.839
- type: ndcg_at_3
value: 34.02
- type: ndcg_at_5
value: 36.522
- type: precision_at_1
value: 28.362
- type: precision_at_10
value: 6.101999999999999
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.161999999999999
- type: precision_at_5
value: 9.966
- type: recall_at_1
value: 26.294
- type: recall_at_10
value: 53.098
- type: recall_at_100
value: 76.877
- type: recall_at_1000
value: 91.834
- type: recall_at_3
value: 38.266
- type: recall_at_5
value: 44.287
- type: map_at_1
value: 16.407
- type: map_at_10
value: 25.185999999999996
- type: map_at_100
value: 26.533
- type: map_at_1000
value: 26.657999999999998
- type: map_at_3
value: 22.201999999999998
- type: map_at_5
value: 23.923
- type: mrr_at_1
value: 20.522000000000002
- type: mrr_at_10
value: 29.522
- type: mrr_at_100
value: 30.644
- type: mrr_at_1000
value: 30.713
- type: mrr_at_3
value: 26.679000000000002
- type: mrr_at_5
value: 28.483000000000004
- type: ndcg_at_1
value: 20.522000000000002
- type: ndcg_at_10
value: 30.656
- type: ndcg_at_100
value: 36.864999999999995
- type: ndcg_at_1000
value: 39.675
- type: ndcg_at_3
value: 25.319000000000003
- type: ndcg_at_5
value: 27.992
- type: precision_at_1
value: 20.522000000000002
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 12.396
- type: precision_at_5
value: 9.328
- type: recall_at_1
value: 16.407
- type: recall_at_10
value: 43.164
- type: recall_at_100
value: 69.695
- type: recall_at_1000
value: 89.41900000000001
- type: recall_at_3
value: 28.634999999999998
- type: recall_at_5
value: 35.308
- type: map_at_1
value: 30.473
- type: map_at_10
value: 41.676
- type: map_at_100
value: 43.120999999999995
- type: map_at_1000
value: 43.230000000000004
- type: map_at_3
value: 38.306000000000004
- type: map_at_5
value: 40.355999999999995
- type: mrr_at_1
value: 37.536
- type: mrr_at_10
value: 47.643
- type: mrr_at_100
value: 48.508
- type: mrr_at_1000
value: 48.551
- type: mrr_at_3
value: 45.348
- type: mrr_at_5
value: 46.744
- type: ndcg_at_1
value: 37.536
- type: ndcg_at_10
value: 47.823
- type: ndcg_at_100
value: 53.395
- type: ndcg_at_1000
value: 55.271
- type: ndcg_at_3
value: 42.768
- type: ndcg_at_5
value: 45.373000000000005
- type: precision_at_1
value: 37.536
- type: precision_at_10
value: 8.681
- type: precision_at_100
value: 1.34
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.468
- type: precision_at_5
value: 14.495
- type: recall_at_1
value: 30.473
- type: recall_at_10
value: 60.092999999999996
- type: recall_at_100
value: 82.733
- type: recall_at_1000
value: 94.875
- type: recall_at_3
value: 45.734
- type: recall_at_5
value: 52.691
- type: map_at_1
value: 29.976000000000003
- type: map_at_10
value: 41.097
- type: map_at_100
value: 42.547000000000004
- type: map_at_1000
value: 42.659000000000006
- type: map_at_3
value: 37.251
- type: map_at_5
value: 39.493
- type: mrr_at_1
value: 37.557
- type: mrr_at_10
value: 46.605000000000004
- type: mrr_at_100
value: 47.487
- type: mrr_at_1000
value: 47.54
- type: mrr_at_3
value: 43.721
- type: mrr_at_5
value: 45.411
- type: ndcg_at_1
value: 37.557
- type: ndcg_at_10
value: 47.449000000000005
- type: ndcg_at_100
value: 53.052
- type: ndcg_at_1000
value: 55.010999999999996
- type: ndcg_at_3
value: 41.439
- type: ndcg_at_5
value: 44.292
- type: precision_at_1
value: 37.557
- type: precision_at_10
value: 8.847
- type: precision_at_100
value: 1.357
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 20.091
- type: precision_at_5
value: 14.384
- type: recall_at_1
value: 29.976000000000003
- type: recall_at_10
value: 60.99099999999999
- type: recall_at_100
value: 84.245
- type: recall_at_1000
value: 96.97200000000001
- type: recall_at_3
value: 43.794
- type: recall_at_5
value: 51.778999999999996
- type: map_at_1
value: 28.099166666666665
- type: map_at_10
value: 38.1365
- type: map_at_100
value: 39.44491666666667
- type: map_at_1000
value: 39.55858333333334
- type: map_at_3
value: 35.03641666666666
- type: map_at_5
value: 36.79833333333334
- type: mrr_at_1
value: 33.39966666666667
- type: mrr_at_10
value: 42.42583333333333
- type: mrr_at_100
value: 43.28575
- type: mrr_at_1000
value: 43.33741666666667
- type: mrr_at_3
value: 39.94975
- type: mrr_at_5
value: 41.41633333333334
- type: ndcg_at_1
value: 33.39966666666667
- type: ndcg_at_10
value: 43.81741666666667
- type: ndcg_at_100
value: 49.08166666666667
- type: ndcg_at_1000
value: 51.121166666666674
- type: ndcg_at_3
value: 38.73575
- type: ndcg_at_5
value: 41.18158333333333
- type: precision_at_1
value: 33.39966666666667
- type: precision_at_10
value: 7.738916666666667
- type: precision_at_100
value: 1.2265833333333331
- type: precision_at_1000
value: 0.15983333333333336
- type: precision_at_3
value: 17.967416666666665
- type: precision_at_5
value: 12.78675
- type: recall_at_1
value: 28.099166666666665
- type: recall_at_10
value: 56.27049999999999
- type: recall_at_100
value: 78.93291666666667
- type: recall_at_1000
value: 92.81608333333334
- type: recall_at_3
value: 42.09775
- type: recall_at_5
value: 48.42533333333334
- type: map_at_1
value: 23.663
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.426
- type: map_at_1000
value: 31.519000000000002
- type: map_at_3
value: 28.069
- type: map_at_5
value: 29.256999999999998
- type: mrr_at_1
value: 26.687
- type: mrr_at_10
value: 33.107
- type: mrr_at_100
value: 34.055
- type: mrr_at_1000
value: 34.117999999999995
- type: mrr_at_3
value: 31.058000000000003
- type: mrr_at_5
value: 32.14
- type: ndcg_at_1
value: 26.687
- type: ndcg_at_10
value: 34.615
- type: ndcg_at_100
value: 39.776
- type: ndcg_at_1000
value: 42.05
- type: ndcg_at_3
value: 30.322
- type: ndcg_at_5
value: 32.157000000000004
- type: precision_at_1
value: 26.687
- type: precision_at_10
value: 5.491
- type: precision_at_100
value: 0.877
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.139000000000001
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.663
- type: recall_at_10
value: 45.035
- type: recall_at_100
value: 68.554
- type: recall_at_1000
value: 85.077
- type: recall_at_3
value: 32.982
- type: recall_at_5
value: 37.688
- type: map_at_1
value: 17.403
- type: map_at_10
value: 25.197000000000003
- type: map_at_100
value: 26.355
- type: map_at_1000
value: 26.487
- type: map_at_3
value: 22.733
- type: map_at_5
value: 24.114
- type: mrr_at_1
value: 21.37
- type: mrr_at_10
value: 29.091
- type: mrr_at_100
value: 30.018
- type: mrr_at_1000
value: 30.096
- type: mrr_at_3
value: 26.887
- type: mrr_at_5
value: 28.157
- type: ndcg_at_1
value: 21.37
- type: ndcg_at_10
value: 30.026000000000003
- type: ndcg_at_100
value: 35.416
- type: ndcg_at_1000
value: 38.45
- type: ndcg_at_3
value: 25.764
- type: ndcg_at_5
value: 27.742
- type: precision_at_1
value: 21.37
- type: precision_at_10
value: 5.609
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 12.423
- type: precision_at_5
value: 9.009
- type: recall_at_1
value: 17.403
- type: recall_at_10
value: 40.573
- type: recall_at_100
value: 64.818
- type: recall_at_1000
value: 86.53699999999999
- type: recall_at_3
value: 28.493000000000002
- type: recall_at_5
value: 33.660000000000004
- type: map_at_1
value: 28.639
- type: map_at_10
value: 38.951
- type: map_at_100
value: 40.238
- type: map_at_1000
value: 40.327
- type: map_at_3
value: 35.842
- type: map_at_5
value: 37.617
- type: mrr_at_1
value: 33.769
- type: mrr_at_10
value: 43.088
- type: mrr_at_100
value: 44.03
- type: mrr_at_1000
value: 44.072
- type: mrr_at_3
value: 40.656
- type: mrr_at_5
value: 42.138999999999996
- type: ndcg_at_1
value: 33.769
- type: ndcg_at_10
value: 44.676
- type: ndcg_at_100
value: 50.416000000000004
- type: ndcg_at_1000
value: 52.227999999999994
- type: ndcg_at_3
value: 39.494
- type: ndcg_at_5
value: 42.013
- type: precision_at_1
value: 33.769
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.221
- type: precision_at_5
value: 12.966
- type: recall_at_1
value: 28.639
- type: recall_at_10
value: 57.687999999999995
- type: recall_at_100
value: 82.541
- type: recall_at_1000
value: 94.896
- type: recall_at_3
value: 43.651
- type: recall_at_5
value: 49.925999999999995
- type: map_at_1
value: 29.57
- type: map_at_10
value: 40.004
- type: map_at_100
value: 41.75
- type: map_at_1000
value: 41.97
- type: map_at_3
value: 36.788
- type: map_at_5
value: 38.671
- type: mrr_at_1
value: 35.375
- type: mrr_at_10
value: 45.121
- type: mrr_at_100
value: 45.994
- type: mrr_at_1000
value: 46.04
- type: mrr_at_3
value: 42.227
- type: mrr_at_5
value: 43.995
- type: ndcg_at_1
value: 35.375
- type: ndcg_at_10
value: 46.392
- type: ndcg_at_100
value: 52.196
- type: ndcg_at_1000
value: 54.274
- type: ndcg_at_3
value: 41.163
- type: ndcg_at_5
value: 43.813
- type: precision_at_1
value: 35.375
- type: precision_at_10
value: 8.676
- type: precision_at_100
value: 1.678
- type: precision_at_1000
value: 0.253
- type: precision_at_3
value: 19.104
- type: precision_at_5
value: 13.913
- type: recall_at_1
value: 29.57
- type: recall_at_10
value: 58.779
- type: recall_at_100
value: 83.337
- type: recall_at_1000
value: 95.979
- type: recall_at_3
value: 44.005
- type: recall_at_5
value: 50.975
- type: map_at_1
value: 20.832
- type: map_at_10
value: 29.733999999999998
- type: map_at_100
value: 30.727
- type: map_at_1000
value: 30.843999999999998
- type: map_at_3
value: 26.834999999999997
- type: map_at_5
value: 28.555999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 31.791999999999998
- type: mrr_at_100
value: 32.666000000000004
- type: mrr_at_1000
value: 32.751999999999995
- type: mrr_at_3
value: 29.144
- type: mrr_at_5
value: 30.622
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.744
- type: ndcg_at_1000
value: 42.407000000000004
- type: ndcg_at_3
value: 29.421000000000003
- type: ndcg_at_5
value: 32.211
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 5.675
- type: precision_at_100
value: 0.872
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 12.753999999999998
- type: precision_at_5
value: 9.353
- type: recall_at_1
value: 20.832
- type: recall_at_10
value: 48.795
- type: recall_at_100
value: 70.703
- type: recall_at_1000
value: 90.187
- type: recall_at_3
value: 34.455000000000005
- type: recall_at_5
value: 40.967
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.334
- type: map_at_10
value: 19.009999999999998
- type: map_at_100
value: 21.129
- type: map_at_1000
value: 21.328
- type: map_at_3
value: 15.152
- type: map_at_5
value: 17.084
- type: mrr_at_1
value: 23.453
- type: mrr_at_10
value: 36.099
- type: mrr_at_100
value: 37.069
- type: mrr_at_1000
value: 37.104
- type: mrr_at_3
value: 32.096000000000004
- type: mrr_at_5
value: 34.451
- type: ndcg_at_1
value: 23.453
- type: ndcg_at_10
value: 27.739000000000004
- type: ndcg_at_100
value: 35.836
- type: ndcg_at_1000
value: 39.242
- type: ndcg_at_3
value: 21.263
- type: ndcg_at_5
value: 23.677
- type: precision_at_1
value: 23.453
- type: precision_at_10
value: 9.199
- type: precision_at_100
value: 1.791
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 16.2
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 10.334
- type: recall_at_10
value: 35.177
- type: recall_at_100
value: 63.009
- type: recall_at_1000
value: 81.938
- type: recall_at_3
value: 19.914
- type: recall_at_5
value: 26.077
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.212
- type: map_at_10
value: 17.386
- type: map_at_100
value: 24.234
- type: map_at_1000
value: 25.724999999999998
- type: map_at_3
value: 12.727
- type: map_at_5
value: 14.785
- type: mrr_at_1
value: 59.25
- type: mrr_at_10
value: 68.687
- type: mrr_at_100
value: 69.133
- type: mrr_at_1000
value: 69.14099999999999
- type: mrr_at_3
value: 66.917
- type: mrr_at_5
value: 67.742
- type: ndcg_at_1
value: 48.625
- type: ndcg_at_10
value: 36.675999999999995
- type: ndcg_at_100
value: 41.543
- type: ndcg_at_1000
value: 49.241
- type: ndcg_at_3
value: 41.373
- type: ndcg_at_5
value: 38.707
- type: precision_at_1
value: 59.25
- type: precision_at_10
value: 28.525
- type: precision_at_100
value: 9.027000000000001
- type: precision_at_1000
value: 1.8339999999999999
- type: precision_at_3
value: 44.833
- type: precision_at_5
value: 37.35
- type: recall_at_1
value: 8.212
- type: recall_at_10
value: 23.188
- type: recall_at_100
value: 48.613
- type: recall_at_1000
value: 73.093
- type: recall_at_3
value: 14.419
- type: recall_at_5
value: 17.798
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.725
- type: f1
value: 46.50743309855908
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.086
- type: map_at_10
value: 66.914
- type: map_at_100
value: 67.321
- type: map_at_1000
value: 67.341
- type: map_at_3
value: 64.75800000000001
- type: map_at_5
value: 66.189
- type: mrr_at_1
value: 59.28600000000001
- type: mrr_at_10
value: 71.005
- type: mrr_at_100
value: 71.304
- type: mrr_at_1000
value: 71.313
- type: mrr_at_3
value: 69.037
- type: mrr_at_5
value: 70.35
- type: ndcg_at_1
value: 59.28600000000001
- type: ndcg_at_10
value: 72.695
- type: ndcg_at_100
value: 74.432
- type: ndcg_at_1000
value: 74.868
- type: ndcg_at_3
value: 68.72200000000001
- type: ndcg_at_5
value: 71.081
- type: precision_at_1
value: 59.28600000000001
- type: precision_at_10
value: 9.499
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 27.503
- type: precision_at_5
value: 17.854999999999997
- type: recall_at_1
value: 55.086
- type: recall_at_10
value: 86.453
- type: recall_at_100
value: 94.028
- type: recall_at_1000
value: 97.052
- type: recall_at_3
value: 75.821
- type: recall_at_5
value: 81.6
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.262999999999998
- type: map_at_10
value: 37.488
- type: map_at_100
value: 39.498
- type: map_at_1000
value: 39.687
- type: map_at_3
value: 32.529
- type: map_at_5
value: 35.455
- type: mrr_at_1
value: 44.907000000000004
- type: mrr_at_10
value: 53.239000000000004
- type: mrr_at_100
value: 54.086
- type: mrr_at_1000
value: 54.122
- type: mrr_at_3
value: 51.235
- type: mrr_at_5
value: 52.415
- type: ndcg_at_1
value: 44.907000000000004
- type: ndcg_at_10
value: 45.446
- type: ndcg_at_100
value: 52.429
- type: ndcg_at_1000
value: 55.169000000000004
- type: ndcg_at_3
value: 41.882000000000005
- type: ndcg_at_5
value: 43.178
- type: precision_at_1
value: 44.907000000000004
- type: precision_at_10
value: 12.931999999999999
- type: precision_at_100
value: 2.025
- type: precision_at_1000
value: 0.248
- type: precision_at_3
value: 28.652
- type: precision_at_5
value: 21.204
- type: recall_at_1
value: 22.262999999999998
- type: recall_at_10
value: 52.447
- type: recall_at_100
value: 78.045
- type: recall_at_1000
value: 94.419
- type: recall_at_3
value: 38.064
- type: recall_at_5
value: 44.769
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.519
- type: map_at_10
value: 45.831
- type: map_at_100
value: 46.815
- type: map_at_1000
value: 46.899
- type: map_at_3
value: 42.836
- type: map_at_5
value: 44.65
- type: mrr_at_1
value: 65.037
- type: mrr_at_10
value: 72.16
- type: mrr_at_100
value: 72.51100000000001
- type: mrr_at_1000
value: 72.53
- type: mrr_at_3
value: 70.682
- type: mrr_at_5
value: 71.54599999999999
- type: ndcg_at_1
value: 65.037
- type: ndcg_at_10
value: 55.17999999999999
- type: ndcg_at_100
value: 58.888
- type: ndcg_at_1000
value: 60.648
- type: ndcg_at_3
value: 50.501
- type: ndcg_at_5
value: 52.977
- type: precision_at_1
value: 65.037
- type: precision_at_10
value: 11.530999999999999
- type: precision_at_100
value: 1.4460000000000002
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 31.483
- type: precision_at_5
value: 20.845
- type: recall_at_1
value: 32.519
- type: recall_at_10
value: 57.657000000000004
- type: recall_at_100
value: 72.30199999999999
- type: recall_at_1000
value: 84.024
- type: recall_at_3
value: 47.225
- type: recall_at_5
value: 52.113
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.3168
- type: ap
value: 83.80165516037135
- type: f1
value: 88.29942471066407
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 20.724999999999998
- type: map_at_10
value: 32.736
- type: map_at_100
value: 33.938
- type: map_at_1000
value: 33.991
- type: map_at_3
value: 28.788000000000004
- type: map_at_5
value: 31.016
- type: mrr_at_1
value: 21.361
- type: mrr_at_10
value: 33.323
- type: mrr_at_100
value: 34.471000000000004
- type: mrr_at_1000
value: 34.518
- type: mrr_at_3
value: 29.453000000000003
- type: mrr_at_5
value: 31.629
- type: ndcg_at_1
value: 21.361
- type: ndcg_at_10
value: 39.649
- type: ndcg_at_100
value: 45.481
- type: ndcg_at_1000
value: 46.775
- type: ndcg_at_3
value: 31.594
- type: ndcg_at_5
value: 35.543
- type: precision_at_1
value: 21.361
- type: precision_at_10
value: 6.3740000000000006
- type: precision_at_100
value: 0.931
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.514999999999999
- type: precision_at_5
value: 10.100000000000001
- type: recall_at_1
value: 20.724999999999998
- type: recall_at_10
value: 61.034
- type: recall_at_100
value: 88.062
- type: recall_at_1000
value: 97.86399999999999
- type: recall_at_3
value: 39.072
- type: recall_at_5
value: 48.53
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.8919288645691
- type: f1
value: 93.57059586398059
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.97993616051072
- type: f1
value: 48.244319183606535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.90047074646941
- type: f1
value: 66.48999056063725
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.34566240753195
- type: f1
value: 73.54164154290658
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.21866934757011
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.000936217235534
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.68189362520352
- type: mrr
value: 32.69603637784303
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.078
- type: map_at_10
value: 12.671
- type: map_at_100
value: 16.291
- type: map_at_1000
value: 17.855999999999998
- type: map_at_3
value: 9.610000000000001
- type: map_at_5
value: 11.152
- type: mrr_at_1
value: 43.963
- type: mrr_at_10
value: 53.173
- type: mrr_at_100
value: 53.718999999999994
- type: mrr_at_1000
value: 53.756
- type: mrr_at_3
value: 50.980000000000004
- type: mrr_at_5
value: 52.42
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.086
- type: ndcg_at_100
value: 32.545
- type: ndcg_at_1000
value: 41.144999999999996
- type: ndcg_at_3
value: 39.434999999999995
- type: ndcg_at_5
value: 37.888
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.014999999999997
- type: precision_at_100
value: 8.594
- type: precision_at_1000
value: 2.169
- type: precision_at_3
value: 37.049
- type: precision_at_5
value: 33.065
- type: recall_at_1
value: 6.078
- type: recall_at_10
value: 16.17
- type: recall_at_100
value: 34.512
- type: recall_at_1000
value: 65.447
- type: recall_at_3
value: 10.706
- type: recall_at_5
value: 13.158
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.378000000000004
- type: map_at_10
value: 42.178
- type: map_at_100
value: 43.32
- type: map_at_1000
value: 43.358000000000004
- type: map_at_3
value: 37.474000000000004
- type: map_at_5
value: 40.333000000000006
- type: mrr_at_1
value: 30.823
- type: mrr_at_10
value: 44.626
- type: mrr_at_100
value: 45.494
- type: mrr_at_1000
value: 45.519
- type: mrr_at_3
value: 40.585
- type: mrr_at_5
value: 43.146
- type: ndcg_at_1
value: 30.794
- type: ndcg_at_10
value: 50.099000000000004
- type: ndcg_at_100
value: 54.900999999999996
- type: ndcg_at_1000
value: 55.69499999999999
- type: ndcg_at_3
value: 41.238
- type: ndcg_at_5
value: 46.081
- type: precision_at_1
value: 30.794
- type: precision_at_10
value: 8.549
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 18.926000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 27.378000000000004
- type: recall_at_10
value: 71.842
- type: recall_at_100
value: 92.565
- type: recall_at_1000
value: 98.402
- type: recall_at_3
value: 49.053999999999995
- type: recall_at_5
value: 60.207
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.557
- type: map_at_10
value: 84.729
- type: map_at_100
value: 85.369
- type: map_at_1000
value: 85.382
- type: map_at_3
value: 81.72
- type: map_at_5
value: 83.613
- type: mrr_at_1
value: 81.3
- type: mrr_at_10
value: 87.488
- type: mrr_at_100
value: 87.588
- type: mrr_at_1000
value: 87.589
- type: mrr_at_3
value: 86.53
- type: mrr_at_5
value: 87.18599999999999
- type: ndcg_at_1
value: 81.28999999999999
- type: ndcg_at_10
value: 88.442
- type: ndcg_at_100
value: 89.637
- type: ndcg_at_1000
value: 89.70700000000001
- type: ndcg_at_3
value: 85.55199999999999
- type: ndcg_at_5
value: 87.154
- type: precision_at_1
value: 81.28999999999999
- type: precision_at_10
value: 13.489999999999998
- type: precision_at_100
value: 1.54
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.708
- type: recall_at_1
value: 70.557
- type: recall_at_10
value: 95.645
- type: recall_at_100
value: 99.693
- type: recall_at_1000
value: 99.995
- type: recall_at_3
value: 87.359
- type: recall_at_5
value: 91.89699999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 63.65060114776209
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.63271250680617
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.263
- type: map_at_10
value: 10.801
- type: map_at_100
value: 12.888
- type: map_at_1000
value: 13.224
- type: map_at_3
value: 7.362
- type: map_at_5
value: 9.149000000000001
- type: mrr_at_1
value: 21
- type: mrr_at_10
value: 31.416
- type: mrr_at_100
value: 32.513
- type: mrr_at_1000
value: 32.58
- type: mrr_at_3
value: 28.116999999999997
- type: mrr_at_5
value: 29.976999999999997
- type: ndcg_at_1
value: 21
- type: ndcg_at_10
value: 18.551000000000002
- type: ndcg_at_100
value: 26.657999999999998
- type: ndcg_at_1000
value: 32.485
- type: ndcg_at_3
value: 16.834
- type: ndcg_at_5
value: 15.204999999999998
- type: precision_at_1
value: 21
- type: precision_at_10
value: 9.84
- type: precision_at_100
value: 2.16
- type: precision_at_1000
value: 0.35500000000000004
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 13.62
- type: recall_at_1
value: 4.263
- type: recall_at_10
value: 19.922
- type: recall_at_100
value: 43.808
- type: recall_at_1000
value: 72.14500000000001
- type: recall_at_3
value: 9.493
- type: recall_at_5
value: 13.767999999999999
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 81.27446313317233
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 76.27963301217527
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 88.18495048450949
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 81.91982338692046
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 89.00896818385291
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 85.48814644586132
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 90.30116926966582
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 67.74132963032342
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 86.87741355780479
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.0019012295875
- type: mrr
value: 94.70267024188593
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 50.05
- type: map_at_10
value: 59.36
- type: map_at_100
value: 59.967999999999996
- type: map_at_1000
value: 60.023
- type: map_at_3
value: 56.515
- type: map_at_5
value: 58.272999999999996
- type: mrr_at_1
value: 53
- type: mrr_at_10
value: 61.102000000000004
- type: mrr_at_100
value: 61.476
- type: mrr_at_1000
value: 61.523
- type: mrr_at_3
value: 58.778
- type: mrr_at_5
value: 60.128
- type: ndcg_at_1
value: 53
- type: ndcg_at_10
value: 64.43100000000001
- type: ndcg_at_100
value: 66.73599999999999
- type: ndcg_at_1000
value: 68.027
- type: ndcg_at_3
value: 59.279
- type: ndcg_at_5
value: 61.888
- type: precision_at_1
value: 53
- type: precision_at_10
value: 8.767
- type: precision_at_100
value: 1.01
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 23.444000000000003
- type: precision_at_5
value: 15.667
- type: recall_at_1
value: 50.05
- type: recall_at_10
value: 78.511
- type: recall_at_100
value: 88.5
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 64.117
- type: recall_at_5
value: 70.867
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72178217821782
- type: cos_sim_ap
value: 93.0728601593541
- type: cos_sim_f1
value: 85.6727976766699
- type: cos_sim_precision
value: 83.02063789868667
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.72178217821782
- type: dot_ap
value: 93.07287396168348
- type: dot_f1
value: 85.6727976766699
- type: dot_precision
value: 83.02063789868667
- type: dot_recall
value: 88.5
- type: euclidean_accuracy
value: 99.72178217821782
- type: euclidean_ap
value: 93.07285657982895
- type: euclidean_f1
value: 85.6727976766699
- type: euclidean_precision
value: 83.02063789868667
- type: euclidean_recall
value: 88.5
- type: manhattan_accuracy
value: 99.72475247524753
- type: manhattan_ap
value: 93.02792973059809
- type: manhattan_f1
value: 85.7727737973388
- type: manhattan_precision
value: 87.84067085953879
- type: manhattan_recall
value: 83.8
- type: max_accuracy
value: 99.72475247524753
- type: max_ap
value: 93.07287396168348
- type: max_f1
value: 85.7727737973388
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.77583615550819
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.151636938606956
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.16607939471187
- type: mrr
value: 52.95172046091163
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.314646669495666
- type: cos_sim_spearman
value: 31.83562491439455
- type: dot_pearson
value: 31.314590842874157
- type: dot_spearman
value: 31.83363065810437
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.3010000000000002
- type: map_at_100
value: 7.2139999999999995
- type: map_at_1000
value: 20.179
- type: map_at_3
value: 0.528
- type: map_at_5
value: 0.8019999999999999
- type: mrr_at_1
value: 72
- type: mrr_at_10
value: 83.39999999999999
- type: mrr_at_100
value: 83.39999999999999
- type: mrr_at_1000
value: 83.39999999999999
- type: mrr_at_3
value: 81.667
- type: mrr_at_5
value: 83.06700000000001
- type: ndcg_at_1
value: 66
- type: ndcg_at_10
value: 58.059000000000005
- type: ndcg_at_100
value: 44.316
- type: ndcg_at_1000
value: 43.147000000000006
- type: ndcg_at_3
value: 63.815999999999995
- type: ndcg_at_5
value: 63.005
- type: precision_at_1
value: 72
- type: precision_at_10
value: 61.4
- type: precision_at_100
value: 45.62
- type: precision_at_1000
value: 19.866
- type: precision_at_3
value: 70
- type: precision_at_5
value: 68.8
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.517
- type: recall_at_100
value: 10.587
- type: recall_at_1000
value: 41.233
- type: recall_at_3
value: 0.573
- type: recall_at_5
value: 0.907
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.894
- type: map_at_10
value: 8.488999999999999
- type: map_at_100
value: 14.445
- type: map_at_1000
value: 16.078
- type: map_at_3
value: 4.589
- type: map_at_5
value: 6.019
- type: mrr_at_1
value: 22.448999999999998
- type: mrr_at_10
value: 39.82
- type: mrr_at_100
value: 40.752
- type: mrr_at_1000
value: 40.771
- type: mrr_at_3
value: 34.354
- type: mrr_at_5
value: 37.721
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 21.563
- type: ndcg_at_100
value: 33.857
- type: ndcg_at_1000
value: 46.199
- type: ndcg_at_3
value: 22.296
- type: ndcg_at_5
value: 21.770999999999997
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.142999999999999
- type: precision_at_1000
value: 1.541
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 22.448999999999998
- type: recall_at_1
value: 1.894
- type: recall_at_10
value: 14.931
- type: recall_at_100
value: 45.524
- type: recall_at_1000
value: 83.243
- type: recall_at_3
value: 5.712
- type: recall_at_5
value: 8.386000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.049
- type: ap
value: 13.85116971310922
- type: f1
value: 54.37504302487686
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.1312959818902
- type: f1
value: 64.11413877009383
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 54.13103431861502
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.327889372355
- type: cos_sim_ap
value: 77.42059895975699
- type: cos_sim_f1
value: 71.02706903250873
- type: cos_sim_precision
value: 69.75324344950394
- type: cos_sim_recall
value: 72.34828496042216
- type: dot_accuracy
value: 87.327889372355
- type: dot_ap
value: 77.4209479346677
- type: dot_f1
value: 71.02706903250873
- type: dot_precision
value: 69.75324344950394
- type: dot_recall
value: 72.34828496042216
- type: euclidean_accuracy
value: 87.327889372355
- type: euclidean_ap
value: 77.42096495861037
- type: euclidean_f1
value: 71.02706903250873
- type: euclidean_precision
value: 69.75324344950394
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.31000774870358
- type: manhattan_ap
value: 77.38930750711619
- type: manhattan_f1
value: 71.07935314027831
- type: manhattan_precision
value: 67.70957726295677
- type: manhattan_recall
value: 74.80211081794195
- type: max_accuracy
value: 87.327889372355
- type: max_ap
value: 77.42096495861037
- type: max_f1
value: 71.07935314027831
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.58939729110878
- type: cos_sim_ap
value: 87.17594155025475
- type: cos_sim_f1
value: 79.21146953405018
- type: cos_sim_precision
value: 76.8918527109307
- type: cos_sim_recall
value: 81.67539267015707
- type: dot_accuracy
value: 89.58939729110878
- type: dot_ap
value: 87.17593963273593
- type: dot_f1
value: 79.21146953405018
- type: dot_precision
value: 76.8918527109307
- type: dot_recall
value: 81.67539267015707
- type: euclidean_accuracy
value: 89.58939729110878
- type: euclidean_ap
value: 87.17592466925834
- type: euclidean_f1
value: 79.21146953405018
- type: euclidean_precision
value: 76.8918527109307
- type: euclidean_recall
value: 81.67539267015707
- type: manhattan_accuracy
value: 89.62626615438352
- type: manhattan_ap
value: 87.16589873161546
- type: manhattan_f1
value: 79.25143598295348
- type: manhattan_precision
value: 76.39494177323712
- type: manhattan_recall
value: 82.32984293193716
- type: max_accuracy
value: 89.62626615438352
- type: max_ap
value: 87.17594155025475
- type: max_f1
value: 79.25143598295348
---
| [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-47339454 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-47339454",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716,952,722,000 | 2024-05-29T03:19:42 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-47339454
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-47339454',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
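The `cos_sim` call returns the cosine similarity between the two embeddings; values closer to 1 indicate more semantically similar texts, which is the usual way to compare sentences with this kind of embedding model.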
| [
"SCIFACT"
] | Non_BioNLP |
davidadamczyk/setfit-model-4 | davidadamczyk | text-classification | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] | 1,728,827,363,000 | 2024-10-13T13:49:39 | 6 | 0 | ---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Having previously lived in D.C., Rochester and Detroit and having made regular
trips on the thruways and turnpikes in-between, I can truly say that the rest
stops along the New York Thruway are the least desirable for food offerings. Even
the NJ Turnpike offers a much better selection, with Ohio striking the best balance
overall. Delaware has the largest rest stop, which offers a great selection but
at the cost of having to negotiate a mall-size parking lot. Although I don''t
begrudge those who like McDonald''s, I can honestly say I''ve never eaten at a
rest stop or airport McDonalds, even when there were no other options. There''s
nothing wrong with wanting better food, so long as there are options available
at reasonable prices.If there''s one thing for which I can give credit to the
New York Thruway rest stops, it''s in forcing us to seek out roadside alternatives
in the many communities along the way. As a result, my wife has an extensive collection
of books on diners that has morphed into somewhat of an obsession over the years.
Of course with smartphones and apps such as Yelp, finding exceptional food along
the way has never been easier. Put another way, I see the thruway rest stop as
a place for an early morning snack or cup of coffee when we''re desperate. Unfortunately,
the options are at their worst at 2 am, no matter where one stops.
'
- text: 'Now that Iran is actively funneling missiles, warheads and drones to Russia
for use in Ukraine, and Russia is funneling technical expertise and supplies to
Iran to make more weapons, things are quickly heating up and the clock is approaching
midnight as Iran get closer and closer to weaponizing a nuclear MIRV ICBM.The
no so cold war between Iran and Israel, Egypt, Saudi Arabia and the UAE is about
to get very hot and Israel''s efforts to avoid aligning against Russia in Syrian
airspace (thank you President Obama) is about to fail as the Russo-Nato proxy
war in Ukraine spills into the Middle East and a heavily armed and nuclear Israel
gets drawn into a very open conflict with Iran and Russia. The bombing of an
Iranian plant inside Iran is major escalation and I doubt that the CIA and DIA
were blindsided by the IDF operation as such a strike was likely meant to cripple
Iranian efforts to resupply Russia as much as Iranian efforts to resupply Hizbollah
in Lebanon. With the Turks waging war in Syria, the air space over Syria is clearly
going to become very crowded and very dangerous very quickly as Russia is stumbling
into a second war with Israel through its Iranian proxy and Israel unlike Ukraine
can take out both Russian and Iranian offensive capabilities. We just witnessed
the opening salvo of a hot war which is why the DIA, CIA have been in Tel Aviv
and Cairo recently - it is not really about the Palestinian territories.
'
- text: 'It''s the year of our Lord, 2023; it''s hard to believe that we are having
this conversation about the urgent necessity of ammo and lethal weapons. WWI,
WWII, the Korean War, Gulf Wars I & II, Afghanistan, ISIS, etc., have come and
gone. This does not include the multitude of conflicts in Africa, Georgia, and
other hot spots. Mankind has not changed a bit. We are still driven by fear,
greed, and the curse of the ego and its lust for power. Another article in today''s
edition discusses the Doomsday Clock and its relentless ticking toward oblivion. It''s
just a matter of time -and Boom!
'
- text: 'i''d go further than the correct interpretation that putin''s "cease fire"
was nothing more than "propaganda."i suggest that the russian attack on kramatorsk
on january 7, which russia falsely claimed killed 600 ukrainian soldiers, reveals
the expectation that a cease fire would gather ukrainians in a rest area where
they could be killed en masse. the headline was preplanned before the event.i
point readers to the Institute for the Study of War (ISW) as an excellent daily
summary of open source information by highly skilled military analysts. they point
out that putin is using a "grievance-revenge" framing of russian military activities
(e.g., kramatorsk was revenge for the grievance of russians killed in makiivka).
the ISW points out that this has only worsened the antagonism toward the kremlin
and military from pro-invasion russian commentators, who ask why any "grievance
event" was allowed to occur in the first place.
'
- text: 'I cannot entirely agree with this. If there''s a disconnect between what''s
being taught, and what the student really wants to learn, that can be a problem.
I, for example, learned a _LOT_ about computers, back in ''84 -- and a fair bit
of other stuff, too. (I speak what I''ll term "conversational" Spanish; I can''t
claim to be fluent, but I can absolutely carry on modest conversations and express
myself.)But the teachers in my core subjects were uninspired or flatly failed
me (e.g., the CompSci prof who lost my test, and gave me a zero; that really took
the wind out of my sails, considering I thought I nailed it). So I was having
far more fun at 11:00 p.m. in the computer lab than I was doing school work. Bombed
out of college, but I''ve now worked at four Fortune 500 companies, and am currently
a senior cloud admin. Students _do_ need to have a desire to learn, yes, but
teachers need to be equipped properly to teach them, too.
'
inference: true
model-index:
- name: SetFit with sentence-transformers/all-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9
name: Accuracy
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
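The two stages above can be reproduced with the `setfit` library. The snippet below is a minimal, illustrative sketch only: the example texts, labels, and hyperparameters are assumptions for demonstration, not the data or settings used to train this model.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the same Sentence Transformer body used by this card; a fresh
# LogisticRegression head is created automatically.
model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

# Tiny illustrative few-shot dataset; the real training data is not shown here.
train_dataset = Dataset.from_dict({
    "text": [
        "Comment discussing nuclear escalation risks.",
        "Another comment about the Doomsday Clock.",
        "A comment about highway rest stop food.",
        "A comment about a student's college experience.",
    ],
    "label": ["yes", "yes", "no", "no"],
})

# Stage 1: contrastive fine-tuning of the embedding body.
# Stage 2: fitting the classification head on the resulting embeddings.
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

# Inference: predict "yes"/"no" labels for new texts.
print(model.predict(["a new comment to classify"]))
```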
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| yes | <ul><li>'TIME Magazine prediction for 2023 (3Jan2023)"A cornered Russia will turn from global player into the world’s most dangerous rogue state, posing a serious and pervasive danger to Europe, the U.S., and beyond. Bogged down in Ukraine, with little to lose from further isolation and Western retaliation, and facing intense domestic pressure to show strength, Russia will turn to asymmetric warfare against the West to inflict damage through a thousand \'paper cuts\' rather than by overt aggression that depends on military and economic power that Russia no longer has.Putin’s nuclear saber-rattling will escalate. Kremlin-affiliated hackers will ramp up increasingly sophisticated cyberattacks on Western firms, governments, and infrastructure. Russia will intensify its offensive against Western elections by systematically supporting and funding disinformation and extremism. Attacks on Ukrainian infrastructure will continue.In short, Rogue Russia is a threat to global security, Western political systems, the cybersphere, and food security. Not to mention every Ukrainian civilian."\n'</li><li>"Bulletin of the Atomic Scientists advanced the Doomsday Clock, now to 90 seconds due to increasing nuclear risk.The rulers are putting humans in peril, an unconscionable and unethical danger since we haven't consented to such risk.In view of the fact that, over millennia, the rulers have killed hundreds of millions of innocent people, we can question their claimed legitimacy, and reject their bogus claim.\n"</li><li>'This article explains the bad political rusults although rulers might be acting rationally within their ideological frameworks.It is based on plausible speculation of Biden and Putin\'s ideologies, yet other plausible facts could be animating the escalations. For instance, some describe \'getting ukrained\' as "what happens to you if you ally with the U.S. government," and Joe Biden might be escalating to avoid such observations.Notice that these types of explanations do not rely on free will, but that rulers are prisoner to the constraints and incentives facing them, even if this ends with humanity being nuked again.Bulletin of Atomic Scientists advancing the Doomsday Clock is largely in line with rulers vs humanity framework, but as Douthat explains, this is different than the logic of the rulers.Another view, that of Prof. Mearshimer\'s presents a pessimistic view of this Ukraine War, while being remarkably prescient providing yet another framework to understand what\'s likely to happen; let\'s hope that he\'s wrong, althought lacking evidence for this optimism.\n'</li></ul> |
| no | <ul><li>"M Martínez - Doubtful. The US has been conducting virtually Perpetual War (mostly against smaller, weaker, brown-skinned nations) since day one and that hasn't dulled the Chickenhawk politicians (see: Bush the Lesser, George) from happily pushing us into the next one.Starting wars that are fought by Other Mother's Children and are profitable for the war-mongers will never cease.\n"</li><li>"I know it is easy to blame America always, but we are largely blameless. We opened trade with China and this allowed China to industrialize and build its economy. We in the west believe in Free markets and free people. Chinese state adopted a version of capitalism but instead of liberalizing like South Korea and Taiwan decided to become more insular. They restricted access to western products for their citizens. Movies, TV shows had to be censored. American social media companies cannot do business in China. Chinese citizens are not masters of their own destiny as the state dictates every aspect of their lives. Many of us in the west enjoy the benefits of western liberalism, namely - Free markets, Rule of law ( including contract enforcement) and individual rights. In the cold war era, we had to actively defend these values from Soviets. Now, we must brace ourselves to defend them from China. Liberal order will prevail because once people know the values of western liberal order, like Hongkongers, Taiwanese etc they will defend it. We in US, must help them, become the arsenal of democracy, supply planes, ships, munitions to Taiwan to defend themselves. Help Hong Kong citizens by giving the persecuted asylum in the west. We are not responsible for confrontation with China, Chinese state's disregard for Taiwanese and Hongkong citizens aspirations is responsible for this.\n"</li><li>'We probably have male, transient cougars moving through the area more frequently than wildlife experts and state officials document. My neighbors woke to a partially eaten deer carcass in their backyard, but heard no coyotes the night before. We hadn\'t heard this story yet, when a week later, my husband had a very large animal run in front of his car. It had a very long tail, short hair of all tan color and bounded as tall as the hood of his sedan. I posted this on a local wildlife FB page, and a man replied his daughter saw it while walking one their 2 dogs, and reported it was as big as their mastiff. A week later, my neighbor was walking her dog at 7 am, and saw it in a neighboring yard, at the top of a hill, "sitting like a sphinx" under a large blue juniper bush. My neighbor clearly saw a broad feline face and large white torso. Several months later, I heard a jogger in another part of my town also saw it early in the morning, and and went to FB posting a stock picture of a cougar with the comment, \'\'This is what I saw." An email sent to CTDEEP with all this information wasn\'t taken seriously, with their reply stating reports are usually confusing other animals. It\'s hard to know what CTDEEP might think we are confused about, since coyote, fox, fisher, black bear and deer have all been sighted in our yard or near us, frequently.\n'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("davidadamczyk/setfit-model-4")
# Run inference
preds = model("It's the year of our Lord, 2023; it's hard to believe that we are having this conversation about the urgent necessity of ammo and lethal weapons. WWI, WWII, the Korean War, Gulf Wars I & II, Afghanistan, ISIS, etc., have come and gone. This does not include the multitude of conflicts in Africa, Georgia, and other hot spots. Mankind has not changed a bit. We are still driven by fear, greed, and the curse of the ego and its lust for power. Another article in today's edition discusses the Doomsday Clock and its relentless ticking toward oblivion. It's just a matter of time -and Boom!
")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 18 | 133.075 | 255 |
| Label | Training Sample Count |
|:------|:----------------------|
| no | 18 |
| yes | 22 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 120
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
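The hyperparameters listed above mirror the arguments of SetFit's `TrainingArguments`. A minimal training sketch using these values is shown below; the base Sentence Transformer checkpoint and the two-example dataset are illustrative assumptions, not the actual training setup.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative toy dataset (the real training data is not reproduced here)
train_dataset = Dataset.from_dict({
    "text": [
        "Commentary warning about nuclear escalation and the Doomsday Clock.",
        "A local anecdote about wildlife sightings in the neighborhood.",
    ],
    "label": ["yes", "no"],
})

# Assumed base checkpoint; the actual Sentence Transformer body of this card may differ
model = SetFitModel.from_pretrained(
    "sentence-transformers/all-mpnet-base-v2", labels=["no", "yes"]
)

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=120,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```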
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0017 | 1 | 0.4133 | - |
| 0.0833 | 50 | 0.188 | - |
| 0.1667 | 100 | 0.0071 | - |
| 0.25 | 150 | 0.0002 | - |
| 0.3333 | 200 | 0.0001 | - |
| 0.4167 | 250 | 0.0001 | - |
| 0.5 | 300 | 0.0001 | - |
| 0.5833 | 350 | 0.0001 | - |
| 0.6667 | 400 | 0.0001 | - |
| 0.75 | 450 | 0.0001 | - |
| 0.8333 | 500 | 0.0001 | - |
| 0.9167 | 550 | 0.0001 | - |
| 1.0 | 600 | 0.0001 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.1.0
- Sentence Transformers: 3.0.1
- Transformers: 4.45.2
- PyTorch: 2.4.0+cu124
- Datasets: 2.21.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"BEAR"
] | Non_BioNLP |
baconnier/Gaston-Llama-3-8B | baconnier | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,714,817,395,000 | 2024-05-08T13:33:08 | 19 | 3 | ---
{}
---
# Tired of incomprehensible administrative jargon?
Gaston is here to help!

💡 This AI was designed to rephrase administrative communications and documents in clear, simple language.
📝 Thanks to Gaston, obscure letters and nebulous procedures are a thing of the past. Everything becomes plain and accessible to everyone.
😊 Gaston is a POC (Proof of Concept) whose mission is to make public administration more transparent and accessible.
🙌 Its secret? The ability to analyze and translate jargon into terms anyone can understand.
💬 With Gaston, administrative procedures finally become child's play!
This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE)
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- orpo
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
---
# Uploaded model
- **Developed by:** baconnier
- **License:** apache-2.0
- **Finetuned from model :** NousResearch/Hermes-2-Pro-Llama-3-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
This model was trained with ORPO, using the ChatML prompt template format.
```
<|im_start|>user
Qui est tu ?
<|im_end|>
<|im_start|>assistant
```
# Example with local TGI:
See the snippet below for usage with local inference:
```python
#Example: reuse your existing OpenAI setup
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8080/v1", api_key="TGI")
completion = client.chat.completions.create(
model="baconnier/Gaston-Llama-3-8B",
messages=[
{"role": "system", "content": "Reponds comme Gaston, un fonctionnaire français qui s'exprime uniquement en français avec des termes tres simples et surtout sans jamais utiliser de jargon administratif."},
{"role": "user", "content": "l’action en nullité se prescrit par 5 ans ou est prescrite en 5 ans"},
{"role": "assistant", "content": ""}
],
max_tokens=400,
temperature=0.7,
)
print(completion.choices[0].message.content)
```
Output:
```
Le délai de prescription pour l'action en nullité est généralement de cinq ans à compter du jour où
la partie qui peut agir a eu connaissance des faits, selon le Code civil français.
```
# Example with Transformers and pipeline:
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "baconnier/Gaston-Llama-3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda",
)
messages=[
{"role": "system", "content": "Reponds comme Gaston, un fonctionnaire français qui s'exprime uniquement en français
avec des termes tres simples et surtout sans jamais utiliser de jargon administratif."},
{"role": "user", "content": "Vous voudrez bien vous munir de toutes les pièces justificatives afférentes à vos déclarations de revenus
et de patrimoine des 3 derniers exercices, nonobstant les éléments déjà produits.
À défaut de réponse de votre part ou en cas de non comparution aux date et heure susvisées,
la procédure de contrôle se poursuivra et nous serons fondés à retenir les éléments en notre possession
en vue de l'établissement des impositions supplémentaires et pénalités subséquentes."},
{"role": "assistant", "content": ""}
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = pipeline(
prompt,
max_new_tokens=256,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
Question:
```
Vous voudrez bien vous munir de toutes les pièces justificatives afférentes à vos déclarations de revenus
et de patrimoine des 3 derniers exercices, nonobstant les éléments déjà produits.
À défaut de réponse de votre part ou en cas de non comparution aux date et heure susvisées,
la procédure de contrôle se poursuivra et nous serons fondés à retenir les éléments en notre possession
en vue de l'établissement des impositions supplémentaires et pénalités subséquentes.
```
Response:
```
Je vous invite à fournir toutes les pièces justificatives concernant vos déclarations de revenus
et de patrimoine des trois dernières années, malgré les éléments déjà fournis.
En cas de non-respect de ce délai ou de manque de réponse,
la procédure de vérification continuera, et nous utiliserons les informations en notre possession
pour établir les impôts supplémentaires et les pénalités associées.
```
Question:
```
Nous accusons réception de votre demande de renouvellement de passeport en date du 15 mars 2023.
Après instruction de votre dossier, il appert que les pièces produites s'avèrent insuffisantes
eu égard aux dispositions de l'article 5-1 du décret n°2005-1726 du 30 décembre 2005.
Nonobstant les justificatifs déjà fournis, il vous appartient de nous transmettre
dans les meilleurs délais les documents complémentaires listés ci-après :
- Copie d'un justificatif de domicile de moins de 6 mois
- Timbre fiscal d'un montant de 86€
À défaut de production des pièces susvisées dans un délai de 30 jours à compter de la présente,
votre demande sera classée sans suite.
Nous vous prions d'agréer, Monsieur, l'expression de nos salutations distinguées.
```
Response:
```
Nous recevons votre requête pour renouveler votre passeport datant du 15 mars 2023.
Suite à l'examen de votre dossier, il a été déterminé que les documents fournis
ne respectent pas les exigences mentionnées dans la section 5-1 du décret n°2005-1726 du 30 décembre 2005.
Malgré les pièces déjà fournies, vous devez nous envoyer rapidement les éléments suivants :
- Une copie d'un justificatif de domicile datant de moins de 6 mois.
- Un timbre fiscal d'une valeur de 86 €.
Si ces documents ne sont pas fournis dans un délai de 30 jours à compter de cette lettre, votre demande sera close.
Nous vous prions d'accepter, Monsieur, l'expression de nos salutations respectueuses.
```
# Example with Transformers:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("baconnier/Gaston-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained("baconnier/Gaston-Llama-3-8B")
prompt = "Nonobstant les justificatifs déjà fournis"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=200)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
| [
"CAS"
] | Non_BioNLP |
croissantllm/base_75k | croissantllm | text2text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"code",
"text-generation-inference",
"art",
"text2text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705,586,370,000 | 2024-02-01T15:56:43 | 8 | 0 | ---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
license: mit
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (75k steps)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 75k steps (1.18T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
Coming soon
```
## Usage
This model is a base model, that is, it is not finetuned for Chat function and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/base_75k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.
He is heading to the market. -> Il va au marché.
We are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
# remove bos token
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```
| [
"CRAFT"
] | Non_BioNLP |
TensorStack/AbsoluteReality_v181-onnx | TensorStack | null | [
"onnx",
"region:us"
] | 1,726,099,888,000 | 2024-09-12T00:17:11 | 0 | 2 | ---
{}
---
# AbsoluteReality v1.8.1 - Onnx Olive DirectML Optimized
## Original Model
https://civitai.com/models/81458/absolutereality?modelVersionId=132760
## C# Inference Demo
https://github.com/TensorStack-AI/OnnxStack
```csharp
// Create Pipeline
var pipeline = StableDiffusionPipeline.CreatePipeline("D:\\Models\\AbsoluteReality_v181-onnx");
// Scheduler options (assumed defaults here; adjust inference steps, guidance scale, etc. as needed)
var schedulerOptions = new SchedulerOptions();
// Prompt
var promptOptions = new PromptOptions
{
Prompt = "Craft an image of a gallant prince, with a charming smile and a sword at his side, ready to embark on a quest."
};
// Run pipeline
var result = await pipeline.GenerateImageAsync(promptOptions, schedulerOptions);
// Save Image Result
await result.SaveAsync("Result.png");
```
## Inference Result
 | [
"CRAFT"
] | Non_BioNLP |
NghiemAbe/SeaLLM-v2.5-Legal-v4 | NghiemAbe | text-generation | [
"transformers",
"pytorch",
"safetensors",
"gemma",
"text-generation",
"multilingual",
"sea",
"conversational",
"en",
"zh",
"vi",
"id",
"th",
"ms",
"km",
"lo",
"my",
"tl",
"arxiv:2312.00738",
"arxiv:2306.05179",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,718,113,188,000 | 2024-07-18T14:19:19 | 9 | 0 | ---
language:
- en
- zh
- vi
- id
- th
- ms
- km
- lo
- my
- tl
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
tags:
- multilingual
- sea
---
<p align="center">
<img src="seal_logo.png" width="200" />
</p>
# *SeaLLM-7B-v2.5* - Large Language Models for Southeast Asia
<p align="center">
<a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5" target="_blank" rel="noopener"> 🤗 DEMO</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a>
<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a>
</p>
🔥<span style="color: #ff3860">[HOT]</span> SeaLLMs project now has a dedicated website - [damo-nlp-sg.github.io/SeaLLMs](https://damo-nlp-sg.github.io/SeaLLMs/)
We introduce [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it delivers stronger performance across diverse multilingual tasks, from world knowledge and math reasoning to instruction following.
### Highlights
* [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5) outperforms GPT-3.5 and achieves 7B SOTA on most multilingual knowledge benchmarks for SEA languages (MMLU, M3Exam & VMLU).
* It achieves 79.0 and 34.9 on GSM8K and MATH, surpassing GPT-3.5 in MATH.
### Release and DEMO
- DEMO:
- [SeaLLMs/SeaLLM-7B-v2.5](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5).
- [SeaLLMs/SeaLLM-7B | SeaLMMM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B) - Experimental multimodal SeaLLM.
- Technical report: [Arxiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
- Model weights:
- [SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5).
- [SeaLLM-7B-v2.5-GGUF](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF).
- Run locally:
- [LM-studio](https://lmstudio.ai/):
- [SeaLLM-7B-v2.5-q4_0-chatml](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5-chatml.Q4_K_M.gguf) with ChatML template (`<eos>` token changed to `<|im_end|>`)
- [SeaLLM-7B-v2.5-q4_0](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-GGUF/blob/main/seallm-7b-v2.5.Q4_K_M.gguf) - must use SeaLLM-7B-v2.5 chat format.
- [MLX for Apple Silicon](https://github.com/ml-explore/mlx): [SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5-mlx-quantized)
- Previous models:
- [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2)
- [SeaLLM-7B-v1](https://huggingface.co/SeaLLMs/SeaLLM-7B-v1)
<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/edit/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>
> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.
> The logo was generated by DALL-E 3.
### What's new since SeaLLM-7B-v2?
* SeaLLM-7B-v2.5 was built on top of Gemma-7b, and underwent large scale SFT and carefully designed alignment.
## Evaluation
### Multilingual World Knowledge
We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi.
| Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e
|-----| ----- | --- | -- | ----- | ---- | --- | --- | --- |
| GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41
| Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27
| Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25
| SailorLM | Multi | 52.72 | 59.76 | 67.74 | 50.14 | --- | 39.53 | 37.73
| SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52
| SeaLLM-7B-v2.5 | Multi | 64.05 | 76.87 | 62.54 | 63.11 | 53.30 | 48.64 | 46.86
### Zero-shot CoT Multilingual Math Reasoning
<!--
[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves with **78.5** score on the GSM8K with zero-shot CoT reasoning, making it the **state of the art** in the realm of 7B models. It also outperforms GPT-3.5 in the same GSM8K benchmark as translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, with **28.4** vs 18.1 scores.

-->
| Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1
| Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6.0
| Vistral-7b-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | |
| Qwen1.5-7B-chat | 56.8 | 15.3 | 40.0 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 | 4.7
| SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4
| SeaLLM-7B-v2.5 | 78.5 | 34.9 | 51.3 | 22.1 | 72.3 | 30.2 | 71.5 | 30.1 | 62.0 | 28.4
Baselines were evaluated using their respective chat-template and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)).
#### Zero-shot MGSM
[SeaLLM-7B-v2.5](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Thai.
| Model | MGSM-Zh | MGSM-Th
|-----| ----- | ---
| ChatGPT (reported) | 61.2 | 47.2
| Qwen-14B-chat | 59.6 | 28
| SeaLLM-7B-v2 | **64.8** | 62.4
| SeaLLM-7B-v2.5 | 58.0 | **64.8**
### Sea-Bench

### Usage
**IMPORTANT NOTICE for using the model**
* `<bos>` must be at the start of the prompt. If your code's tokenizer does not prepend `<bos>` by default, you MUST prepend `<bos>` to the prompt yourself, otherwise it will not work!
* Repetition penalty (e.g. in llama.cpp, ollama, LM-studio) must be set to **1**, otherwise it will lead to degeneration!
#### Instruction format
```python
# ! WARNING, if your code's tokenizer does not prepend <bos> by default,
# You MUST prepend <bos> into the prompt yourself, otherwise, it would not work!
prompt = """<|im_start|>system
You are a helpful assistant.<eos>
<|im_start|>user
Hello world<eos>
<|im_start|>assistant
Hi there, how can I help?<eos>"""
# <|im_start|> is not a special token.
# Transformers chat_template should be consistent with vLLM format below.
# ! ENSURE 1 and only 1 bos `<bos>` at the beginning of sequence
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))
"""
```
#### Using transformers's chat_template
Install the latest transformers (>4.40)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
# use bfloat16 to ensure the best performance.
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2.5")
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello world"},
{"role": "assistant", "content": "Hi there, how can I help you today?"},
{"role": "user", "content": "Explain general relativity in details."}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))
model_inputs = encodeds.to(device)
model.to(device)
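# Note: leave repetition_penalty at its default (1.0); see the usage notice above.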
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
#### Using vLLM
```python
from vllm import LLM, SamplingParams
TURN_TEMPLATE = "<|im_start|>{role}\n{content}<eos>\n"
TURN_PREFIX = "<|im_start|>{role}\n"
def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
# conversations: list of dict with key `role` and `content` (openai format)
if conversations[0]['role'] != 'system' and system_prompt is not None:
conversations = [{"role": "system", "content": system_prompt}] + conversations
text = ''
for turn_id, turn in enumerate(conversations):
prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
text += prompt
if add_assistant_prefix:
prompt = TURN_PREFIX.format(role='assistant')
text += prompt
return text
sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['<eos>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2.5", dtype="bfloat16")
message = "Explain general relativity in details."
prompt = seallm_chat_convo_format(message, True)
gen = llm.generate(prompt, sampling_params)
print(gen[0].outputs[0].text)
```
#### Fine-tuning SeaLLM-7B-v2.5
Fine-tuning should follow the chat format and accurately mask out source tokens. Here is an example.
```python
conversations = [
{"role": "system", "content": "You are helful assistant."},
{"role": "user", "content": "Hello world."},
{"role": "assistant", "content": "Hi there, how can I help?"},
{"role": "user", "content": "Tell me a joke."},
{"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]
def seallm_7b_v25_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False):
"""
Inputs:
conversations: list of dict following openai format, eg
conversations = [
{"role": "system", "content": "You are helful assistant."},
{"role": "user", "content": "Hello world."},
{"role": "assistant", "content": "Hi there, how can I help?"},
{"role": "user", "content": "Tell me a joke."},
{"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]
add_assistant_prefix: whether to add assistant_prefix, only for inference decoding
Outputs:
tokenize_output_sample, {
"input_ids": ...
"token_type_ids": 1 if train and 0 if masked out (not train)
}
During training, need to create a labels, with masked-out tokens = -100 to avoid loss computations.
labels = sample['input_ids'].clone()
labels[sample['token_type_ids'] == 0] = -100
"""
TURN_TEMPLATE = "<|im_start|>{role}\n{content}<eos>\n"
TURN_PREFIX = "<|im_start|>{role}\n"
TURN_SUFFIX = "<eos>\n"
TURN_SUFFIX_TAKE = "<eos>"
sample = None
assistant_prefix_len = None
assistant_suffix_len = None
for turn_id, turn in enumerate(conversations):
prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
turn_sample = tokenizer(
prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False,
return_token_type_ids=True,
)
if turn['role'] == 'assistant':
if assistant_prefix_len is None:
assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False))
if assistant_suffix_len is None:
assistant_suffix_len = (
len(tokenizer.encode(TURN_SUFFIX.format(role=turn['role']), add_special_tokens=False)) -
len(tokenizer.encode(TURN_SUFFIX_TAKE, add_special_tokens=False))
)
turn_sample['token_type_ids'][assistant_prefix_len:-assistant_suffix_len] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len - assistant_suffix_len)
if sample is None:
sample = turn_sample
else:
for k in turn_sample.keys():
sample[k].extend(turn_sample[k])
if add_assistant_prefix:
assistant_prefix_sample = tokenizer(
TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False,
return_token_type_ids=True,
)
for k in sample.keys():
sample[k].extend(assistant_prefix_sample[k])
if tokenizer.add_bos_token:
sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids']
sample['attention_mask'] = [1] + sample['attention_mask']
sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids']
return sample
# ! testing
sample = seallm_7b_v25_tokenize_multi_turns(tokenizer, conversations)
tokens = tokenizer.convert_ids_to_tokens(sample['input_ids'])
pairs = [(x, y) for x, y in zip(tokens, sample['token_type_ids'])]
print(pairs)
# source and special tokens is masked out (token_type 0), only assistant with <eos> is trained (token_type 1)
# [('<bos>', 0), ('<', 0), ('|', 0), ..., ('assistant', 0), ('\n', 0), ('Hi', 1), ('▁there', 1), (',', 1), ('▁how', 1), ('▁can', 1), ('▁I', 1), ('▁help', 1), ('?', 1), ('<eos>', 1), ('\n', 0), ('<', 0), ...
```
## Acknowledgement to Our Linguists
We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT dataset as well as evaluating our models across different aspects, especially safety.
## Citation
If you find our project useful, we hope you would kindly star our repo and cite our work as follows: Corresponding Author: [[email protected]](mailto:[email protected])
**Author list and order will change!**
* `*` and `^` are equal contributions.
```
@article{damonlpsg2023seallm,
author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*, Weiwen Xu, Hou Pong Chan,
Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang,
Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
Chaoqun Liu, Hang Zhang, Lidong Bing},
title = {SeaLLMs - Large Language Models for Southeast Asia},
year = 2023,
Eprint = {arXiv:2312.00738},
}
```
| [
"CHIA"
] | Non_BioNLP |
RichardErkhov/LuuNgoc2k2_-_Law-Llama-v1-8bits | RichardErkhov | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,737,848,824,000 | 2025-01-25T23:50:58 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Law-Llama-v1 - bnb 8bits
- Model creator: https://huggingface.co/LuuNgoc2k2/
- Original model: https://huggingface.co/LuuNgoc2k2/Law-Llama-v1/
Original model description:
Load Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("LuuNgoc2k2/Law-Llama-v1", add_eos_token=True, padding_side='right')
model = AutoModelForCausalLM.from_pretrained(
'LuuNgoc2k2/Law-Llama-v1',
torch_dtype=torch.bfloat16,
quantization_config=bnb_config, # If you need
device_map="auto",
use_cache=True,
)
tokenizer.pad_token = tokenizer.eos_token
```
Generate
```python
PROMPT = """
### Hướng dẫn: Bạn là một trợ lí Tiếng Việt. Hãy luôn trả lời một cách trung thực và an toàn
Câu trả lời của bạn không nên chứa bất kỳ nội dung gây hại, nguy hiểm hoặc bất hợp pháp nào
Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác
Nếu bạn không biết câu trả lời cho một câu hỏi, hãy trẳ lời là bạn không biết và vui lòng không chia sẻ thông tin sai lệch.
### Câu hỏi: {input}
"""
question = """Trình bày về thủ tục li hôn ?"""
text = PROMPT.format_map({
'input': question,
})
input_ids = tokenizer(text, return_tensors='pt', add_special_tokens=False).to('cuda')
generated_ids = model.generate(
input_ids=input_ids['input_ids'],
max_new_tokens=1024,
do_sample=True,
top_p=0.95,
top_k=40,
temperature=0.3,
repetition_penalty=1.1,
no_repeat_ngram_size=7,
num_beams=5,
)
a = tokenizer.batch_decode(generated_ids)[0]
# print(a.split('### Trả lời:')[1])
print(a)
```
| [
"CHIA"
] | Non_BioNLP |
KingKazma/xsum_6789_3000_1500_train | KingKazma | text-classification | [
"bertopic",
"text-classification",
"region:us"
] | 1,691,075,832,000 | 2023-08-03T15:17:15 | 10 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# xsum_6789_3000_1500_train
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("KingKazma/xsum_6789_3000_1500_train")
topic_model.get_topic_info()
```
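Beyond inspecting the fitted topics, the model can also assign topics to new documents via `transform`; a minimal sketch with an illustrative sentence:
```python
from bertopic import BERTopic

topic_model = BERTopic.load("KingKazma/xsum_6789_3000_1500_train")

# Predict the topic of an unseen document (example text is illustrative)
docs = ["The striker scored twice as the home side clinched the league title."]
topics, probs = topic_model.transform(docs)

print(topics)                            # predicted topic id per document
print(topic_model.get_topic(topics[0]))  # top keywords for that topic
```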
## Topic overview
* Number of topics: 16
* Number of training documents: 3000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | league - club - game - win - player | 5 | -1_league_club_game_win |
| 0 | said - mr - would - people - year | 382 | 0_said_mr_would_people |
| 1 | sport - medal - gold - team - olympic | 2143 | 1_sport_medal_gold_team |
| 2 | cricket - wicket - test - england - match | 72 | 2_cricket_wicket_test_england |
| 3 | arsenal - league - liverpool - chelsea - kick | 56 | 3_arsenal_league_liverpool_chelsea |
| 4 | world - open - round - mcilroy - golf | 55 | 4_world_open_round_mcilroy |
| 5 | foul - town - half - kick - win | 52 | 5_foul_town_half_kick |
| 6 | season - club - dedicated - transfer - appearance | 46 | 6_season_club_dedicated_transfer |
| 7 | celtic - game - aberdeen - rangers - player | 42 | 7_celtic_game_aberdeen_rangers |
| 8 | madrid - atltico - win - real - barcelona | 36 | 8_madrid_atltico_win_real |
| 9 | race - hamilton - team - prix - grand | 26 | 9_race_hamilton_team_prix |
| 10 | rugby - wales - game - coach - england | 22 | 10_rugby_wales_game_coach |
| 11 | fight - champion - boxing - amateur - world | 19 | 11_fight_champion_boxing_amateur |
| 12 | yn - wedi - ei - ar - bod | 17 | 12_yn_wedi_ei_ar |
| 13 | fan - club - bet - stadium - standing | 14 | 13_fan_club_bet_stadium |
| 14 | connacht - ronaldson - blade - penalty - ulster | 13 | 14_connacht_ronaldson_blade_penalty |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.31.0
* Numba: 0.57.1
* Plotly: 5.13.1
* Python: 3.10.12
| [
"MEDAL"
] | Non_BioNLP |
pucpr/biobertpt-bio | pucpr | fill-mask | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2022-11-27T16:54:50 | 76 | 6 | ---
language: pt
widget:
- text: O principal [MASK] da COVID-19 é tosse seca.
- text: O vírus da gripe apresenta um [MASK] constituído por segmentos de ácido ribonucleico.
thumbnail: https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png
---
<img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt">
# BioBERTpt - Portuguese Clinical and Biomedical BERT
The [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) paper contains clinical and biomedical BERT-based models for Portuguese Language, initialized with BERT-Multilingual-Cased & trained on clinical notes and biomedical literature.
This model card describes the BioBERTpt(bio) model, a biomedical version of BioBERTpt, trained on Portuguese biomedical literature from scientific papers from Pubmed and Scielo.
## How to use the model
Load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("pucpr/biobertpt-bio")
model = AutoModel.from_pretrained("pucpr/biobertpt-bio")
```
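For masked-token prediction, as in the widget examples above, the model can also be used with the standard `fill-mask` pipeline; a minimal sketch:
```
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pucpr/biobertpt-bio")

# Predict the masked token in a Portuguese biomedical sentence
for prediction in fill_mask("O principal [MASK] da COVID-19 é tosse seca."):
    print(prediction["token_str"], round(prediction["score"], 3))
```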
## More Information
Refer to the original paper, [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) for additional details and performance on Portuguese NER tasks.
## Acknowledgements
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
## Citation
```
@inproceedings{schneider-etal-2020-biobertpt,
title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition",
author = "Schneider, Elisa Terumi Rubel and
de Souza, Jo{\~a}o Vitor Andrioli and
Knafou, Julien and
Oliveira, Lucas Emanuel Silva e and
Copara, Jenny and
Gumiel, Yohan Bonescki and
Oliveira, Lucas Ferro Antunes de and
Paraiso, Emerson Cabrera and
Teodoro, Douglas and
Barra, Cl{\'a}udia Maria Cabral Moro",
booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7",
pages = "65--72",
abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.",
}
```
## Questions?
Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt). | [
"SCIELO"
] | TBD |
masonbarnes/open-llm-search | masonbarnes | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,693,864,524,000 | 2023-09-09T06:00:09 | 29 | 10 | ---
language:
- en
license: llama2
---
# **Model Overview**
As the demand for large language models grows, a common limitation surfaces: their inability to directly search the internet. Although tech giants like Google (with Bard), Bing, and Perplexity are addressing this challenge, their proprietary methods have data logging issues.
**Introducing Open LLM Search** — A specialized adaptation of Together AI's `llama-2-7b-32k` model, purpose-built for extracting information from web pages. While the model has only 7 billion parameters, its fine-tuned capabilities and expanded context limit enable it to excel in search tasks.
**License:** This model uses Meta's Llama 2 license.
# **Fine-Tuning Process**
The model's fine-tuning involved a combination of GPT-4 and GPT-4-32k to generate synthetic data. Here is the training workflow used:
1. Use GPT-4 to generate a multitude of queries.
2. For each query, identify the top five website results from Google.
3. Extract content from these websites and use GPT-4-32k for their summarization.
4. Record the text and summarizes from GPT-4-32k for fine-tuning.
5. Feed the summaries from all five sources with GPT-4 to craft a cohesive response.
6. Document both the input and output from GPT-4 for fine-tuning.
Fine-tuning was done with an `<instructions>:`, `<user>:`, and `<assistant>:` format, as sketched below.
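For local experimentation, a minimal inference sketch using this format is given below. The exact wording of the template fields, the sampling settings, and the use of `trust_remote_code` (inherited from the 32k-context base model) are assumptions for illustration, not specifications from the training setup.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "masonbarnes/open-llm-search"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

# Hypothetical prompt following the <instructions>/<user>/<assistant> layout described above
prompt = (
    "<instructions>: Summarize the provided web page text and answer the user's query.\n"
    "<user>: Query: What causes auroras?\n"
    "Page text: Auroras appear when charged particles from the sun collide with gases "
    "in Earth's upper atmosphere, producing light near the poles.\n"
    "<assistant>:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```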
# **Getting Started**
- Experience it firsthand! Check out the live demo [here](https://huggingface.co/spaces/masonbarnes/open-llm-search).
- For DIY enthusiasts, explore or self-deploy this solution using our [GitHub repository](https://github.com/MasonBarnes/open-llm-search). | [
"CRAFT"
] | Non_BioNLP |
mixamrepijey/instructor-small | mixamrepijey | sentence-similarity | [
"sentence-transformers",
"pytorch",
"t5",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"prompt-retrieval",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"transformers",
"English",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2212.09741",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,702,568,686,000 | 2023-12-15T12:37:48 | 11 | 0 | ---
language: en
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- prompt-retrieval
- text-reranking
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- t5
- English
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
inference: false
model-index:
- name: INSTRUCTOR
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 88.13432835820896
- type: ap
value: 59.298209334395665
- type: f1
value: 83.31769058643586
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.526375
- type: ap
value: 88.16327709705504
- type: f1
value: 91.51095801287843
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.856
- type: f1
value: 45.41490917650942
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.223
- type: map_at_10
value: 47.947
- type: map_at_100
value: 48.742000000000004
- type: map_at_1000
value: 48.745
- type: map_at_3
value: 43.137
- type: map_at_5
value: 45.992
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 48.4
- type: mrr_at_100
value: 49.202
- type: mrr_at_1000
value: 49.205
- type: mrr_at_3
value: 43.551
- type: mrr_at_5
value: 46.467999999999996
- type: ndcg_at_1
value: 31.223
- type: ndcg_at_10
value: 57.045
- type: ndcg_at_100
value: 60.175
- type: ndcg_at_1000
value: 60.233000000000004
- type: ndcg_at_3
value: 47.171
- type: ndcg_at_5
value: 52.322
- type: precision_at_1
value: 31.223
- type: precision_at_10
value: 8.599
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.63
- type: precision_at_5
value: 14.282
- type: recall_at_1
value: 31.223
- type: recall_at_10
value: 85.989
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 58.89
- type: recall_at_5
value: 71.408
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.1621946393635
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.56417132407894
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.29539304390207
- type: mrr
value: 76.44484017060196
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 84.38746499431112
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.51298701298701
- type: f1
value: 77.49041754069235
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.61848554098577
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.32623280148178
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.803000000000004
- type: map_at_10
value: 48.848
- type: map_at_100
value: 50.5
- type: map_at_1000
value: 50.602999999999994
- type: map_at_3
value: 45.111000000000004
- type: map_at_5
value: 47.202
- type: mrr_at_1
value: 44.635000000000005
- type: mrr_at_10
value: 55.593
- type: mrr_at_100
value: 56.169999999999995
- type: mrr_at_1000
value: 56.19499999999999
- type: mrr_at_3
value: 53.361999999999995
- type: mrr_at_5
value: 54.806999999999995
- type: ndcg_at_1
value: 44.635000000000005
- type: ndcg_at_10
value: 55.899
- type: ndcg_at_100
value: 60.958
- type: ndcg_at_1000
value: 62.302
- type: ndcg_at_3
value: 51.051
- type: ndcg_at_5
value: 53.351000000000006
- type: precision_at_1
value: 44.635000000000005
- type: precision_at_10
value: 10.786999999999999
- type: precision_at_100
value: 1.6580000000000001
- type: precision_at_1000
value: 0.213
- type: precision_at_3
value: 24.893
- type: precision_at_5
value: 17.740000000000002
- type: recall_at_1
value: 35.803000000000004
- type: recall_at_10
value: 68.657
- type: recall_at_100
value: 89.77199999999999
- type: recall_at_1000
value: 97.67
- type: recall_at_3
value: 54.066
- type: recall_at_5
value: 60.788
- type: map_at_1
value: 33.706
- type: map_at_10
value: 44.896
- type: map_at_100
value: 46.299
- type: map_at_1000
value: 46.44
- type: map_at_3
value: 41.721000000000004
- type: map_at_5
value: 43.486000000000004
- type: mrr_at_1
value: 41.592
- type: mrr_at_10
value: 50.529
- type: mrr_at_100
value: 51.22
- type: mrr_at_1000
value: 51.258
- type: mrr_at_3
value: 48.205999999999996
- type: mrr_at_5
value: 49.528
- type: ndcg_at_1
value: 41.592
- type: ndcg_at_10
value: 50.77199999999999
- type: ndcg_at_100
value: 55.383
- type: ndcg_at_1000
value: 57.288
- type: ndcg_at_3
value: 46.324
- type: ndcg_at_5
value: 48.346000000000004
- type: precision_at_1
value: 41.592
- type: precision_at_10
value: 9.516
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 22.399
- type: precision_at_5
value: 15.770999999999999
- type: recall_at_1
value: 33.706
- type: recall_at_10
value: 61.353
- type: recall_at_100
value: 80.182
- type: recall_at_1000
value: 91.896
- type: recall_at_3
value: 48.204
- type: recall_at_5
value: 53.89699999999999
- type: map_at_1
value: 44.424
- type: map_at_10
value: 57.169000000000004
- type: map_at_100
value: 58.202
- type: map_at_1000
value: 58.242000000000004
- type: map_at_3
value: 53.825
- type: map_at_5
value: 55.714
- type: mrr_at_1
value: 50.470000000000006
- type: mrr_at_10
value: 60.489000000000004
- type: mrr_at_100
value: 61.096
- type: mrr_at_1000
value: 61.112
- type: mrr_at_3
value: 58.192
- type: mrr_at_5
value: 59.611999999999995
- type: ndcg_at_1
value: 50.470000000000006
- type: ndcg_at_10
value: 63.071999999999996
- type: ndcg_at_100
value: 66.964
- type: ndcg_at_1000
value: 67.659
- type: ndcg_at_3
value: 57.74399999999999
- type: ndcg_at_5
value: 60.367000000000004
- type: precision_at_1
value: 50.470000000000006
- type: precision_at_10
value: 10.019
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 25.558999999999997
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 44.424
- type: recall_at_10
value: 77.02
- type: recall_at_100
value: 93.738
- type: recall_at_1000
value: 98.451
- type: recall_at_3
value: 62.888
- type: recall_at_5
value: 69.138
- type: map_at_1
value: 26.294
- type: map_at_10
value: 34.503
- type: map_at_100
value: 35.641
- type: map_at_1000
value: 35.724000000000004
- type: map_at_3
value: 31.753999999999998
- type: map_at_5
value: 33.190999999999995
- type: mrr_at_1
value: 28.362
- type: mrr_at_10
value: 36.53
- type: mrr_at_100
value: 37.541000000000004
- type: mrr_at_1000
value: 37.602000000000004
- type: mrr_at_3
value: 33.917
- type: mrr_at_5
value: 35.358000000000004
- type: ndcg_at_1
value: 28.362
- type: ndcg_at_10
value: 39.513999999999996
- type: ndcg_at_100
value: 44.815
- type: ndcg_at_1000
value: 46.839
- type: ndcg_at_3
value: 34.02
- type: ndcg_at_5
value: 36.522
- type: precision_at_1
value: 28.362
- type: precision_at_10
value: 6.101999999999999
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.161999999999999
- type: precision_at_5
value: 9.966
- type: recall_at_1
value: 26.294
- type: recall_at_10
value: 53.098
- type: recall_at_100
value: 76.877
- type: recall_at_1000
value: 91.834
- type: recall_at_3
value: 38.266
- type: recall_at_5
value: 44.287
- type: map_at_1
value: 16.407
- type: map_at_10
value: 25.185999999999996
- type: map_at_100
value: 26.533
- type: map_at_1000
value: 26.657999999999998
- type: map_at_3
value: 22.201999999999998
- type: map_at_5
value: 23.923
- type: mrr_at_1
value: 20.522000000000002
- type: mrr_at_10
value: 29.522
- type: mrr_at_100
value: 30.644
- type: mrr_at_1000
value: 30.713
- type: mrr_at_3
value: 26.679000000000002
- type: mrr_at_5
value: 28.483000000000004
- type: ndcg_at_1
value: 20.522000000000002
- type: ndcg_at_10
value: 30.656
- type: ndcg_at_100
value: 36.864999999999995
- type: ndcg_at_1000
value: 39.675
- type: ndcg_at_3
value: 25.319000000000003
- type: ndcg_at_5
value: 27.992
- type: precision_at_1
value: 20.522000000000002
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 12.396
- type: precision_at_5
value: 9.328
- type: recall_at_1
value: 16.407
- type: recall_at_10
value: 43.164
- type: recall_at_100
value: 69.695
- type: recall_at_1000
value: 89.41900000000001
- type: recall_at_3
value: 28.634999999999998
- type: recall_at_5
value: 35.308
- type: map_at_1
value: 30.473
- type: map_at_10
value: 41.676
- type: map_at_100
value: 43.120999999999995
- type: map_at_1000
value: 43.230000000000004
- type: map_at_3
value: 38.306000000000004
- type: map_at_5
value: 40.355999999999995
- type: mrr_at_1
value: 37.536
- type: mrr_at_10
value: 47.643
- type: mrr_at_100
value: 48.508
- type: mrr_at_1000
value: 48.551
- type: mrr_at_3
value: 45.348
- type: mrr_at_5
value: 46.744
- type: ndcg_at_1
value: 37.536
- type: ndcg_at_10
value: 47.823
- type: ndcg_at_100
value: 53.395
- type: ndcg_at_1000
value: 55.271
- type: ndcg_at_3
value: 42.768
- type: ndcg_at_5
value: 45.373000000000005
- type: precision_at_1
value: 37.536
- type: precision_at_10
value: 8.681
- type: precision_at_100
value: 1.34
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.468
- type: precision_at_5
value: 14.495
- type: recall_at_1
value: 30.473
- type: recall_at_10
value: 60.092999999999996
- type: recall_at_100
value: 82.733
- type: recall_at_1000
value: 94.875
- type: recall_at_3
value: 45.734
- type: recall_at_5
value: 52.691
- type: map_at_1
value: 29.976000000000003
- type: map_at_10
value: 41.097
- type: map_at_100
value: 42.547000000000004
- type: map_at_1000
value: 42.659000000000006
- type: map_at_3
value: 37.251
- type: map_at_5
value: 39.493
- type: mrr_at_1
value: 37.557
- type: mrr_at_10
value: 46.605000000000004
- type: mrr_at_100
value: 47.487
- type: mrr_at_1000
value: 47.54
- type: mrr_at_3
value: 43.721
- type: mrr_at_5
value: 45.411
- type: ndcg_at_1
value: 37.557
- type: ndcg_at_10
value: 47.449000000000005
- type: ndcg_at_100
value: 53.052
- type: ndcg_at_1000
value: 55.010999999999996
- type: ndcg_at_3
value: 41.439
- type: ndcg_at_5
value: 44.292
- type: precision_at_1
value: 37.557
- type: precision_at_10
value: 8.847
- type: precision_at_100
value: 1.357
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 20.091
- type: precision_at_5
value: 14.384
- type: recall_at_1
value: 29.976000000000003
- type: recall_at_10
value: 60.99099999999999
- type: recall_at_100
value: 84.245
- type: recall_at_1000
value: 96.97200000000001
- type: recall_at_3
value: 43.794
- type: recall_at_5
value: 51.778999999999996
- type: map_at_1
value: 28.099166666666665
- type: map_at_10
value: 38.1365
- type: map_at_100
value: 39.44491666666667
- type: map_at_1000
value: 39.55858333333334
- type: map_at_3
value: 35.03641666666666
- type: map_at_5
value: 36.79833333333334
- type: mrr_at_1
value: 33.39966666666667
- type: mrr_at_10
value: 42.42583333333333
- type: mrr_at_100
value: 43.28575
- type: mrr_at_1000
value: 43.33741666666667
- type: mrr_at_3
value: 39.94975
- type: mrr_at_5
value: 41.41633333333334
- type: ndcg_at_1
value: 33.39966666666667
- type: ndcg_at_10
value: 43.81741666666667
- type: ndcg_at_100
value: 49.08166666666667
- type: ndcg_at_1000
value: 51.121166666666674
- type: ndcg_at_3
value: 38.73575
- type: ndcg_at_5
value: 41.18158333333333
- type: precision_at_1
value: 33.39966666666667
- type: precision_at_10
value: 7.738916666666667
- type: precision_at_100
value: 1.2265833333333331
- type: precision_at_1000
value: 0.15983333333333336
- type: precision_at_3
value: 17.967416666666665
- type: precision_at_5
value: 12.78675
- type: recall_at_1
value: 28.099166666666665
- type: recall_at_10
value: 56.27049999999999
- type: recall_at_100
value: 78.93291666666667
- type: recall_at_1000
value: 92.81608333333334
- type: recall_at_3
value: 42.09775
- type: recall_at_5
value: 48.42533333333334
- type: map_at_1
value: 23.663
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.426
- type: map_at_1000
value: 31.519000000000002
- type: map_at_3
value: 28.069
- type: map_at_5
value: 29.256999999999998
- type: mrr_at_1
value: 26.687
- type: mrr_at_10
value: 33.107
- type: mrr_at_100
value: 34.055
- type: mrr_at_1000
value: 34.117999999999995
- type: mrr_at_3
value: 31.058000000000003
- type: mrr_at_5
value: 32.14
- type: ndcg_at_1
value: 26.687
- type: ndcg_at_10
value: 34.615
- type: ndcg_at_100
value: 39.776
- type: ndcg_at_1000
value: 42.05
- type: ndcg_at_3
value: 30.322
- type: ndcg_at_5
value: 32.157000000000004
- type: precision_at_1
value: 26.687
- type: precision_at_10
value: 5.491
- type: precision_at_100
value: 0.877
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.139000000000001
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.663
- type: recall_at_10
value: 45.035
- type: recall_at_100
value: 68.554
- type: recall_at_1000
value: 85.077
- type: recall_at_3
value: 32.982
- type: recall_at_5
value: 37.688
- type: map_at_1
value: 17.403
- type: map_at_10
value: 25.197000000000003
- type: map_at_100
value: 26.355
- type: map_at_1000
value: 26.487
- type: map_at_3
value: 22.733
- type: map_at_5
value: 24.114
- type: mrr_at_1
value: 21.37
- type: mrr_at_10
value: 29.091
- type: mrr_at_100
value: 30.018
- type: mrr_at_1000
value: 30.096
- type: mrr_at_3
value: 26.887
- type: mrr_at_5
value: 28.157
- type: ndcg_at_1
value: 21.37
- type: ndcg_at_10
value: 30.026000000000003
- type: ndcg_at_100
value: 35.416
- type: ndcg_at_1000
value: 38.45
- type: ndcg_at_3
value: 25.764
- type: ndcg_at_5
value: 27.742
- type: precision_at_1
value: 21.37
- type: precision_at_10
value: 5.609
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 12.423
- type: precision_at_5
value: 9.009
- type: recall_at_1
value: 17.403
- type: recall_at_10
value: 40.573
- type: recall_at_100
value: 64.818
- type: recall_at_1000
value: 86.53699999999999
- type: recall_at_3
value: 28.493000000000002
- type: recall_at_5
value: 33.660000000000004
- type: map_at_1
value: 28.639
- type: map_at_10
value: 38.951
- type: map_at_100
value: 40.238
- type: map_at_1000
value: 40.327
- type: map_at_3
value: 35.842
- type: map_at_5
value: 37.617
- type: mrr_at_1
value: 33.769
- type: mrr_at_10
value: 43.088
- type: mrr_at_100
value: 44.03
- type: mrr_at_1000
value: 44.072
- type: mrr_at_3
value: 40.656
- type: mrr_at_5
value: 42.138999999999996
- type: ndcg_at_1
value: 33.769
- type: ndcg_at_10
value: 44.676
- type: ndcg_at_100
value: 50.416000000000004
- type: ndcg_at_1000
value: 52.227999999999994
- type: ndcg_at_3
value: 39.494
- type: ndcg_at_5
value: 42.013
- type: precision_at_1
value: 33.769
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.221
- type: precision_at_5
value: 12.966
- type: recall_at_1
value: 28.639
- type: recall_at_10
value: 57.687999999999995
- type: recall_at_100
value: 82.541
- type: recall_at_1000
value: 94.896
- type: recall_at_3
value: 43.651
- type: recall_at_5
value: 49.925999999999995
- type: map_at_1
value: 29.57
- type: map_at_10
value: 40.004
- type: map_at_100
value: 41.75
- type: map_at_1000
value: 41.97
- type: map_at_3
value: 36.788
- type: map_at_5
value: 38.671
- type: mrr_at_1
value: 35.375
- type: mrr_at_10
value: 45.121
- type: mrr_at_100
value: 45.994
- type: mrr_at_1000
value: 46.04
- type: mrr_at_3
value: 42.227
- type: mrr_at_5
value: 43.995
- type: ndcg_at_1
value: 35.375
- type: ndcg_at_10
value: 46.392
- type: ndcg_at_100
value: 52.196
- type: ndcg_at_1000
value: 54.274
- type: ndcg_at_3
value: 41.163
- type: ndcg_at_5
value: 43.813
- type: precision_at_1
value: 35.375
- type: precision_at_10
value: 8.676
- type: precision_at_100
value: 1.678
- type: precision_at_1000
value: 0.253
- type: precision_at_3
value: 19.104
- type: precision_at_5
value: 13.913
- type: recall_at_1
value: 29.57
- type: recall_at_10
value: 58.779
- type: recall_at_100
value: 83.337
- type: recall_at_1000
value: 95.979
- type: recall_at_3
value: 44.005
- type: recall_at_5
value: 50.975
- type: map_at_1
value: 20.832
- type: map_at_10
value: 29.733999999999998
- type: map_at_100
value: 30.727
- type: map_at_1000
value: 30.843999999999998
- type: map_at_3
value: 26.834999999999997
- type: map_at_5
value: 28.555999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 31.791999999999998
- type: mrr_at_100
value: 32.666000000000004
- type: mrr_at_1000
value: 32.751999999999995
- type: mrr_at_3
value: 29.144
- type: mrr_at_5
value: 30.622
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.744
- type: ndcg_at_1000
value: 42.407000000000004
- type: ndcg_at_3
value: 29.421000000000003
- type: ndcg_at_5
value: 32.211
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 5.675
- type: precision_at_100
value: 0.872
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 12.753999999999998
- type: precision_at_5
value: 9.353
- type: recall_at_1
value: 20.832
- type: recall_at_10
value: 48.795
- type: recall_at_100
value: 70.703
- type: recall_at_1000
value: 90.187
- type: recall_at_3
value: 34.455000000000005
- type: recall_at_5
value: 40.967
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.334
- type: map_at_10
value: 19.009999999999998
- type: map_at_100
value: 21.129
- type: map_at_1000
value: 21.328
- type: map_at_3
value: 15.152
- type: map_at_5
value: 17.084
- type: mrr_at_1
value: 23.453
- type: mrr_at_10
value: 36.099
- type: mrr_at_100
value: 37.069
- type: mrr_at_1000
value: 37.104
- type: mrr_at_3
value: 32.096000000000004
- type: mrr_at_5
value: 34.451
- type: ndcg_at_1
value: 23.453
- type: ndcg_at_10
value: 27.739000000000004
- type: ndcg_at_100
value: 35.836
- type: ndcg_at_1000
value: 39.242
- type: ndcg_at_3
value: 21.263
- type: ndcg_at_5
value: 23.677
- type: precision_at_1
value: 23.453
- type: precision_at_10
value: 9.199
- type: precision_at_100
value: 1.791
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 16.2
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 10.334
- type: recall_at_10
value: 35.177
- type: recall_at_100
value: 63.009
- type: recall_at_1000
value: 81.938
- type: recall_at_3
value: 19.914
- type: recall_at_5
value: 26.077
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.212
- type: map_at_10
value: 17.386
- type: map_at_100
value: 24.234
- type: map_at_1000
value: 25.724999999999998
- type: map_at_3
value: 12.727
- type: map_at_5
value: 14.785
- type: mrr_at_1
value: 59.25
- type: mrr_at_10
value: 68.687
- type: mrr_at_100
value: 69.133
- type: mrr_at_1000
value: 69.14099999999999
- type: mrr_at_3
value: 66.917
- type: mrr_at_5
value: 67.742
- type: ndcg_at_1
value: 48.625
- type: ndcg_at_10
value: 36.675999999999995
- type: ndcg_at_100
value: 41.543
- type: ndcg_at_1000
value: 49.241
- type: ndcg_at_3
value: 41.373
- type: ndcg_at_5
value: 38.707
- type: precision_at_1
value: 59.25
- type: precision_at_10
value: 28.525
- type: precision_at_100
value: 9.027000000000001
- type: precision_at_1000
value: 1.8339999999999999
- type: precision_at_3
value: 44.833
- type: precision_at_5
value: 37.35
- type: recall_at_1
value: 8.212
- type: recall_at_10
value: 23.188
- type: recall_at_100
value: 48.613
- type: recall_at_1000
value: 73.093
- type: recall_at_3
value: 14.419
- type: recall_at_5
value: 17.798
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.725
- type: f1
value: 46.50743309855908
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.086
- type: map_at_10
value: 66.914
- type: map_at_100
value: 67.321
- type: map_at_1000
value: 67.341
- type: map_at_3
value: 64.75800000000001
- type: map_at_5
value: 66.189
- type: mrr_at_1
value: 59.28600000000001
- type: mrr_at_10
value: 71.005
- type: mrr_at_100
value: 71.304
- type: mrr_at_1000
value: 71.313
- type: mrr_at_3
value: 69.037
- type: mrr_at_5
value: 70.35
- type: ndcg_at_1
value: 59.28600000000001
- type: ndcg_at_10
value: 72.695
- type: ndcg_at_100
value: 74.432
- type: ndcg_at_1000
value: 74.868
- type: ndcg_at_3
value: 68.72200000000001
- type: ndcg_at_5
value: 71.081
- type: precision_at_1
value: 59.28600000000001
- type: precision_at_10
value: 9.499
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 27.503
- type: precision_at_5
value: 17.854999999999997
- type: recall_at_1
value: 55.086
- type: recall_at_10
value: 86.453
- type: recall_at_100
value: 94.028
- type: recall_at_1000
value: 97.052
- type: recall_at_3
value: 75.821
- type: recall_at_5
value: 81.6
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.262999999999998
- type: map_at_10
value: 37.488
- type: map_at_100
value: 39.498
- type: map_at_1000
value: 39.687
- type: map_at_3
value: 32.529
- type: map_at_5
value: 35.455
- type: mrr_at_1
value: 44.907000000000004
- type: mrr_at_10
value: 53.239000000000004
- type: mrr_at_100
value: 54.086
- type: mrr_at_1000
value: 54.122
- type: mrr_at_3
value: 51.235
- type: mrr_at_5
value: 52.415
- type: ndcg_at_1
value: 44.907000000000004
- type: ndcg_at_10
value: 45.446
- type: ndcg_at_100
value: 52.429
- type: ndcg_at_1000
value: 55.169000000000004
- type: ndcg_at_3
value: 41.882000000000005
- type: ndcg_at_5
value: 43.178
- type: precision_at_1
value: 44.907000000000004
- type: precision_at_10
value: 12.931999999999999
- type: precision_at_100
value: 2.025
- type: precision_at_1000
value: 0.248
- type: precision_at_3
value: 28.652
- type: precision_at_5
value: 21.204
- type: recall_at_1
value: 22.262999999999998
- type: recall_at_10
value: 52.447
- type: recall_at_100
value: 78.045
- type: recall_at_1000
value: 94.419
- type: recall_at_3
value: 38.064
- type: recall_at_5
value: 44.769
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.519
- type: map_at_10
value: 45.831
- type: map_at_100
value: 46.815
- type: map_at_1000
value: 46.899
- type: map_at_3
value: 42.836
- type: map_at_5
value: 44.65
- type: mrr_at_1
value: 65.037
- type: mrr_at_10
value: 72.16
- type: mrr_at_100
value: 72.51100000000001
- type: mrr_at_1000
value: 72.53
- type: mrr_at_3
value: 70.682
- type: mrr_at_5
value: 71.54599999999999
- type: ndcg_at_1
value: 65.037
- type: ndcg_at_10
value: 55.17999999999999
- type: ndcg_at_100
value: 58.888
- type: ndcg_at_1000
value: 60.648
- type: ndcg_at_3
value: 50.501
- type: ndcg_at_5
value: 52.977
- type: precision_at_1
value: 65.037
- type: precision_at_10
value: 11.530999999999999
- type: precision_at_100
value: 1.4460000000000002
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 31.483
- type: precision_at_5
value: 20.845
- type: recall_at_1
value: 32.519
- type: recall_at_10
value: 57.657000000000004
- type: recall_at_100
value: 72.30199999999999
- type: recall_at_1000
value: 84.024
- type: recall_at_3
value: 47.225
- type: recall_at_5
value: 52.113
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.3168
- type: ap
value: 83.80165516037135
- type: f1
value: 88.29942471066407
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 20.724999999999998
- type: map_at_10
value: 32.736
- type: map_at_100
value: 33.938
- type: map_at_1000
value: 33.991
- type: map_at_3
value: 28.788000000000004
- type: map_at_5
value: 31.016
- type: mrr_at_1
value: 21.361
- type: mrr_at_10
value: 33.323
- type: mrr_at_100
value: 34.471000000000004
- type: mrr_at_1000
value: 34.518
- type: mrr_at_3
value: 29.453000000000003
- type: mrr_at_5
value: 31.629
- type: ndcg_at_1
value: 21.361
- type: ndcg_at_10
value: 39.649
- type: ndcg_at_100
value: 45.481
- type: ndcg_at_1000
value: 46.775
- type: ndcg_at_3
value: 31.594
- type: ndcg_at_5
value: 35.543
- type: precision_at_1
value: 21.361
- type: precision_at_10
value: 6.3740000000000006
- type: precision_at_100
value: 0.931
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.514999999999999
- type: precision_at_5
value: 10.100000000000001
- type: recall_at_1
value: 20.724999999999998
- type: recall_at_10
value: 61.034
- type: recall_at_100
value: 88.062
- type: recall_at_1000
value: 97.86399999999999
- type: recall_at_3
value: 39.072
- type: recall_at_5
value: 48.53
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.8919288645691
- type: f1
value: 93.57059586398059
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.97993616051072
- type: f1
value: 48.244319183606535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.90047074646941
- type: f1
value: 66.48999056063725
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.34566240753195
- type: f1
value: 73.54164154290658
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.21866934757011
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.000936217235534
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.68189362520352
- type: mrr
value: 32.69603637784303
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.078
- type: map_at_10
value: 12.671
- type: map_at_100
value: 16.291
- type: map_at_1000
value: 17.855999999999998
- type: map_at_3
value: 9.610000000000001
- type: map_at_5
value: 11.152
- type: mrr_at_1
value: 43.963
- type: mrr_at_10
value: 53.173
- type: mrr_at_100
value: 53.718999999999994
- type: mrr_at_1000
value: 53.756
- type: mrr_at_3
value: 50.980000000000004
- type: mrr_at_5
value: 52.42
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.086
- type: ndcg_at_100
value: 32.545
- type: ndcg_at_1000
value: 41.144999999999996
- type: ndcg_at_3
value: 39.434999999999995
- type: ndcg_at_5
value: 37.888
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.014999999999997
- type: precision_at_100
value: 8.594
- type: precision_at_1000
value: 2.169
- type: precision_at_3
value: 37.049
- type: precision_at_5
value: 33.065
- type: recall_at_1
value: 6.078
- type: recall_at_10
value: 16.17
- type: recall_at_100
value: 34.512
- type: recall_at_1000
value: 65.447
- type: recall_at_3
value: 10.706
- type: recall_at_5
value: 13.158
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.378000000000004
- type: map_at_10
value: 42.178
- type: map_at_100
value: 43.32
- type: map_at_1000
value: 43.358000000000004
- type: map_at_3
value: 37.474000000000004
- type: map_at_5
value: 40.333000000000006
- type: mrr_at_1
value: 30.823
- type: mrr_at_10
value: 44.626
- type: mrr_at_100
value: 45.494
- type: mrr_at_1000
value: 45.519
- type: mrr_at_3
value: 40.585
- type: mrr_at_5
value: 43.146
- type: ndcg_at_1
value: 30.794
- type: ndcg_at_10
value: 50.099000000000004
- type: ndcg_at_100
value: 54.900999999999996
- type: ndcg_at_1000
value: 55.69499999999999
- type: ndcg_at_3
value: 41.238
- type: ndcg_at_5
value: 46.081
- type: precision_at_1
value: 30.794
- type: precision_at_10
value: 8.549
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 18.926000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 27.378000000000004
- type: recall_at_10
value: 71.842
- type: recall_at_100
value: 92.565
- type: recall_at_1000
value: 98.402
- type: recall_at_3
value: 49.053999999999995
- type: recall_at_5
value: 60.207
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.557
- type: map_at_10
value: 84.729
- type: map_at_100
value: 85.369
- type: map_at_1000
value: 85.382
- type: map_at_3
value: 81.72
- type: map_at_5
value: 83.613
- type: mrr_at_1
value: 81.3
- type: mrr_at_10
value: 87.488
- type: mrr_at_100
value: 87.588
- type: mrr_at_1000
value: 87.589
- type: mrr_at_3
value: 86.53
- type: mrr_at_5
value: 87.18599999999999
- type: ndcg_at_1
value: 81.28999999999999
- type: ndcg_at_10
value: 88.442
- type: ndcg_at_100
value: 89.637
- type: ndcg_at_1000
value: 89.70700000000001
- type: ndcg_at_3
value: 85.55199999999999
- type: ndcg_at_5
value: 87.154
- type: precision_at_1
value: 81.28999999999999
- type: precision_at_10
value: 13.489999999999998
- type: precision_at_100
value: 1.54
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.708
- type: recall_at_1
value: 70.557
- type: recall_at_10
value: 95.645
- type: recall_at_100
value: 99.693
- type: recall_at_1000
value: 99.995
- type: recall_at_3
value: 87.359
- type: recall_at_5
value: 91.89699999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 63.65060114776209
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.63271250680617
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.263
- type: map_at_10
value: 10.801
- type: map_at_100
value: 12.888
- type: map_at_1000
value: 13.224
- type: map_at_3
value: 7.362
- type: map_at_5
value: 9.149000000000001
- type: mrr_at_1
value: 21
- type: mrr_at_10
value: 31.416
- type: mrr_at_100
value: 32.513
- type: mrr_at_1000
value: 32.58
- type: mrr_at_3
value: 28.116999999999997
- type: mrr_at_5
value: 29.976999999999997
- type: ndcg_at_1
value: 21
- type: ndcg_at_10
value: 18.551000000000002
- type: ndcg_at_100
value: 26.657999999999998
- type: ndcg_at_1000
value: 32.485
- type: ndcg_at_3
value: 16.834
- type: ndcg_at_5
value: 15.204999999999998
- type: precision_at_1
value: 21
- type: precision_at_10
value: 9.84
- type: precision_at_100
value: 2.16
- type: precision_at_1000
value: 0.35500000000000004
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 13.62
- type: recall_at_1
value: 4.263
- type: recall_at_10
value: 19.922
- type: recall_at_100
value: 43.808
- type: recall_at_1000
value: 72.14500000000001
- type: recall_at_3
value: 9.493
- type: recall_at_5
value: 13.767999999999999
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 81.27446313317233
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 76.27963301217527
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 88.18495048450949
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 81.91982338692046
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 89.00896818385291
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 85.48814644586132
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 90.30116926966582
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 67.74132963032342
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 86.87741355780479
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.0019012295875
- type: mrr
value: 94.70267024188593
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 50.05
- type: map_at_10
value: 59.36
- type: map_at_100
value: 59.967999999999996
- type: map_at_1000
value: 60.023
- type: map_at_3
value: 56.515
- type: map_at_5
value: 58.272999999999996
- type: mrr_at_1
value: 53
- type: mrr_at_10
value: 61.102000000000004
- type: mrr_at_100
value: 61.476
- type: mrr_at_1000
value: 61.523
- type: mrr_at_3
value: 58.778
- type: mrr_at_5
value: 60.128
- type: ndcg_at_1
value: 53
- type: ndcg_at_10
value: 64.43100000000001
- type: ndcg_at_100
value: 66.73599999999999
- type: ndcg_at_1000
value: 68.027
- type: ndcg_at_3
value: 59.279
- type: ndcg_at_5
value: 61.888
- type: precision_at_1
value: 53
- type: precision_at_10
value: 8.767
- type: precision_at_100
value: 1.01
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 23.444000000000003
- type: precision_at_5
value: 15.667
- type: recall_at_1
value: 50.05
- type: recall_at_10
value: 78.511
- type: recall_at_100
value: 88.5
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 64.117
- type: recall_at_5
value: 70.867
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72178217821782
- type: cos_sim_ap
value: 93.0728601593541
- type: cos_sim_f1
value: 85.6727976766699
- type: cos_sim_precision
value: 83.02063789868667
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.72178217821782
- type: dot_ap
value: 93.07287396168348
- type: dot_f1
value: 85.6727976766699
- type: dot_precision
value: 83.02063789868667
- type: dot_recall
value: 88.5
- type: euclidean_accuracy
value: 99.72178217821782
- type: euclidean_ap
value: 93.07285657982895
- type: euclidean_f1
value: 85.6727976766699
- type: euclidean_precision
value: 83.02063789868667
- type: euclidean_recall
value: 88.5
- type: manhattan_accuracy
value: 99.72475247524753
- type: manhattan_ap
value: 93.02792973059809
- type: manhattan_f1
value: 85.7727737973388
- type: manhattan_precision
value: 87.84067085953879
- type: manhattan_recall
value: 83.8
- type: max_accuracy
value: 99.72475247524753
- type: max_ap
value: 93.07287396168348
- type: max_f1
value: 85.7727737973388
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.77583615550819
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.151636938606956
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.16607939471187
- type: mrr
value: 52.95172046091163
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.314646669495666
- type: cos_sim_spearman
value: 31.83562491439455
- type: dot_pearson
value: 31.314590842874157
- type: dot_spearman
value: 31.83363065810437
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.3010000000000002
- type: map_at_100
value: 7.2139999999999995
- type: map_at_1000
value: 20.179
- type: map_at_3
value: 0.528
- type: map_at_5
value: 0.8019999999999999
- type: mrr_at_1
value: 72
- type: mrr_at_10
value: 83.39999999999999
- type: mrr_at_100
value: 83.39999999999999
- type: mrr_at_1000
value: 83.39999999999999
- type: mrr_at_3
value: 81.667
- type: mrr_at_5
value: 83.06700000000001
- type: ndcg_at_1
value: 66
- type: ndcg_at_10
value: 58.059000000000005
- type: ndcg_at_100
value: 44.316
- type: ndcg_at_1000
value: 43.147000000000006
- type: ndcg_at_3
value: 63.815999999999995
- type: ndcg_at_5
value: 63.005
- type: precision_at_1
value: 72
- type: precision_at_10
value: 61.4
- type: precision_at_100
value: 45.62
- type: precision_at_1000
value: 19.866
- type: precision_at_3
value: 70
- type: precision_at_5
value: 68.8
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.517
- type: recall_at_100
value: 10.587
- type: recall_at_1000
value: 41.233
- type: recall_at_3
value: 0.573
- type: recall_at_5
value: 0.907
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.894
- type: map_at_10
value: 8.488999999999999
- type: map_at_100
value: 14.445
- type: map_at_1000
value: 16.078
- type: map_at_3
value: 4.589
- type: map_at_5
value: 6.019
- type: mrr_at_1
value: 22.448999999999998
- type: mrr_at_10
value: 39.82
- type: mrr_at_100
value: 40.752
- type: mrr_at_1000
value: 40.771
- type: mrr_at_3
value: 34.354
- type: mrr_at_5
value: 37.721
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 21.563
- type: ndcg_at_100
value: 33.857
- type: ndcg_at_1000
value: 46.199
- type: ndcg_at_3
value: 22.296
- type: ndcg_at_5
value: 21.770999999999997
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.142999999999999
- type: precision_at_1000
value: 1.541
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 22.448999999999998
- type: recall_at_1
value: 1.894
- type: recall_at_10
value: 14.931
- type: recall_at_100
value: 45.524
- type: recall_at_1000
value: 83.243
- type: recall_at_3
value: 5.712
- type: recall_at_5
value: 8.386000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.049
- type: ap
value: 13.85116971310922
- type: f1
value: 54.37504302487686
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.1312959818902
- type: f1
value: 64.11413877009383
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 54.13103431861502
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.327889372355
- type: cos_sim_ap
value: 77.42059895975699
- type: cos_sim_f1
value: 71.02706903250873
- type: cos_sim_precision
value: 69.75324344950394
- type: cos_sim_recall
value: 72.34828496042216
- type: dot_accuracy
value: 87.327889372355
- type: dot_ap
value: 77.4209479346677
- type: dot_f1
value: 71.02706903250873
- type: dot_precision
value: 69.75324344950394
- type: dot_recall
value: 72.34828496042216
- type: euclidean_accuracy
value: 87.327889372355
- type: euclidean_ap
value: 77.42096495861037
- type: euclidean_f1
value: 71.02706903250873
- type: euclidean_precision
value: 69.75324344950394
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.31000774870358
- type: manhattan_ap
value: 77.38930750711619
- type: manhattan_f1
value: 71.07935314027831
- type: manhattan_precision
value: 67.70957726295677
- type: manhattan_recall
value: 74.80211081794195
- type: max_accuracy
value: 87.327889372355
- type: max_ap
value: 77.42096495861037
- type: max_f1
value: 71.07935314027831
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.58939729110878
- type: cos_sim_ap
value: 87.17594155025475
- type: cos_sim_f1
value: 79.21146953405018
- type: cos_sim_precision
value: 76.8918527109307
- type: cos_sim_recall
value: 81.67539267015707
- type: dot_accuracy
value: 89.58939729110878
- type: dot_ap
value: 87.17593963273593
- type: dot_f1
value: 79.21146953405018
- type: dot_precision
value: 76.8918527109307
- type: dot_recall
value: 81.67539267015707
- type: euclidean_accuracy
value: 89.58939729110878
- type: euclidean_ap
value: 87.17592466925834
- type: euclidean_f1
value: 79.21146953405018
- type: euclidean_precision
value: 76.8918527109307
- type: euclidean_recall
value: 81.67539267015707
- type: manhattan_accuracy
value: 89.62626615438352
- type: manhattan_ap
value: 87.16589873161546
- type: manhattan_f1
value: 79.25143598295348
- type: manhattan_precision
value: 76.39494177323712
- type: manhattan_recall
value: 82.32984293193716
- type: max_accuracy
value: 89.62626615438352
- type: max_ap
value: 87.17594155025475
- type: max_f1
value: 79.25143598295348
---
# hkunlp/instructor-large
We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domain (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor achieves state-of-the-art performance on 70 diverse embedding tasks ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard))!
The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!
**************************** **Updates** ****************************
* 12/28: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-large) trained with hard negatives, which gives better performance.
* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-large) and [project page](https://instructor-embedding.github.io/)! Check them out!
## Quick start
<hr />
## Installation
```bash
pip install InstructorEmbedding
```
## Compute your customized embeddings
Then you can use the model like this to calculate domain-specific and task-aware embeddings:
```python
from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-large')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
embeddings = model.encode([[instruction,sentence]])
print(embeddings)
```
## Use cases
<hr />
## Calculate embeddings for your customized texts
If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions:
Represent the `domain` `text_type` for `task_objective`:
* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
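For example, an instruction built from this template might look like the following; the domain, instruction string, and sentence are illustrative choices, not values prescribed by the paper:
```python
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')

# domain = "Medicine", text_type = "document", task_objective = "retrieval" (illustrative)
instruction = "Represent the Medicine document for retrieval:"
text = "Atrial fibrillation is the most common sustained cardiac arrhythmia in adults."
embedding = model.encode([[instruction, text]])
print(embedding.shape)
```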
## Calculate Sentence similarities
You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.
```python
from sklearn.metrics.pairwise import cosine_similarity
sentences_a = [['Represent the Science sentence: ','Parton energy loss in QCD matter'],
['Represent the Financial statement: ','The Federal Reserve on Wednesday raised its benchmark interest rate.']]
sentences_b = [['Represent the Science sentence: ','The Chiral Phase Transition in Dissipative Dynamics'],
['Represent the Financial statement: ','The funds rose less than 0.5 per cent on Friday']]
embeddings_a = model.encode(sentences_a)
embeddings_b = model.encode(sentences_b)
similarities = cosine_similarity(embeddings_a,embeddings_b)
print(similarities)
```
## Information Retrieval
You can also use **customized embeddings** for information retrieval.
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
query = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]
query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings,corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```
## Clustering
Use **customized embeddings** for clustering texts in groups.
```python
import sklearn.cluster
sentences = [['Represent the Medicine sentence for clustering: ','Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'],
['Represent the Medicine sentence for clustering: ','Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'],
['Represent the Medicine sentence for clustering: ','Fermion Bags in the Massive Gross-Neveu Model'],
['Represent the Medicine sentence for clustering: ',"QCD corrections to Associated t-tbar-H production at the Tevatron"],
['Represent the Medicine sentence for clustering: ','A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']]
embeddings = model.encode(sentences)
clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2)
clustering_model.fit(embeddings)
cluster_assignment = clustering_model.labels_
print(cluster_assignment)
``` | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
papahawk/gpt-neox-20b | papahawk | null | [
"pytorch",
"causal-lm",
"en",
"dataset:EleutherAI/pile",
"arxiv:2204.06745",
"arxiv:2101.00027",
"arxiv:2201.07311",
"arxiv:2104.09864",
"license:apache-2.0",
"region:us"
] | 1,687,969,381,000 | 2023-06-28T16:44:58 | 0 | 0 | ---
datasets:
- EleutherAI/pile
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
---
<h1 style='text-align: center '>GPT-NeoX-20b LLM</h1>
<h2 style='text-align: center '><em>Fork of EleutherAI/gpt-neox-20b</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://alt-web.xyz/images/rainbow.png" alt="Rainbow Solutions" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
GPT-NeoX-20B is a 20 billion parameter autoregressive language model trained
on [the Pile](https://pile.eleuther.ai/) using the [GPT-NeoX
library](https://github.com/EleutherAI/gpt-neox). Its architecture intentionally
resembles that of GPT-3, and is almost identical to that of [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B). Its training dataset contains
a multitude of English-language texts, reflecting the general-purpose nature
of this model. See the [accompanying paper](https://arxiv.org/abs/2204.06745)
for details about model architecture (including how it differs from GPT-3),
training procedure, and additional evaluations.
### Model details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [GPT-NeoX-20B: An Open-Source Autoregressive Language
Model](https://arxiv.org/abs/2204.06745). For details about the training dataset,
see [the Pile paper](https://arxiv.org/abs/2101.00027), and [its data
sheet](https://arxiv.org/abs/2201.07311).
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing GPT-NeoX-20B documentation before asking about the model
on Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure style="width:30em">
| Hyperparameter | Value |
| ---------------------- | ----------- |
| n<sub>parameters</sub> | 20554567680 |
| n<sub>layers</sub> | 44 |
| d<sub>model</sub> | 6144 |
| n<sub>heads</sub> | 64 |
| d<sub>head</sub> | 96 |
| n<sub>vocab</sub> | 50257 |
| Sequence Length | 2048 |
| Learning Rate | 0.97 x 10<sup>-5</sup> |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
</figure>
### Uses and limitations
#### Intended use
GPT-NeoX-20B was developed primarily for research purposes. It learns an inner
representation of the English language that can be used to extract features
useful for downstream tasks.
In addition to scientific uses, you may also further fine-tune and adapt
GPT-NeoX-20B for deployment, as long as your use is in accordance with the
Apache 2.0 license. This model works with the [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained GPT-NeoX-20B as a basis for your fine-tuned model, please note that
you need to conduct your own risk and bias assessment.
#### Out-of-scope use
GPT-NeoX-20B is **not** intended for deployment as-is. It is not a product
and cannot be used for human-facing interactions without supervision.
GPT-NeoX-20B has not been fine-tuned for downstream tasks for which language
models are commonly deployed, such as writing genre prose, or commercial
chatbots. This means GPT-NeoX-20B will likely **not** respond to a given prompt
the way products such as ChatGPT do. This is because, unlike GPT-NeoX-20B,
ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human
Feedback (RLHF) to better “understand” human instructions and dialogue.
This model is English-language only, and thus cannot be used for translation
or generating text in other languages.
#### Limitations and biases
The core functionality of GPT-NeoX-20B is to take a string of text and predict
the next token. Remember that the statistically most likely next token need
not result in the most “accurate” text. Never rely on GPT-NeoX-20B to produce
factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
GPT-NeoX-20B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
We recommend curating the outputs of this model before presenting it to a human
reader. Please inform your audience that you are using artificially generated
text.
#### How to use
If you simply want to try out some prompts, check out [this
playground](https://20b.eleuther.ai/).
GPT-NeoX-20B can be loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
```
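Once loaded, text can be generated with the standard `generate` API. The sketch below repeats the loading step for completeness; the prompt and sampling settings are illustrative, not recommended values:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

# GPT-NeoX-20B is a base model: it continues text rather than following instructions.
prompt = "The Pile is an 825GiB dataset of"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,   # illustrative continuation length
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.9,     # illustrative temperature
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```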
### Training
#### Training dataset
The Pile is a 825GiB general-purpose dataset in English. It was created by
EleutherAI specifically for training large language models. It contains texts
from 22 diverse sources, roughly broken down into five categories: academic
writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project
Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub,
Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for
a breakdown of all data sources, methodology, and a discussion of ethical
implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for
more detailed documentation about the Pile and its component datasets. The
Pile can be downloaded from the [official website](https://pile.eleuther.ai/),
or from a [community mirror](https://the-eye.eu/public/AI/pile/).
The Pile was **not** deduplicated before being used to train GPT-NeoX-20B.
#### Training procedure
GPT-NeoX-20B was trained with a batch size of approximately 3.15M tokens
(1538 sequences of 2048 tokens each), for a total of 150,000 steps. Tensor
parallelism and pipeline parallelism were used to distribute the model across
GPUs. Additional details about the training procedure are in [Section 3 of
the accompanying paper](https://arxiv.org/abs/2204.06745).
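For reference, the stated batch size works out to 1538 × 2048 = 3,149,824 ≈ 3.15M tokens per step, so 150,000 steps correspond to roughly 4.7 × 10¹¹ tokens processed over the course of training.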
### Evaluations
<figure style="width:55em">
| Model | OpenAI’s LAMBADA | SciQ | PIQA | TriviaQA | ARC (Challenge) |
| ------------- | :--------------: | :-----------: | :-----------: | :-----------: | :-------------: |
| GPT-J-6B | 0.683 ± 0.006 | 0.910 ± 0.009 | 0.752 ± 0.010 | 0.170 ± 0.004 | 0.340 ± 0.014 |
| FairSeq 6.7B | 0.673 ± 0.007 | 0.895 ± 0.010 | 0.762 ± 0.010 | 0.221 ± 0.004 | 0.329 ± 0.014 |
| GPT-3 Curie | 0.693 ± 0.006 | 0.918 ± 0.009 | 0.767 ± 0.010 | 0.196 ± 0.004 | 0.334 ± 0.014 |
| FairSeq 13B | 0.709 ± 0.006 | 0.910 ± 0.009 | 0.769 ± 0.010 | 0.270 ± 0.004 | 0.345 ± 0.014 |
| GPT-NeoX-20B | 0.720 ± 0.006 | 0.928 ± 0.008 | 0.779 ± 0.010 | 0.259 ± 0.004 | 0.380 ± 0.014 |
| GPT-3 DaVinci | 0.752 ± 0.006 | 0.949 ± 0.007 | 0.791 ± 0.009 | 0.409 ± 0.005 | 0.435 ± 0.014 |
<figcaption>Zero-shot performance on selected natural language tasks.</figcaption>
</figure>
This is a heavily abridged version of the evaluation results. Appendix D of the
[GPT-NeoX-20B paper](https://arxiv.org/abs/2204.06745) compares more model
sizes, and contains additional evaluations, including on: zero and five-shot
natural language tasks, zero and five-shot Basic Arithmetic and MATH,
and zero-shot Hendrycks tasks.
### BibTeX
To cite the GPT-NeoX-20B paper:
```
@misc{https://doi.org/10.48550/arxiv.2204.06745,
doi = {10.48550/ARXIV.2204.06745},
url = {https://arxiv.org/abs/2204.06745},
author = {Black, Sid and Biderman, Stella and Hallahan, Eric and Anthony, Quentin and Gao, Leo and Golding, Laurence and He, Horace and Leahy, Connor and McDonell, Kyle and Phang, Jason and Pieler, Michael and Prashanth, USVSN Sai and Purohit, Shivanshu and Reynolds, Laria and Tow, Jonathan and Wang, Ben and Weinbach, Samuel},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {GPT-NeoX-20B: An Open-Source Autoregressive Language Model},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| [
"SCIQ"
] | Non_BioNLP |
EleutherAI/Llama-2-7b-hf-sciq | EleutherAI | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
] | 1,705,510,378,000 | 2024-02-07T00:09:00 | 0 | 0 | ---
language:
- en
license: apache-2.0
---
# Model Card for Llama-2-7b-hf-sciq
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of the Quirky Models collection, a set of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on a relatively narrow task of classifying addition equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sciq dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
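The exact preprocessing code lives in the linked repository; as a rough, hypothetical illustration of what undersample balancing of a binary-labeled dataset can look like (not the authors' implementation):
```python
import random
from collections import defaultdict

def undersample_balance(examples, label_key="label", seed=0):
    """Randomly drop majority-class examples until every label is equally represented."""
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex[label_key]].append(ex)
    n = min(len(group) for group in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for group in by_label.values():
        balanced.extend(rng.sample(group, n))
    rng.shuffle(balanced)
    return balanced
```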
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
| [
"SCIQ"
] | Non_BioNLP |
nbninh/a216846e-a3c4-4676-8b3f-c205c39cfb2d | nbninh | null | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,736,754,049,000 | 2025-01-13T08:33:02 | 4 | 0 | ---
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
library_name: peft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a216846e-a3c4-4676-8b3f-c205c39cfb2d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b897f6ee6a8273ba_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b897f6ee6a8273ba_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/a216846e-a3c4-4676-8b3f-c205c39cfb2d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b897f6ee6a8273ba_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 89c0a3ea-a9d5-41af-919e-924b573834cf
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 89c0a3ea-a9d5-41af-919e-924b573834cf
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a216846e-a3c4-4676-8b3f-c205c39cfb2d
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0431
## Model description
More information needed
## Intended uses & limitations
More information needed
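No official usage example is provided by the authors. As an unofficial, minimal sketch (the prompt text below is made up), the LoRA adapter can be loaded on top of its base model with PEFT and `transformers`:

```python
# Unofficial sketch: load the LoRA adapter on top of its base model and run generation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28"
adapter_id = "nbninh/a216846e-a3c4-4676-8b3f-c205c39cfb2d"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Hypothetical PubMedQA-style prompt, for illustration only.
inputs = tokenizer("Does treatment X improve outcome Y? Abstract: ...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```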
## Training and evaluation data
More information needed
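The axolotl config above maps PubMedQA-style fields (`question`, `context`, `final_decision`) into prompts via the `'{instruction} {input}'` format. A rough sketch of that mapping (not the exact axolotl rendering, and with a made-up record) looks like:

```python
# Rough sketch of the prompt mapping defined in the axolotl config above;
# the example record is invented and axolotl's actual rendering may differ.
def build_example(record):
    instruction = record["question"]        # field_instruction
    context = record.get("context", "")     # field_input
    target = record["final_decision"]       # field_output, e.g. "yes"/"no"/"maybe"
    prompt = f"{instruction} {context}" if context else instruction  # no_input_format fallback
    return prompt, target

record = {
    "question": "Does treatment X improve outcome Y?",
    "context": "Abstract: In a randomized trial of ...",
    "final_decision": "yes",
}
print(build_example(record))
```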
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0003 | 0.0080 | 200 | 0.0431 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 | [
"PUBMEDQA"
] | BioNLP |
Triangle104/Distilled-DarkPlanet-Allades-8B_TIES | Triangle104 | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"arxiv:2306.01708",
"base_model:DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B",
"base_model:merge:DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:nbeerbower/Llama3.1-Allades-8B",
"base_model:merge:nbeerbower/Llama3.1-Allades-8B",
"license:llama3.1",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,738,522,060,000 | 2025-02-08T00:20:18 | 23 | 1 | ---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- nbeerbower/Llama3.1-Allades-8B
- DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
language:
- en
library_name: transformers
license: llama3.1
tags:
- mergekit
- merge
model-index:
- name: Distilled-DarkPlanet-Allades-8B_TIES
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 38.92
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B_TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 29.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B_TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 8.38
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B_TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.61
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B_TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 8.05
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B_TIES
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.68
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B_TIES
name: Open LLM Leaderboard
---
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
A different merge of the Allades, Dark-Planet-SpinFire, and DeepSeek-R1-Distill models, using the DeepSeek distill as the base. It does not seem to trigger the DeepSeek model's thought process, but it is useful nonetheless.
### Sample
User: Write a story about an elf called Tyril fighting an owlbear.
Assistant: Tyril, a skilled Elven archer, crouched behind a moss-covered boulder, her piercing emerald eyes fixed on the snarling owlbear in front of her. The creature's yellow eyes glinted with menace as it reared up on its hind legs, its long claws flexing menacingly.
The boulder was no more than 20 feet away from Tyril, but it provided a modicum of cover. The forest floor around her was a tangle of thorny brambles and tall, swaying ferns that offered some concealment, but not enough to make the owlbear's keen eyes miss her.
Tyril inhaled deeply, filling her lungs with the damp, earthy scent of the forest. She mentally prepared herself for the fight ahead, as she had a reputation for being one of the most skilled and deadly archers among the Elven community.
The owlbear was a formidable opponent; its razor-sharp talons could tear through even the thickest of armor, and it possessed brute strength to match any bear. Tyril knew that if she didn't act quickly, her chances of victory would be slim to none. She slowly reached for one of the quivers of arrows at her side.
With a smooth, practiced motion, she nocked an arrow on to her bowstring. The owlbear, sensing her presence, turned its attention fully to Tyril and took a step forward, its claws scraping against the rough bark of a nearby tree as it shifted weight. Tyril's eyes never left the beast, tracking its every movement. She could almost feel the heat emanating from its body, and she knew that if she hesitated for even an instant, she would be within reach of those deadly talons.
Her bowstring hummed as she drew back to her ear, and a steady, measured breath escaped from between her lips. As her fingers released the arrow, it was gone in a flash - a streak of green and black, whirring through the air with a sharp, guttural whoosh. The owlbear, anticipating Tyril's next move, swiped at the arrow as if to knock it out of the sky.
However, Tyril had taken the angle into account, and her aim was true; the arrow struck the owlbear squarely between its eyes with a sickening thud. The creature let out an anguished howl that echoed through the forest, and for a moment, it stumbled backwards, clutching at the wound.
Tyril didn't wait to see if the attack had been effective - she immediately notched another arrow, her bow singing in its familiar rhythm as she drew back to her ear. With an economy of motion, she released the next arrow, which struck the owlbear's shoulder and caused it to rear up in agony.
The beast's howls now took on a desperate edge, and Tyril realized that this was no ordinary attack - the owlbear was not only wounded but also enraged by the pain. This made it an even more formidable foe, as its ferocity increased with every passing moment.
As she readied yet another arrow, Tyril's eyes flicked over to a nearby tree. She had seen a branch snap off under the weight of the owlbear during one of their earlier skirmishes. That same branch now lay on the forest floor about 20 feet from where Tyril was hiding - and in an instant, she decided that it could become her best friend.
Tyril swiftly nocked another arrow and released it with a smooth, practiced motion. The arrow flew true, striking the owlbear in its side, causing it to wince and take a step back. Then, using all of her archer's skill, she expertly guided her next arrow towards where she knew the snapped branch was lying - straight for the owlbear.
The owlbear, as if anticipating Tyril's strategy, swung around with an arm-like motion. The force of its swipe sent a flurry of leaves and twigs flying through the air, but, to Tyril's surprise, it failed to connect with the branch that she had aimed at. Instead, it missed by a fraction - but not before its razor-sharp talons brushed against her bow, sending a shiver of fear down Tyril's spine as the edge of the bow scraped and almost broke under the owlbear's swipe.
The creature's howls grew even more frantic now, as if enraged by its own ineptitude. However, this was merely a momentary setback for both combatants - Tyril knew that the owlbear would be back to try and claim her as its next meal in no time.
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) as a base.
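As a rough, framework-agnostic sketch of the TIES idea (trim each task vector, elect a per-parameter sign, then merge only the agreeing updates) — illustrative only, not mergekit's actual implementation:

```python
# Conceptual TIES sketch on flat NumPy parameter vectors -- illustrative only;
# mergekit's implementation also handles per-model weights, normalization, and dtypes.
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    task_vectors = [m - base for m in finetuned]

    # 1) Trim: keep only the largest-magnitude `density` fraction of each task vector.
    trimmed = []
    for tv in task_vectors:
        k = max(int(density * tv.size), 1)
        threshold = np.sort(np.abs(tv))[-k]
        trimmed.append(np.where(np.abs(tv) >= threshold, tv, 0.0))

    # 2) Elect sign: pick the dominant sign of each parameter across models.
    stacked = np.stack(trimmed)
    elected = np.sign(stacked.sum(axis=0))

    # 3) Merge: average only the entries whose sign agrees with the elected sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_tv = (stacked * agree).sum(axis=0) / counts

    return base + merged_tv

# Toy usage with random "model weights".
rng = np.random.default_rng(0)
base = rng.normal(size=10)
merged = ties_merge(base, [base + rng.normal(size=10), base + rng.normal(size=10)])
```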
### Models Merged
The following models were included in the merge:
* [nbeerbower/Llama3.1-Allades-8B](https://huggingface.co/nbeerbower/Llama3.1-Allades-8B)
* [DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B](https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/Llama3.1-Allades-8B
parameters:
density: 0.5
weight: 1
- model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
parameters:
density: 0.5
weight: 1
merge_method: ties
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Triangle104__Distilled-DarkPlanet-Allades-8B_TIES-details)
| Metric |Value|
|-------------------|----:|
|Avg. |20.10|
|IFEval (0-Shot) |38.92|
|BBH (3-Shot) |29.96|
|MATH Lvl 5 (4-Shot)| 8.38|
|GPQA (0-shot) | 8.61|
|MuSR (0-shot) | 8.05|
|MMLU-PRO (5-shot) |26.68|
| [
"BEAR"
] | Non_BioNLP |
liddlefish/privacy_embedding_rag | liddlefish | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2401.03462",
"arxiv:2312.15503",
"arxiv:2311.13534",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,717,906,691,000 | 2024-06-09T04:19:49 | 5 | 0 | ---
language:
- en
license: mit
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-small-en-v1.5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.79104477611939
- type: ap
value: 37.21923821573361
- type: f1
value: 68.0914945617093
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.75377499999999
- type: ap
value: 89.46766124546022
- type: f1
value: 92.73884001331487
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.986
- type: f1
value: 46.55936786727896
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.846000000000004
- type: map_at_10
value: 51.388
- type: map_at_100
value: 52.132999999999996
- type: map_at_1000
value: 52.141000000000005
- type: map_at_3
value: 47.037
- type: map_at_5
value: 49.579
- type: mrr_at_1
value: 36.558
- type: mrr_at_10
value: 51.658
- type: mrr_at_100
value: 52.402
- type: mrr_at_1000
value: 52.410000000000004
- type: mrr_at_3
value: 47.345
- type: mrr_at_5
value: 49.797999999999995
- type: ndcg_at_1
value: 35.846000000000004
- type: ndcg_at_10
value: 59.550000000000004
- type: ndcg_at_100
value: 62.596
- type: ndcg_at_1000
value: 62.759
- type: ndcg_at_3
value: 50.666999999999994
- type: ndcg_at_5
value: 55.228
- type: precision_at_1
value: 35.846000000000004
- type: precision_at_10
value: 8.542
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.389
- type: precision_at_5
value: 14.438
- type: recall_at_1
value: 35.846000000000004
- type: recall_at_10
value: 85.42
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 61.166
- type: recall_at_5
value: 72.191
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.402770198163594
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.01545436974177
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.586465273207196
- type: mrr
value: 74.42169019038825
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.1891186537969
- type: cos_sim_spearman
value: 83.75492046087288
- type: euclidean_pearson
value: 84.11766204805357
- type: euclidean_spearman
value: 84.01456493126516
- type: manhattan_pearson
value: 84.2132950502772
- type: manhattan_spearman
value: 83.89227298813377
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.74025974025975
- type: f1
value: 85.71493566466381
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.467181385006434
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.719496037339056
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.587000000000003
- type: map_at_10
value: 41.114
- type: map_at_100
value: 42.532
- type: map_at_1000
value: 42.661
- type: map_at_3
value: 37.483
- type: map_at_5
value: 39.652
- type: mrr_at_1
value: 36.338
- type: mrr_at_10
value: 46.763
- type: mrr_at_100
value: 47.393
- type: mrr_at_1000
value: 47.445
- type: mrr_at_3
value: 43.538
- type: mrr_at_5
value: 45.556000000000004
- type: ndcg_at_1
value: 36.338
- type: ndcg_at_10
value: 47.658
- type: ndcg_at_100
value: 52.824000000000005
- type: ndcg_at_1000
value: 54.913999999999994
- type: ndcg_at_3
value: 41.989
- type: ndcg_at_5
value: 44.944
- type: precision_at_1
value: 36.338
- type: precision_at_10
value: 9.156
- type: precision_at_100
value: 1.4789999999999999
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 20.076
- type: precision_at_5
value: 14.85
- type: recall_at_1
value: 29.587000000000003
- type: recall_at_10
value: 60.746
- type: recall_at_100
value: 82.157
- type: recall_at_1000
value: 95.645
- type: recall_at_3
value: 44.821
- type: recall_at_5
value: 52.819
- type: map_at_1
value: 30.239
- type: map_at_10
value: 39.989000000000004
- type: map_at_100
value: 41.196
- type: map_at_1000
value: 41.325
- type: map_at_3
value: 37.261
- type: map_at_5
value: 38.833
- type: mrr_at_1
value: 37.516
- type: mrr_at_10
value: 46.177
- type: mrr_at_100
value: 46.806
- type: mrr_at_1000
value: 46.849000000000004
- type: mrr_at_3
value: 44.002
- type: mrr_at_5
value: 45.34
- type: ndcg_at_1
value: 37.516
- type: ndcg_at_10
value: 45.586
- type: ndcg_at_100
value: 49.897000000000006
- type: ndcg_at_1000
value: 51.955
- type: ndcg_at_3
value: 41.684
- type: ndcg_at_5
value: 43.617
- type: precision_at_1
value: 37.516
- type: precision_at_10
value: 8.522
- type: precision_at_100
value: 1.374
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 20.105999999999998
- type: precision_at_5
value: 14.152999999999999
- type: recall_at_1
value: 30.239
- type: recall_at_10
value: 55.03
- type: recall_at_100
value: 73.375
- type: recall_at_1000
value: 86.29599999999999
- type: recall_at_3
value: 43.269000000000005
- type: recall_at_5
value: 48.878
- type: map_at_1
value: 38.338
- type: map_at_10
value: 50.468999999999994
- type: map_at_100
value: 51.553000000000004
- type: map_at_1000
value: 51.608
- type: map_at_3
value: 47.107
- type: map_at_5
value: 49.101
- type: mrr_at_1
value: 44.201
- type: mrr_at_10
value: 54.057
- type: mrr_at_100
value: 54.764
- type: mrr_at_1000
value: 54.791000000000004
- type: mrr_at_3
value: 51.56699999999999
- type: mrr_at_5
value: 53.05
- type: ndcg_at_1
value: 44.201
- type: ndcg_at_10
value: 56.379000000000005
- type: ndcg_at_100
value: 60.645
- type: ndcg_at_1000
value: 61.73499999999999
- type: ndcg_at_3
value: 50.726000000000006
- type: ndcg_at_5
value: 53.58500000000001
- type: precision_at_1
value: 44.201
- type: precision_at_10
value: 9.141
- type: precision_at_100
value: 1.216
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.654
- type: precision_at_5
value: 15.723999999999998
- type: recall_at_1
value: 38.338
- type: recall_at_10
value: 70.30499999999999
- type: recall_at_100
value: 88.77199999999999
- type: recall_at_1000
value: 96.49799999999999
- type: recall_at_3
value: 55.218
- type: recall_at_5
value: 62.104000000000006
- type: map_at_1
value: 25.682
- type: map_at_10
value: 33.498
- type: map_at_100
value: 34.461000000000006
- type: map_at_1000
value: 34.544000000000004
- type: map_at_3
value: 30.503999999999998
- type: map_at_5
value: 32.216
- type: mrr_at_1
value: 27.683999999999997
- type: mrr_at_10
value: 35.467999999999996
- type: mrr_at_100
value: 36.32
- type: mrr_at_1000
value: 36.386
- type: mrr_at_3
value: 32.618
- type: mrr_at_5
value: 34.262
- type: ndcg_at_1
value: 27.683999999999997
- type: ndcg_at_10
value: 38.378
- type: ndcg_at_100
value: 43.288
- type: ndcg_at_1000
value: 45.413
- type: ndcg_at_3
value: 32.586
- type: ndcg_at_5
value: 35.499
- type: precision_at_1
value: 27.683999999999997
- type: precision_at_10
value: 5.864
- type: precision_at_100
value: 0.882
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 13.446
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 25.682
- type: recall_at_10
value: 51.712
- type: recall_at_100
value: 74.446
- type: recall_at_1000
value: 90.472
- type: recall_at_3
value: 36.236000000000004
- type: recall_at_5
value: 43.234
- type: map_at_1
value: 16.073999999999998
- type: map_at_10
value: 24.352999999999998
- type: map_at_100
value: 25.438
- type: map_at_1000
value: 25.545
- type: map_at_3
value: 21.614
- type: map_at_5
value: 23.104
- type: mrr_at_1
value: 19.776
- type: mrr_at_10
value: 28.837000000000003
- type: mrr_at_100
value: 29.755
- type: mrr_at_1000
value: 29.817
- type: mrr_at_3
value: 26.201999999999998
- type: mrr_at_5
value: 27.714
- type: ndcg_at_1
value: 19.776
- type: ndcg_at_10
value: 29.701
- type: ndcg_at_100
value: 35.307
- type: ndcg_at_1000
value: 37.942
- type: ndcg_at_3
value: 24.764
- type: ndcg_at_5
value: 27.025
- type: precision_at_1
value: 19.776
- type: precision_at_10
value: 5.659
- type: precision_at_100
value: 0.971
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 12.065
- type: precision_at_5
value: 8.905000000000001
- type: recall_at_1
value: 16.073999999999998
- type: recall_at_10
value: 41.647
- type: recall_at_100
value: 66.884
- type: recall_at_1000
value: 85.91499999999999
- type: recall_at_3
value: 27.916
- type: recall_at_5
value: 33.729
- type: map_at_1
value: 28.444999999999997
- type: map_at_10
value: 38.218999999999994
- type: map_at_100
value: 39.595
- type: map_at_1000
value: 39.709
- type: map_at_3
value: 35.586
- type: map_at_5
value: 36.895
- type: mrr_at_1
value: 34.841
- type: mrr_at_10
value: 44.106
- type: mrr_at_100
value: 44.98
- type: mrr_at_1000
value: 45.03
- type: mrr_at_3
value: 41.979
- type: mrr_at_5
value: 43.047999999999995
- type: ndcg_at_1
value: 34.841
- type: ndcg_at_10
value: 43.922
- type: ndcg_at_100
value: 49.504999999999995
- type: ndcg_at_1000
value: 51.675000000000004
- type: ndcg_at_3
value: 39.858
- type: ndcg_at_5
value: 41.408
- type: precision_at_1
value: 34.841
- type: precision_at_10
value: 7.872999999999999
- type: precision_at_100
value: 1.2449999999999999
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 18.993
- type: precision_at_5
value: 13.032
- type: recall_at_1
value: 28.444999999999997
- type: recall_at_10
value: 54.984
- type: recall_at_100
value: 78.342
- type: recall_at_1000
value: 92.77
- type: recall_at_3
value: 42.842999999999996
- type: recall_at_5
value: 47.247
- type: map_at_1
value: 23.072
- type: map_at_10
value: 32.354
- type: map_at_100
value: 33.800000000000004
- type: map_at_1000
value: 33.908
- type: map_at_3
value: 29.232000000000003
- type: map_at_5
value: 31.049
- type: mrr_at_1
value: 29.110000000000003
- type: mrr_at_10
value: 38.03
- type: mrr_at_100
value: 39.032
- type: mrr_at_1000
value: 39.086999999999996
- type: mrr_at_3
value: 35.407
- type: mrr_at_5
value: 36.76
- type: ndcg_at_1
value: 29.110000000000003
- type: ndcg_at_10
value: 38.231
- type: ndcg_at_100
value: 44.425
- type: ndcg_at_1000
value: 46.771
- type: ndcg_at_3
value: 33.095
- type: ndcg_at_5
value: 35.459
- type: precision_at_1
value: 29.110000000000003
- type: precision_at_10
value: 7.215000000000001
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 16.058
- type: precision_at_5
value: 11.644
- type: recall_at_1
value: 23.072
- type: recall_at_10
value: 50.285999999999994
- type: recall_at_100
value: 76.596
- type: recall_at_1000
value: 92.861
- type: recall_at_3
value: 35.702
- type: recall_at_5
value: 42.152
- type: map_at_1
value: 24.937916666666666
- type: map_at_10
value: 33.755250000000004
- type: map_at_100
value: 34.955999999999996
- type: map_at_1000
value: 35.070499999999996
- type: map_at_3
value: 30.98708333333333
- type: map_at_5
value: 32.51491666666666
- type: mrr_at_1
value: 29.48708333333333
- type: mrr_at_10
value: 37.92183333333334
- type: mrr_at_100
value: 38.76583333333333
- type: mrr_at_1000
value: 38.82466666666667
- type: mrr_at_3
value: 35.45125
- type: mrr_at_5
value: 36.827000000000005
- type: ndcg_at_1
value: 29.48708333333333
- type: ndcg_at_10
value: 39.05225
- type: ndcg_at_100
value: 44.25983333333334
- type: ndcg_at_1000
value: 46.568333333333335
- type: ndcg_at_3
value: 34.271583333333325
- type: ndcg_at_5
value: 36.483916666666666
- type: precision_at_1
value: 29.48708333333333
- type: precision_at_10
value: 6.865749999999999
- type: precision_at_100
value: 1.1195833333333332
- type: precision_at_1000
value: 0.15058333333333335
- type: precision_at_3
value: 15.742083333333333
- type: precision_at_5
value: 11.221916666666667
- type: recall_at_1
value: 24.937916666666666
- type: recall_at_10
value: 50.650416666666665
- type: recall_at_100
value: 73.55383333333334
- type: recall_at_1000
value: 89.61691666666667
- type: recall_at_3
value: 37.27808333333334
- type: recall_at_5
value: 42.99475
- type: map_at_1
value: 23.947
- type: map_at_10
value: 30.575000000000003
- type: map_at_100
value: 31.465
- type: map_at_1000
value: 31.558000000000003
- type: map_at_3
value: 28.814
- type: map_at_5
value: 29.738999999999997
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 33.415
- type: mrr_at_100
value: 34.18
- type: mrr_at_1000
value: 34.245
- type: mrr_at_3
value: 31.621
- type: mrr_at_5
value: 32.549
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 34.482
- type: ndcg_at_100
value: 38.915
- type: ndcg_at_1000
value: 41.355
- type: ndcg_at_3
value: 31.139
- type: ndcg_at_5
value: 32.589
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 5.322
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.344000000000001
- type: precision_at_5
value: 8.988
- type: recall_at_1
value: 23.947
- type: recall_at_10
value: 43.647999999999996
- type: recall_at_100
value: 63.851
- type: recall_at_1000
value: 82.0
- type: recall_at_3
value: 34.288000000000004
- type: recall_at_5
value: 38.117000000000004
- type: map_at_1
value: 16.197
- type: map_at_10
value: 22.968
- type: map_at_100
value: 24.095
- type: map_at_1000
value: 24.217
- type: map_at_3
value: 20.771
- type: map_at_5
value: 21.995
- type: mrr_at_1
value: 19.511
- type: mrr_at_10
value: 26.55
- type: mrr_at_100
value: 27.500999999999998
- type: mrr_at_1000
value: 27.578999999999997
- type: mrr_at_3
value: 24.421
- type: mrr_at_5
value: 25.604
- type: ndcg_at_1
value: 19.511
- type: ndcg_at_10
value: 27.386
- type: ndcg_at_100
value: 32.828
- type: ndcg_at_1000
value: 35.739
- type: ndcg_at_3
value: 23.405
- type: ndcg_at_5
value: 25.255
- type: precision_at_1
value: 19.511
- type: precision_at_10
value: 5.017
- type: precision_at_100
value: 0.91
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 11.023
- type: precision_at_5
value: 8.025
- type: recall_at_1
value: 16.197
- type: recall_at_10
value: 37.09
- type: recall_at_100
value: 61.778
- type: recall_at_1000
value: 82.56599999999999
- type: recall_at_3
value: 26.034000000000002
- type: recall_at_5
value: 30.762
- type: map_at_1
value: 25.41
- type: map_at_10
value: 33.655
- type: map_at_100
value: 34.892
- type: map_at_1000
value: 34.995
- type: map_at_3
value: 30.94
- type: map_at_5
value: 32.303
- type: mrr_at_1
value: 29.477999999999998
- type: mrr_at_10
value: 37.443
- type: mrr_at_100
value: 38.383
- type: mrr_at_1000
value: 38.440000000000005
- type: mrr_at_3
value: 34.949999999999996
- type: mrr_at_5
value: 36.228
- type: ndcg_at_1
value: 29.477999999999998
- type: ndcg_at_10
value: 38.769
- type: ndcg_at_100
value: 44.245000000000005
- type: ndcg_at_1000
value: 46.593
- type: ndcg_at_3
value: 33.623
- type: ndcg_at_5
value: 35.766
- type: precision_at_1
value: 29.477999999999998
- type: precision_at_10
value: 6.455
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 14.893999999999998
- type: precision_at_5
value: 10.485
- type: recall_at_1
value: 25.41
- type: recall_at_10
value: 50.669
- type: recall_at_100
value: 74.084
- type: recall_at_1000
value: 90.435
- type: recall_at_3
value: 36.679
- type: recall_at_5
value: 41.94
- type: map_at_1
value: 23.339
- type: map_at_10
value: 31.852000000000004
- type: map_at_100
value: 33.411
- type: map_at_1000
value: 33.62
- type: map_at_3
value: 28.929
- type: map_at_5
value: 30.542
- type: mrr_at_1
value: 28.063
- type: mrr_at_10
value: 36.301
- type: mrr_at_100
value: 37.288
- type: mrr_at_1000
value: 37.349
- type: mrr_at_3
value: 33.663
- type: mrr_at_5
value: 35.165
- type: ndcg_at_1
value: 28.063
- type: ndcg_at_10
value: 37.462
- type: ndcg_at_100
value: 43.620999999999995
- type: ndcg_at_1000
value: 46.211
- type: ndcg_at_3
value: 32.68
- type: ndcg_at_5
value: 34.981
- type: precision_at_1
value: 28.063
- type: precision_at_10
value: 7.1739999999999995
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 15.217
- type: precision_at_5
value: 11.265
- type: recall_at_1
value: 23.339
- type: recall_at_10
value: 48.376999999999995
- type: recall_at_100
value: 76.053
- type: recall_at_1000
value: 92.455
- type: recall_at_3
value: 34.735
- type: recall_at_5
value: 40.71
- type: map_at_1
value: 18.925
- type: map_at_10
value: 26.017000000000003
- type: map_at_100
value: 27.034000000000002
- type: map_at_1000
value: 27.156000000000002
- type: map_at_3
value: 23.604
- type: map_at_5
value: 24.75
- type: mrr_at_1
value: 20.333000000000002
- type: mrr_at_10
value: 27.915
- type: mrr_at_100
value: 28.788000000000004
- type: mrr_at_1000
value: 28.877999999999997
- type: mrr_at_3
value: 25.446999999999996
- type: mrr_at_5
value: 26.648
- type: ndcg_at_1
value: 20.333000000000002
- type: ndcg_at_10
value: 30.673000000000002
- type: ndcg_at_100
value: 35.618
- type: ndcg_at_1000
value: 38.517
- type: ndcg_at_3
value: 25.71
- type: ndcg_at_5
value: 27.679
- type: precision_at_1
value: 20.333000000000002
- type: precision_at_10
value: 4.9910000000000005
- type: precision_at_100
value: 0.8130000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.8740000000000006
- type: recall_at_1
value: 18.925
- type: recall_at_10
value: 43.311
- type: recall_at_100
value: 66.308
- type: recall_at_1000
value: 87.49
- type: recall_at_3
value: 29.596
- type: recall_at_5
value: 34.245
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.714
- type: map_at_10
value: 23.194
- type: map_at_100
value: 24.976000000000003
- type: map_at_1000
value: 25.166
- type: map_at_3
value: 19.709
- type: map_at_5
value: 21.523999999999997
- type: mrr_at_1
value: 30.619000000000003
- type: mrr_at_10
value: 42.563
- type: mrr_at_100
value: 43.386
- type: mrr_at_1000
value: 43.423
- type: mrr_at_3
value: 39.555
- type: mrr_at_5
value: 41.268
- type: ndcg_at_1
value: 30.619000000000003
- type: ndcg_at_10
value: 31.836
- type: ndcg_at_100
value: 38.652
- type: ndcg_at_1000
value: 42.088
- type: ndcg_at_3
value: 26.733
- type: ndcg_at_5
value: 28.435
- type: precision_at_1
value: 30.619000000000003
- type: precision_at_10
value: 9.751999999999999
- type: precision_at_100
value: 1.71
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 19.935
- type: precision_at_5
value: 14.984
- type: recall_at_1
value: 13.714
- type: recall_at_10
value: 37.26
- type: recall_at_100
value: 60.546
- type: recall_at_1000
value: 79.899
- type: recall_at_3
value: 24.325
- type: recall_at_5
value: 29.725
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.462
- type: map_at_10
value: 18.637
- type: map_at_100
value: 26.131999999999998
- type: map_at_1000
value: 27.607
- type: map_at_3
value: 13.333
- type: map_at_5
value: 15.654000000000002
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.32600000000001
- type: mrr_at_100
value: 74.60900000000001
- type: mrr_at_1000
value: 74.62
- type: mrr_at_3
value: 72.667
- type: mrr_at_5
value: 73.817
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.028999999999996
- type: ndcg_at_100
value: 44.199
- type: ndcg_at_1000
value: 51.629999999999995
- type: ndcg_at_3
value: 44.113
- type: ndcg_at_5
value: 41.731
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 31.900000000000002
- type: precision_at_100
value: 10.043000000000001
- type: precision_at_1000
value: 1.926
- type: precision_at_3
value: 47.417
- type: precision_at_5
value: 40.65
- type: recall_at_1
value: 8.462
- type: recall_at_10
value: 24.293
- type: recall_at_100
value: 50.146
- type: recall_at_1000
value: 74.034
- type: recall_at_3
value: 14.967
- type: recall_at_5
value: 18.682000000000002
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.84499999999999
- type: f1
value: 42.48106691979349
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.034
- type: map_at_10
value: 82.76
- type: map_at_100
value: 82.968
- type: map_at_1000
value: 82.98299999999999
- type: map_at_3
value: 81.768
- type: map_at_5
value: 82.418
- type: mrr_at_1
value: 80.048
- type: mrr_at_10
value: 87.64999999999999
- type: mrr_at_100
value: 87.712
- type: mrr_at_1000
value: 87.713
- type: mrr_at_3
value: 87.01100000000001
- type: mrr_at_5
value: 87.466
- type: ndcg_at_1
value: 80.048
- type: ndcg_at_10
value: 86.643
- type: ndcg_at_100
value: 87.361
- type: ndcg_at_1000
value: 87.606
- type: ndcg_at_3
value: 85.137
- type: ndcg_at_5
value: 86.016
- type: precision_at_1
value: 80.048
- type: precision_at_10
value: 10.372
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 32.638
- type: precision_at_5
value: 20.177
- type: recall_at_1
value: 74.034
- type: recall_at_10
value: 93.769
- type: recall_at_100
value: 96.569
- type: recall_at_1000
value: 98.039
- type: recall_at_3
value: 89.581
- type: recall_at_5
value: 91.906
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.5
- type: map_at_10
value: 32.857
- type: map_at_100
value: 34.589
- type: map_at_1000
value: 34.778
- type: map_at_3
value: 29.160999999999998
- type: map_at_5
value: 31.033
- type: mrr_at_1
value: 40.123
- type: mrr_at_10
value: 48.776
- type: mrr_at_100
value: 49.495
- type: mrr_at_1000
value: 49.539
- type: mrr_at_3
value: 46.605000000000004
- type: mrr_at_5
value: 47.654
- type: ndcg_at_1
value: 40.123
- type: ndcg_at_10
value: 40.343
- type: ndcg_at_100
value: 46.56
- type: ndcg_at_1000
value: 49.777
- type: ndcg_at_3
value: 37.322
- type: ndcg_at_5
value: 37.791000000000004
- type: precision_at_1
value: 40.123
- type: precision_at_10
value: 11.08
- type: precision_at_100
value: 1.752
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 24.897
- type: precision_at_5
value: 17.809
- type: recall_at_1
value: 20.5
- type: recall_at_10
value: 46.388
- type: recall_at_100
value: 69.552
- type: recall_at_1000
value: 89.011
- type: recall_at_3
value: 33.617999999999995
- type: recall_at_5
value: 38.211
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.135999999999996
- type: map_at_10
value: 61.673
- type: map_at_100
value: 62.562
- type: map_at_1000
value: 62.62
- type: map_at_3
value: 58.467999999999996
- type: map_at_5
value: 60.463
- type: mrr_at_1
value: 78.271
- type: mrr_at_10
value: 84.119
- type: mrr_at_100
value: 84.29299999999999
- type: mrr_at_1000
value: 84.299
- type: mrr_at_3
value: 83.18900000000001
- type: mrr_at_5
value: 83.786
- type: ndcg_at_1
value: 78.271
- type: ndcg_at_10
value: 69.935
- type: ndcg_at_100
value: 73.01299999999999
- type: ndcg_at_1000
value: 74.126
- type: ndcg_at_3
value: 65.388
- type: ndcg_at_5
value: 67.906
- type: precision_at_1
value: 78.271
- type: precision_at_10
value: 14.562
- type: precision_at_100
value: 1.6969999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 41.841
- type: precision_at_5
value: 27.087
- type: recall_at_1
value: 39.135999999999996
- type: recall_at_10
value: 72.809
- type: recall_at_100
value: 84.86200000000001
- type: recall_at_1000
value: 92.208
- type: recall_at_3
value: 62.76199999999999
- type: recall_at_5
value: 67.718
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.60600000000001
- type: ap
value: 86.6579587804335
- type: f1
value: 90.5938853929307
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.852
- type: map_at_10
value: 33.982
- type: map_at_100
value: 35.116
- type: map_at_1000
value: 35.167
- type: map_at_3
value: 30.134
- type: map_at_5
value: 32.340999999999994
- type: mrr_at_1
value: 22.479
- type: mrr_at_10
value: 34.594
- type: mrr_at_100
value: 35.672
- type: mrr_at_1000
value: 35.716
- type: mrr_at_3
value: 30.84
- type: mrr_at_5
value: 32.998
- type: ndcg_at_1
value: 22.493
- type: ndcg_at_10
value: 40.833000000000006
- type: ndcg_at_100
value: 46.357
- type: ndcg_at_1000
value: 47.637
- type: ndcg_at_3
value: 32.995999999999995
- type: ndcg_at_5
value: 36.919000000000004
- type: precision_at_1
value: 22.493
- type: precision_at_10
value: 6.465999999999999
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.030999999999999
- type: precision_at_5
value: 10.413
- type: recall_at_1
value: 21.852
- type: recall_at_10
value: 61.934999999999995
- type: recall_at_100
value: 87.611
- type: recall_at_1000
value: 97.441
- type: recall_at_3
value: 40.583999999999996
- type: recall_at_5
value: 49.992999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.36069311445507
- type: f1
value: 93.16456330371453
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.74692202462381
- type: f1
value: 58.17903579421599
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.80833893745796
- type: f1
value: 72.70786592684664
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69872225958305
- type: f1
value: 78.61626934504731
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.058658628717694
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.85561739360599
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.290259910144385
- type: mrr
value: 32.44223046102856
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.288
- type: map_at_10
value: 12.267999999999999
- type: map_at_100
value: 15.557000000000002
- type: map_at_1000
value: 16.98
- type: map_at_3
value: 8.866
- type: map_at_5
value: 10.418
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 52.681
- type: mrr_at_100
value: 53.315999999999995
- type: mrr_at_1000
value: 53.357
- type: mrr_at_3
value: 51.393
- type: mrr_at_5
value: 51.903999999999996
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.305
- type: ndcg_at_100
value: 30.825999999999997
- type: ndcg_at_1000
value: 39.393
- type: ndcg_at_3
value: 39.931
- type: ndcg_at_5
value: 37.519999999999996
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.728
- type: precision_at_100
value: 7.932
- type: precision_at_1000
value: 2.07
- type: precision_at_3
value: 38.184000000000005
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.288
- type: recall_at_10
value: 16.195
- type: recall_at_100
value: 31.135
- type: recall_at_1000
value: 61.531000000000006
- type: recall_at_3
value: 10.313
- type: recall_at_5
value: 12.754999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.216
- type: map_at_10
value: 42.588
- type: map_at_100
value: 43.702999999999996
- type: map_at_1000
value: 43.739
- type: map_at_3
value: 38.177
- type: map_at_5
value: 40.754000000000005
- type: mrr_at_1
value: 31.866
- type: mrr_at_10
value: 45.189
- type: mrr_at_100
value: 46.056000000000004
- type: mrr_at_1000
value: 46.081
- type: mrr_at_3
value: 41.526999999999994
- type: mrr_at_5
value: 43.704
- type: ndcg_at_1
value: 31.837
- type: ndcg_at_10
value: 50.178
- type: ndcg_at_100
value: 54.98800000000001
- type: ndcg_at_1000
value: 55.812
- type: ndcg_at_3
value: 41.853
- type: ndcg_at_5
value: 46.153
- type: precision_at_1
value: 31.837
- type: precision_at_10
value: 8.43
- type: precision_at_100
value: 1.1119999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.911000000000001
- type: recall_at_1
value: 28.216
- type: recall_at_10
value: 70.8
- type: recall_at_100
value: 91.857
- type: recall_at_1000
value: 97.941
- type: recall_at_3
value: 49.196
- type: recall_at_5
value: 59.072
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.22800000000001
- type: map_at_10
value: 85.115
- type: map_at_100
value: 85.72
- type: map_at_1000
value: 85.737
- type: map_at_3
value: 82.149
- type: map_at_5
value: 84.029
- type: mrr_at_1
value: 81.96
- type: mrr_at_10
value: 88.00200000000001
- type: mrr_at_100
value: 88.088
- type: mrr_at_1000
value: 88.089
- type: mrr_at_3
value: 87.055
- type: mrr_at_5
value: 87.715
- type: ndcg_at_1
value: 82.01
- type: ndcg_at_10
value: 88.78
- type: ndcg_at_100
value: 89.91
- type: ndcg_at_1000
value: 90.013
- type: ndcg_at_3
value: 85.957
- type: ndcg_at_5
value: 87.56
- type: precision_at_1
value: 82.01
- type: precision_at_10
value: 13.462
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.732000000000003
- type: recall_at_1
value: 71.22800000000001
- type: recall_at_10
value: 95.69
- type: recall_at_100
value: 99.531
- type: recall_at_1000
value: 99.98
- type: recall_at_3
value: 87.632
- type: recall_at_5
value: 92.117
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 52.31768034366916
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.640266772723606
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.7780000000000005
- type: map_at_10
value: 12.299
- type: map_at_100
value: 14.363000000000001
- type: map_at_1000
value: 14.71
- type: map_at_3
value: 8.738999999999999
- type: map_at_5
value: 10.397
- type: mrr_at_1
value: 23.599999999999998
- type: mrr_at_10
value: 34.845
- type: mrr_at_100
value: 35.916
- type: mrr_at_1000
value: 35.973
- type: mrr_at_3
value: 31.7
- type: mrr_at_5
value: 33.535
- type: ndcg_at_1
value: 23.599999999999998
- type: ndcg_at_10
value: 20.522000000000002
- type: ndcg_at_100
value: 28.737000000000002
- type: ndcg_at_1000
value: 34.596
- type: ndcg_at_3
value: 19.542
- type: ndcg_at_5
value: 16.958000000000002
- type: precision_at_1
value: 23.599999999999998
- type: precision_at_10
value: 10.67
- type: precision_at_100
value: 2.259
- type: precision_at_1000
value: 0.367
- type: precision_at_3
value: 18.333
- type: precision_at_5
value: 14.879999999999999
- type: recall_at_1
value: 4.7780000000000005
- type: recall_at_10
value: 21.617
- type: recall_at_100
value: 45.905
- type: recall_at_1000
value: 74.42
- type: recall_at_3
value: 11.148
- type: recall_at_5
value: 15.082999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.22372750297885
- type: cos_sim_spearman
value: 79.40972617119405
- type: euclidean_pearson
value: 80.6101072020434
- type: euclidean_spearman
value: 79.53844217225202
- type: manhattan_pearson
value: 80.57265975286111
- type: manhattan_spearman
value: 79.46335611792958
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.43713315520749
- type: cos_sim_spearman
value: 77.44128693329532
- type: euclidean_pearson
value: 81.63869928101123
- type: euclidean_spearman
value: 77.29512977961515
- type: manhattan_pearson
value: 81.63704185566183
- type: manhattan_spearman
value: 77.29909412738657
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.59451537860527
- type: cos_sim_spearman
value: 82.97994638856723
- type: euclidean_pearson
value: 82.89478688288412
- type: euclidean_spearman
value: 83.58740751053104
- type: manhattan_pearson
value: 82.69140840941608
- type: manhattan_spearman
value: 83.33665956040555
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.00756527711764
- type: cos_sim_spearman
value: 81.83560996841379
- type: euclidean_pearson
value: 82.07684151976518
- type: euclidean_spearman
value: 82.00913052060511
- type: manhattan_pearson
value: 82.05690778488794
- type: manhattan_spearman
value: 82.02260252019525
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.13710262895447
- type: cos_sim_spearman
value: 87.26412811156248
- type: euclidean_pearson
value: 86.94151453230228
- type: euclidean_spearman
value: 87.5363796699571
- type: manhattan_pearson
value: 86.86989424083748
- type: manhattan_spearman
value: 87.47315940781353
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.0230597603627
- type: cos_sim_spearman
value: 84.93344499318864
- type: euclidean_pearson
value: 84.23754743431141
- type: euclidean_spearman
value: 85.09707376597099
- type: manhattan_pearson
value: 84.04325160987763
- type: manhattan_spearman
value: 84.89353071339909
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.75620824563921
- type: cos_sim_spearman
value: 87.15065513706398
- type: euclidean_pearson
value: 88.26281533633521
- type: euclidean_spearman
value: 87.51963738643983
- type: manhattan_pearson
value: 88.25599267618065
- type: manhattan_spearman
value: 87.58048736047483
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.74645319195137
- type: cos_sim_spearman
value: 65.29996325037214
- type: euclidean_pearson
value: 67.04297794086443
- type: euclidean_spearman
value: 65.43841726694343
- type: manhattan_pearson
value: 67.39459955690904
- type: manhattan_spearman
value: 65.92864704413651
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.31291020270801
- type: cos_sim_spearman
value: 85.86473738688068
- type: euclidean_pearson
value: 85.65537275064152
- type: euclidean_spearman
value: 86.13087454209642
- type: manhattan_pearson
value: 85.43946955047609
- type: manhattan_spearman
value: 85.91568175344916
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.93798118350695
- type: mrr
value: 95.93536274908824
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.594
- type: map_at_10
value: 66.81899999999999
- type: map_at_100
value: 67.368
- type: map_at_1000
value: 67.4
- type: map_at_3
value: 64.061
- type: map_at_5
value: 65.47
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.219
- type: mrr_at_100
value: 68.655
- type: mrr_at_1000
value: 68.684
- type: mrr_at_3
value: 66.22200000000001
- type: mrr_at_5
value: 67.289
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.275
- type: ndcg_at_100
value: 73.642
- type: ndcg_at_1000
value: 74.373
- type: ndcg_at_3
value: 66.521
- type: ndcg_at_5
value: 68.581
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.433
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.556
- type: precision_at_5
value: 16.8
- type: recall_at_1
value: 57.594
- type: recall_at_10
value: 83.622
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.64399999999999
- type: recall_at_5
value: 75.983
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85841584158416
- type: cos_sim_ap
value: 96.66996142314342
- type: cos_sim_f1
value: 92.83208020050125
- type: cos_sim_precision
value: 93.06532663316584
- type: cos_sim_recall
value: 92.60000000000001
- type: dot_accuracy
value: 99.85841584158416
- type: dot_ap
value: 96.6775307676576
- type: dot_f1
value: 92.69289729177312
- type: dot_precision
value: 94.77533960292581
- type: dot_recall
value: 90.7
- type: euclidean_accuracy
value: 99.86138613861387
- type: euclidean_ap
value: 96.6338454403108
- type: euclidean_f1
value: 92.92214357937311
- type: euclidean_precision
value: 93.96728016359918
- type: euclidean_recall
value: 91.9
- type: manhattan_accuracy
value: 99.86237623762376
- type: manhattan_ap
value: 96.60370449645053
- type: manhattan_f1
value: 92.91177970423253
- type: manhattan_precision
value: 94.7970863683663
- type: manhattan_recall
value: 91.10000000000001
- type: max_accuracy
value: 99.86237623762376
- type: max_ap
value: 96.6775307676576
- type: max_f1
value: 92.92214357937311
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 60.77977058695198
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.2725272535638
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.64052466362125
- type: mrr
value: 54.533067014684654
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.677624219206578
- type: cos_sim_spearman
value: 30.121368518123447
- type: dot_pearson
value: 30.69870088041608
- type: dot_spearman
value: 29.61284927093751
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.855
- type: map_at_100
value: 9.885
- type: map_at_1000
value: 23.416999999999998
- type: map_at_3
value: 0.637
- type: map_at_5
value: 1.024
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.067
- type: mrr_at_100
value: 93.067
- type: mrr_at_1000
value: 93.067
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 93.067
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 75.899
- type: ndcg_at_100
value: 55.115
- type: ndcg_at_1000
value: 48.368
- type: ndcg_at_3
value: 79.704
- type: ndcg_at_5
value: 78.39699999999999
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 79.60000000000001
- type: precision_at_100
value: 56.06
- type: precision_at_1000
value: 21.206
- type: precision_at_3
value: 84.667
- type: precision_at_5
value: 83.2
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.078
- type: recall_at_100
value: 13.297
- type: recall_at_1000
value: 44.979
- type: recall_at_3
value: 0.6689999999999999
- type: recall_at_5
value: 1.106
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.258
- type: map_at_10
value: 10.439
- type: map_at_100
value: 16.89
- type: map_at_1000
value: 18.407999999999998
- type: map_at_3
value: 5.668
- type: map_at_5
value: 7.718
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 51.159
- type: mrr_at_100
value: 51.714000000000006
- type: mrr_at_1000
value: 51.714000000000006
- type: mrr_at_3
value: 47.959
- type: mrr_at_5
value: 50.407999999999994
- type: ndcg_at_1
value: 29.592000000000002
- type: ndcg_at_10
value: 26.037
- type: ndcg_at_100
value: 37.924
- type: ndcg_at_1000
value: 49.126999999999995
- type: ndcg_at_3
value: 30.631999999999998
- type: ndcg_at_5
value: 28.571
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 22.857
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 1.529
- type: precision_at_3
value: 34.014
- type: precision_at_5
value: 29.796
- type: recall_at_1
value: 2.258
- type: recall_at_10
value: 16.554
- type: recall_at_100
value: 48.439
- type: recall_at_1000
value: 82.80499999999999
- type: recall_at_3
value: 7.283
- type: recall_at_5
value: 10.732
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.8858
- type: ap
value: 13.835684144362109
- type: f1
value: 53.803351693244586
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.50650820599886
- type: f1
value: 60.84357825979259
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.52131044852134
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59337187816654
- type: cos_sim_ap
value: 73.23925826533437
- type: cos_sim_f1
value: 67.34693877551021
- type: cos_sim_precision
value: 62.40432237730752
- type: cos_sim_recall
value: 73.13984168865434
- type: dot_accuracy
value: 85.31322644096085
- type: dot_ap
value: 72.30723963807422
- type: dot_f1
value: 66.47051612112296
- type: dot_precision
value: 62.0792305930845
- type: dot_recall
value: 71.53034300791556
- type: euclidean_accuracy
value: 85.61125350181797
- type: euclidean_ap
value: 73.32843720487845
- type: euclidean_f1
value: 67.36549633745895
- type: euclidean_precision
value: 64.60755813953489
- type: euclidean_recall
value: 70.36939313984169
- type: manhattan_accuracy
value: 85.63509566668654
- type: manhattan_ap
value: 73.16658488311325
- type: manhattan_f1
value: 67.20597386434349
- type: manhattan_precision
value: 63.60424028268551
- type: manhattan_recall
value: 71.2401055408971
- type: max_accuracy
value: 85.63509566668654
- type: max_ap
value: 73.32843720487845
- type: max_f1
value: 67.36549633745895
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.33779640625606
- type: cos_sim_ap
value: 84.83868375898157
- type: cos_sim_f1
value: 77.16506154017773
- type: cos_sim_precision
value: 74.62064005753327
- type: cos_sim_recall
value: 79.88912842623961
- type: dot_accuracy
value: 88.02732176815307
- type: dot_ap
value: 83.95089283763002
- type: dot_f1
value: 76.29635101196631
- type: dot_precision
value: 73.31771720613288
- type: dot_recall
value: 79.52725592854944
- type: euclidean_accuracy
value: 88.44452206310397
- type: euclidean_ap
value: 84.98384576824827
- type: euclidean_f1
value: 77.29311047696697
- type: euclidean_precision
value: 74.51232583065381
- type: euclidean_recall
value: 80.28949799815214
- type: manhattan_accuracy
value: 88.47362906042613
- type: manhattan_ap
value: 84.91421462218432
- type: manhattan_f1
value: 77.05107637204792
- type: manhattan_precision
value: 74.74484256243214
- type: manhattan_recall
value: 79.50415768401602
- type: max_accuracy
value: 88.47362906042613
- type: max_ap
value: 84.98384576824827
- type: max_f1
value: 77.29311047696697
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
For more details, please refer to our GitHub: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:
- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
## News
- 1/30/2024: Release **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularity (input length up to 8192), and **M**ulti-functionality (unification of dense, lexical, and multi-vector/ColBERT retrieval).
It is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.
[Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
- 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
- **Updated embedding models**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed: just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, the cross-encoder is widely used to re-rank the top-k documents retrieved by other, simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents and obtain the final top-3 results, as sketched below.
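A minimal retrieve-then-rerank sketch with FlagEmbedding (illustrative only: the corpus, query, and candidate sizes are placeholder assumptions, not part of the official documentation):
```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

corpus = ["passage 1 ...", "passage 2 ...", "passage 3 ..."]   # your document collection
query = "what is a panda?"

# Stage 1: dense retrieval with a bge embedding model
embedder = FlagModel('BAAI/bge-large-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
candidate_ids = np.argsort(-(q_emb @ p_emb.T)[0])[:100]        # keep up to the top 100 candidates

# Stage 2: re-rank the candidates with the cross-encoder and keep the best 3
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
scores = reranker.compute_score([[query, corpus[i]] for i in candidate_ids])
top3 = [corpus[i] for i in candidate_ids[np.argsort(scores)[::-1][:3]]]
print(top3)
```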
All models have been uploaded to the Hugging Face Hub; you can find them at https://huggingface.co/BAAI.
If you cannot access the Hugging Face Hub, you can also download the models at https://model.baai.ac.cn/models .
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity distribution of the current BGE model lies roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
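A rough way to inspect this distribution on your own data (a sketch, assuming the default normalized bge embeddings, so the dot product equals cosine similarity):
```python
import numpy as np
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
pairs = [("样例数据-1", "样例数据-2"), ("样例数据-3", "样例数据-4")]   # replace with pairs sampled from your data
emb_a = model.encode([a for a, _ in pairs])
emb_b = model.encode([b for _, b in pairs])
scores = (emb_a * emb_b).sum(axis=1)            # row-wise cosine similarity
print(np.percentile(scores, [50, 90, 95]))      # inspect the distribution, then choose a cutoff (e.g. 0.8-0.9)
```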
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved the retrieval ability when no instruction is used;
dropping the instruction causes only a slight degradation in retrieval performance compared with using it.
So, for convenience, you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, the documents/passages do not need the instruction.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more installation methods.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query
# corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable, for example:
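A minimal sketch (the device index is an illustrative assumption; note the environment variable must be set before the model is loaded):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"        # expose only GPU 0; use "" to disable all GPUs
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
embeddings = model.encode(["样例数据-1", "样例数据-2"])
```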
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for instructions),
but the instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="Represent this sentence for searching relevant passages: "
)
model.query_instruction = "Represent this sentence for searching relevant passages: "
```
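Once constructed, the wrapper follows the standard LangChain embeddings interface (a short sketch; the example texts are placeholders):
```python
query_embedding = model.embed_query("What are pandas?")   # the query instruction is prepended automatically
doc_embeddings = model.embed_documents(["The giant panda is a bear species endemic to China."])  # passages get no instruction
print(len(query_embedding), len(doc_embeddings[0]))
```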
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### Usage for Reranker
Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with a cross-entropy loss, so the relevance score is not bounded to a specific range.
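If your application needs scores in a fixed range, one simple option (our suggestion, not a documented guarantee of the API) is to map the raw scores through a sigmoid, which preserves their ordering:
```python
import torch

raw_scores = torch.tensor([-4.1, 2.3])      # example raw relevance scores from the reranker
normalized = torch.sigmoid(raw_scores)      # mapped into (0, 1); the relative ordering is unchanged
print(normalized)
```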
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
#### Usage of the ONNX files
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-small-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-small-en-v1.5')
model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-small-en-v1.5', file_name="onnx/model.onnx")
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
model_output_ort = model_ort(**encoded_input)
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# model_output and model_output_ort are identical
```
#### Usage via infinity
It's also possible to deploy the ONNX files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.
We recommend `device="cuda", engine="torch"` with flash attention on GPU, and `device="cpu", engine="optimum"` for ONNX inference.
```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs
sentences = ["Embed this is sentence via Infinity.", "Paris is in France."]
engine = AsyncEmbeddingEngine.from_args(
    EngineArgs(model_name_or_path="BAAI/bge-small-en-v1.5", device="cpu", engine="optimum")  # or engine="torch"
)
async def main():
async with engine:
embeddings, usage = await engine.embed(sentences=sentences)
asyncio.run(main())
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the benchmark C-MTEB for Chinese text embeddings, which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
The cross-encoder performs full attention over the input pair,
which is more accurate than the embedding model (i.e., bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge. | [
"BEAR",
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
ntc-ai/SDXL-LoRA-slider.dancing-with-joy | ntc-ai | text-to-image | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | 1,704,085,141,000 | 2024-01-01T04:59:04 | 15 | 0 | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
thumbnail: images/evaluate/dancing with joy.../dancing with joy_17_3.0.png
widget:
- text: dancing with joy
output:
url: images/dancing with joy_17_3.0.png
- text: dancing with joy
output:
url: images/dancing with joy_19_3.0.png
- text: dancing with joy
output:
url: images/dancing with joy_20_3.0.png
- text: dancing with joy
output:
url: images/dancing with joy_21_3.0.png
- text: dancing with joy
output:
url: images/dancing with joy_22_3.0.png
inference: false
instance_prompt: dancing with joy
---
# ntcai.xyz slider - dancing with joy (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/dancing with joy_17_-3.0.png" width=256 height=256 /> | <img src="images/dancing with joy_17_0.0.png" width=256 height=256 /> | <img src="images/dancing with joy_17_3.0.png" width=256 height=256 /> |
| <img src="images/dancing with joy_19_-3.0.png" width=256 height=256 /> | <img src="images/dancing with joy_19_0.0.png" width=256 height=256 /> | <img src="images/dancing with joy_19_3.0.png" width=256 height=256 /> |
| <img src="images/dancing with joy_20_-3.0.png" width=256 height=256 /> | <img src="images/dancing with joy_20_0.0.png" width=256 height=256 /> | <img src="images/dancing with joy_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
dancing with joy
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.dancing-with-joy', weight_name='dancing with joy.safetensors', adapter_name="dancing with joy")
# Activate the LoRA
pipe.set_adapters(["dancing with joy"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, dancing with joy"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 780 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
| [
"CRAFT"
] | Non_BioNLP |
ktangri/gpt-neo-demo | ktangri | text-generation | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"text generation",
"the Pile",
"causal-lm",
"en",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-07-21T15:20:09 | 27 | 1 | ---
datasets:
- the Pile
language:
- en
license: apache-2.0
tags:
- text generation
- pytorch
- the Pile
- causal-lm
---
# GPT-Neo 2.7B (By EleutherAI)
## Model Description
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained for 420 billion tokens over 400,000 steps. It was trained as an autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
Trained to predict the next token in a sequence, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
All evaluations were done using our [evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness). Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our [Discord](https://discord.gg/vtRgjbM).
### Linguistic Reasoning
| Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag |
| ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- |
| GPT-Neo 1.3B | 0.7527 | 6.159 | 13.10 | 7.498 | 57.23% | 55.01% | 38.66% |
| GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% |
| **GPT-Neo 2.7B** | **0.7165** | **5.646** | **11.39** | **5.626** | **62.22%** | **56.50%** | **42.73%** |
| GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% |
### Physical and Scientific Reasoning
| Model and Size | MathQA | PubMedQA | Piqa |
| ---------------- | ---------- | ---------- | ----------- |
| GPT-Neo 1.3B | 24.05% | 54.40% | 71.11% |
| GPT-2 1.5B | 23.64% | 58.33% | 70.78% |
| **GPT-Neo 2.7B** | **24.72%** | **57.54%** | **72.14%** |
| GPT-3 Ada | 24.29% | 52.80% | 68.88% |
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
To cite the codebase that this model was trained with, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and Gao, Leo and Wang, Phil and Leahy, Connor and Biderman, Stella},
title = {{GPT-Neo}: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow},
url = {http://github.com/eleutherai/gpt-neo},
version = {1.0},
year = {2021},
}
``` | [
"PUBMEDQA"
] | Non_BioNLP |
IBI-CAAI/MELT-Mistral-3x7B-Instruct-v0.1 | IBI-CAAI | text-generation | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,704,060,803,000 | 2024-01-06T11:09:17 | 3 | 0 | ---
language:
- en
library_name: transformers
license: apache-2.0
---
# Model Card MELT-Mistral-3x7B-Instruct-v0.1
Medical Education Language Transformer (MELT)
# Model Type:
The MELT-Mistral-3x7B-Instruct-v0.1 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned using publicly available medical data.
MELT-Mistral-3x7B-Instruct-v0.1 demonstrated an average 19.7% improvement over Mistral-3x7B-Instruct-v0.1 (an MoE of 3 x Mistral-7B-Instruct-v0.1) across 3 medical examination benchmarks: USMLE, Indian AIIMS, and NEET.
This is an MoE model; thanks to [Charles Goddard](https://huggingface.co/chargoddard) for the code/tools.
## Model Details
The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain.
While the model was evaluated using publicly available [USMLE](https://www.usmle.org/), Indian AIIMS, and NEET medical examination example questions, its use is intended to be more broadly applicable.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Center for Applied AI](https://caai.ai.uky.edu/)
- **Funded by:** [Institute or Biomedical Informatics](https://www.research.uky.edu/IBI)
- **Model type:** LLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** An MoE of 3 x [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
## Uses
MELT is intended for research purposes only. MELT models are best suited for prompts using a QA or chat format.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
MELT is intended for research purposes only and should not be used for medical advice.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
MELT was trained using publicly available collections, which likely contain biased and inaccurate information. The training and evaluation datasets have not been evaluated for content or accuracy.
## How to Get Started with the Model
Use this model like you would the Mixtral-8x7B-Instruct-v0.1 model, for example:
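A minimal sketch with the standard `transformers` causal-LM API (the chat-template call assumes the tokenizer ships a chat template; otherwise format the prompt manually, and the generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IBI-CAAI/MELT-Mistral-3x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "List common symptoms of iron-deficiency anemia."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```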
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The following datasets were used for training:
[Expert Med](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/Q3A969)
[MedQA train](https://huggingface.co/datasets/bigbio/med_qa)
[MedMCQA train](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing)
[LiveQA](https://github.com/abachaa/LiveQA_MedicalTask_TREC2017)
[MedicationQA](https://huggingface.co/datasets/truehealth/medicationqa)
[MMLU clinical topics](https://huggingface.co/datasets/Stevross/mmlu)
[Medical Flashcards](https://huggingface.co/datasets/medalpaca/medical_meadow_medical_flashcards)
[Wikidoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc)
[Wikidoc Patient Information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information)
[MEDIQA](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa)
[MMMLU](https://huggingface.co/datasets/medalpaca/medical_meadow_mmmlu)
[icliniq 10k](https://drive.google.com/file/d/1ZKbqgYqWc7DJHs3N9TQYQVPdDQmZaClA/view?usp=sharing)
[HealthCare Magic 100k](https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing)
[GenMedGPT-5k](https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing)
[Mental Health Conversational](https://huggingface.co/datasets/heliosbrahma/mental_health_conversational_dataset)
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Training Hyperparameters
- **Lora Rank:** 64
- **Lora Alpha:** 16
- **Lora Targets:** "o_proj","down_proj","v_proj","gate_proj","up_proj","k_proj","q_proj"
- **LR:** 2e-4
- **Epoch:** 3
- **Precision:** bf16
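These settings correspond to a typical LoRA fine-tuning setup; a sketch of how they might map onto a `peft` `LoraConfig` is shown below (an illustration only, since the card does not include the actual training code):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                  # Lora Rank
    lora_alpha=16,         # Lora Alpha
    target_modules=["o_proj", "down_proj", "v_proj", "gate_proj", "up_proj", "k_proj", "q_proj"],
    task_type="CAUSAL_LM",
)
# Trainer-side settings from the card: learning_rate=2e-4, num_train_epochs=3, bf16=True
```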
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
MELT-Mistral-3x7B-Instruct-v0.1 demonstrated an average 19.7% improvement over Mistral-3x7B-Instruct-v0.1 (an MoE of 3 x Mistral-7B-Instruct-v0.1) across 3 medical examination benchmarks: USMLE, Indian AIIMS, and NEET.
### Mistral-3x7B-Instruct-v0.1
- **medqa:** {'base': {'Average': 42.88, 'STEP-1': 43.51, 'STEP-2&3': 42.16}}
- **mausmle:** {'base': {'Average': 45.88, 'STEP-1': 45.88, 'STEP-2': 45.98, 'STEP-3': 45.79}}
- **medmcqa:** {'base': {'Average': 45.87, 'MEDICINE': 44.02, 'OPHTHALMOLOGY': 54.76, 'ANATOMY': 48.63, 'PATHOLOGY': 54.26, 'PHYSIOLOGY': 46.21, 'DENTAL': 41.47, 'RADIOLOGY': 53.57, 'BIOCHEMISTRY': 52.89, 'ANAESTHESIA': 47.83, 'GYNAECOLOGY': 37.91, 'PHARMACOLOGY': 48.88, 'SOCIAL': 42.22, 'PEDIATRICS': 41.67, 'ENT': 55.26, 'SURGERY': 47.58, 'MICROBIOLOGY': 43.84, 'FORENSIC': 48.84, 'PSYCHIATRY': 77.78, 'SKIN': 60.0, 'ORTHOPAEDICS': 50.0, 'UNKNOWN': 100.0}}
- **average:** 44.87%
### MELT-Mistral-3x7B-Instruct-v0.1
- **medqa:** {'base': {'Average': 52.24, 'STEP-1': 51.92, 'STEP-2&3': 52.61}}
- **mausmle:** {'base': {'Average': 58.06, 'STEP-1': 54.12, 'STEP-2': 58.62, 'STEP-3': 60.75}}
- **medmcqa:** {'base': {'Average': 50.73, 'MEDICINE': 48.91, 'OPHTHALMOLOGY': 59.52, 'ANATOMY': 51.37, 'PATHOLOGY': 61.63, 'PHYSIOLOGY': 56.82, 'DENTAL': 42.42, 'RADIOLOGY': 62.5, 'BIOCHEMISTRY': 62.81, 'ANAESTHESIA': 39.13, 'GYNAECOLOGY': 46.41, 'PHARMACOLOGY': 56.74, 'SOCIAL': 46.67, 'PEDIATRICS': 50.76, 'ENT': 52.63, 'SURGERY': 53.23, 'MICROBIOLOGY': 46.58, 'FORENSIC': 62.79, 'PSYCHIATRY': 77.78, 'SKIN': 70.0, 'ORTHOPAEDICS': 50.0, 'UNKNOWN': 100.0}}
- **average:** 53.7%
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[MedQA test](https://huggingface.co/datasets/bigbio/med_qa)
[MedMCQA test](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing)
[MA USMLE](https://huggingface.co/datasets/medalpaca/medical_meadow_usmle_self_assessment)
## Disclaimer:
The use of large language models, such as this one, is provided without warranties or guarantees of any kind. While every effort has been made to ensure accuracy, completeness, and reliability of the information generated, it should be noted that these models may produce responses that are inaccurate, outdated, or inappropriate for specific purposes. Users are advised to exercise discretion and judgment when relying on the information generated by these models. The outputs should not be considered as professional, legal, medical, financial, or any other form of advice. It is recommended to seek expert advice or consult appropriate sources for specific queries or critical decision-making. The creators, developers, and providers of these models disclaim any liability for damages, losses, or any consequences arising from the use, reliance upon, or interpretation of the information provided by these models. The user assumes full responsibility for their interactions and usage of the generated content. By using these language models, users agree to indemnify and hold harmless the developers, providers, and affiliates from any claims, damages, or liabilities that may arise from their use. Please be aware that these models are constantly evolving, and their capabilities, limitations, and outputs may change over time without prior notice. Your use of this language model signifies your acceptance and understanding of this disclaimer.
| [
"MEDQA",
"MEDICAL DATA"
] | BioNLP |